The 13th article: common MS SQL Server DBA errors


Foreword


This short article gives an overview of mistakes I have had to deal with, the consequences of which were very tangible.

The article was written so that these mistakes are not repeated. And, as they say, negative experience is also experience, and sometimes it is even more valuable than positive experience.

Mistakes


  1. Percentage increment of database files (database)

    Since file growth (whether of data or of the transaction log) is a very resource-intensive operation, it may seem like a good idea to set this growth as a percentage. I agree that many recommendations say it is better to set a fixed increment in MB rather than a percentage, but they do not explain why. Based on practice, it is not recommended to set database file growth above 1 GB, because MS SQL Server will allocate 1 GB twice faster than it will allocate 2 GB at once. Also (again based on practice), if you allocate less than 32 MB, sooner or later the database will simply start to stall. So we settled on a fixed file increment between 32 and 1024 MB. But why can the growth of database files still not be specified as a percentage? It turns out that as soon as the computed increment becomes less than 1 MB, the DBMS rounds it down to 0 MB and stops growing the file. The result is system downtime. To find out by how much a file should be grown, it is enough to analyze how many MB each file grows per day, and set a corresponding value in the range from 32 to 1024 MB. Statistics on database file growth can be collected as described in the sources below.
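    A fixed increment in this range can be set with ALTER DATABASE. A minimal sketch — the database and logical file names here are hypothetical placeholders; substitute your own (see sys.database_files for the logical names):

```sql
-- Hypothetical database and logical file names; replace with your own.
-- Set a fixed 64 MB growth increment instead of a percentage.
ALTER DATABASE [MyDatabase]
    MODIFY FILE (NAME = N'MyDatabase_Data', FILEGROWTH = 64MB);

ALTER DATABASE [MyDatabase]
    MODIFY FILE (NAME = N'MyDatabase_Log', FILEGROWTH = 64MB);
```

    The 64 MB value is only an example within the 32–1024 MB range discussed above; pick it from your own daily growth statistics.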
  2. A lot of foreign keys to a table

    Have you ever tried looking at the execution plan when deleting even a single row from a table referenced by nearly a hundred other tables? You would be surprised how many nested loops it contains, and all of them are foreign-key checks. Therefore, if the tables are large (millions of rows), it is better to disable the foreign keys that reference the table from which the data will be deleted, delete all the necessary and related data, and then re-enable the foreign keys. The situation is similar with cascading updates and deletes: if there are many foreign keys (hundreds), then deleting even one row can lead to a long and very extensive lock.
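    The disable/delete/re-enable cycle can be sketched as follows. The table and constraint names are hypothetical, and the re-enable uses WITH CHECK so the constraint is re-validated and stays trusted by the optimizer:

```sql
-- Hypothetical tables: dbo.Child has a FK pointing at dbo.Parent.
-- Disable the referencing foreign key before the mass delete.
ALTER TABLE dbo.Child NOCHECK CONSTRAINT FK_Child_Parent;

-- Delete the related rows first, then the parent rows.
DELETE FROM dbo.Child  WHERE ParentID IN (SELECT ID FROM dbo.OldParents);
DELETE FROM dbo.Parent WHERE ID       IN (SELECT ID FROM dbo.OldParents);

-- WITH CHECK re-validates existing rows, so the constraint is trusted again.
ALTER TABLE dbo.Child WITH CHECK CHECK CONSTRAINT FK_Child_Parent;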
  3. Many superfluous indexes

    Recommendations often state that when creating foreign keys you should also build indexes on them, especially when those keys are used in joins. You must verify that such indexes are actually used; otherwise these extra indexes will only slow down every data-modification operation. You can check index usage with the following query:

    The code
    select DB_NAME(t.database_id)		as [DBName]
    	 , SCHEMA_NAME(obj.schema_id)	as [SchemaName]
    	 , OBJECT_NAME(t.object_id)		as [ObjectName]
    	 , obj.Type						as [ObjectType]
    	 , obj.Type_Desc				as [ObjectTypeDesc]
    	 , ind.name						as [IndexName]
    	 , ind.Type						as IndexType
    	 , ind.Type_Desc				as IndexTypeDesc
    	 , ind.Is_Unique				as IndexIsUnique
    	 , ind.is_primary_key			as IndexIsPK
    	 , ind.is_unique_constraint		as IndexIsUniqueConstraint
    	 , t.[Database_ID]
    	 , t.[Object_ID]
    	 , t.[Index_ID]
    	 , t.Last_User_Seek
    	 , t.Last_User_Scan
    	 , t.Last_User_Lookup
    	 , t.Last_System_Seek
    	 , t.Last_System_Scan
    	 , t.Last_System_Lookup
    from sys.dm_db_index_usage_stats as t
    inner join sys.objects as obj on t.[object_id]=obj.[object_id]
    inner join sys.indexes as ind on t.[object_id]=ind.[object_id] and t.index_id=ind.index_id
    where (t.last_user_seek   is null or t.last_user_seek   < DATEADD(year, -1, GETDATE()))
      and (t.last_user_scan   is null or t.last_user_scan   < DATEADD(year, -1, GETDATE()))
      and (t.last_user_lookup is null or t.last_user_lookup < DATEADD(year, -1, GETDATE()))
      and t.database_id>4 --system databases are excluded
      and t.[object_id]>0 --the one-year threshold is an example; adjust to your retention of usage stats
    


  4. Misallocation of resources

    Recommendations often state that the transaction log and the database data file must be placed on separate storage devices. If you use RAID 10 with four or more SSD disks, there is no point in separating the files from each other. For even greater speed, tempdb can, if necessary, be placed on a disk carved out of RAM. Also, giving the DBMS too much RAM will cause it to fill all of that memory with stale query plans.
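    The memory ceiling is controlled with sp_configure. A minimal sketch — the 16384 MB cap is purely an illustrative value, to be sized for the actual host and its other workloads:

```sql
-- 'max server memory (MB)' is an advanced option, so expose it first.
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;

-- Cap SQL Server memory so the OS and other processes keep enough RAM.
-- 16384 MB is an example value, not a recommendation.
EXEC sys.sp_configure N'max server memory (MB)', 16384;
RECONFIGURE;
```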
  5. Bad backups

    Corny as it may sound, you always need not only to check the created backups, but also to copy them to a test server and restore them there. And all of this must be automated, with subsequent notification of administrators about both failed and successful restores.
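    A first, cheap check can be scripted with RESTORE VERIFYONLY (the backup path below is a hypothetical placeholder). Note that this only proves the backup file is readable; only the full test-server RESTORE described above proves it is actually restorable:

```sql
-- Hypothetical backup path; replace with your own.
-- WITH CHECKSUM validates page checksums if the backup was taken WITH CHECKSUM.
RESTORE VERIFYONLY
FROM DISK = N'\\backup-share\MyDatabase\MyDatabase_full.bak'
WITH CHECKSUM;
```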
  6. False fault tolerance

    Before building a cluster of two or more servers, you need to make sure that the data storage system is also fault-tolerant, because if the storage fails, the entire fault tolerance of the cluster drops to zero.
  7. Sophisticated diagnostics without simple checks

    When system downtime occurs, you must first check the MS SQL Server logs, and only then dig deeper, since the cause of the problem is often recorded right there. Skipping the simple checks is like running complex diagnostics on a patient without first taking their temperature.
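    The error log can be searched directly from T-SQL with the (undocumented but long-standing) sp_readerrorlog procedure; the search string here is just an example filter:

```sql
-- Read the current SQL Server error log (file 0; higher numbers are older
-- archives). Second argument: 1 = SQL Server log, 2 = SQL Agent log.
-- Third argument filters to lines containing the given text.
EXEC sys.sp_readerrorlog 0, 1, N'error';
```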
  8. Forgotten tables

    Tables can swell with old, unneeded data, which should either be archived into a separate database or deleted. Tables can also simply fall out of use. It is necessary to keep track of both.

Those are all 8 pieces of negative experience I have encountered.
Do not repeat the mistakes described above.

Sources:


» SQL Documentation
» Automation of data collection on the growth of tables and files of all databases
