Writing efficient queries that follow best practices is the mark of a good database developer. However, for a database to perform at the intended level, there are more knobs to turn than the queries alone. Let us take a brief tour of checklists that help everything in the database world run smoothly and efficiently.
Database Design
As simple as it may sound, the choice of data types for table columns has a real impact on performance, and unnecessary referential checks in an OLTP system can slow it down further.
- Review table structures, indexes, and constraints, and re-evaluate them whenever data access patterns change. Do you really need all of those foreign keys and constraints in a logging database?
- Minimize the use of triggers due to their hidden cost.
- Create indexes following best practices. Columnstore indexes are almost a given in the data warehouse now, and remember that even OLTP systems can benefit from nonclustered columnstore indexes.
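As a sketch of that last point, a nonclustered columnstore index can be layered on top of a rowstore OLTP table to speed up analytical scans. The table and column names below are hypothetical:

```sql
-- Hypothetical OLTP table; the columnstore index accelerates aggregations
-- over it without changing the underlying rowstore clustered index.
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_OrderDetail
ON Sales.OrderDetail (OrderID, ProductID, Quantity, UnitPrice);
```

On SQL Server 2016 and later such an index stays updatable, so the OLTP workload continues to write to the table while reports scan the columnstore copy.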
Review Database Server Configuration
- Check your SQL Server memory configuration – SQL Server uses memory for resources like the buffer pool, plan cache, etc. Leaving max server memory at its default (effectively unlimited) will certainly put stress on the OS and other applications. SQL Server is a beast: it hangs on to the memory it has already acquired and only releases it to the OS under memory pressure. Based on the resources available, configure min and max server memory for your instance. If you have stacked instances, configure memory for each one and make sure the total does not exceed the memory actually available to the server.
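A minimal sketch of setting max server memory with `sp_configure`; the 28 GB value is only an example for a hypothetical 32 GB server, leaving headroom for the OS:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Example value only: roughly 28 GB on a 32 GB box,
-- leaving the rest for the OS and other processes.
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;
```

Size the headroom to your own server; the right number depends on what else runs on the machine.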
- MAXDOP and cost threshold for parallelism – Most systems today have multiple processors. To strike the right balance between load on the machine and queries going parallel, test your workload to find the appropriate values for both settings.
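Both settings are instance-level `sp_configure` options. The numbers below are common starting points, not recommendations; test against your own workload:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Example starting points only; tune per workload and core count.
EXEC sp_configure 'max degree of parallelism', 8;
EXEC sp_configure 'cost threshold for parallelism', 50;  -- default of 5 is widely considered too low
RECONFIGURE;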
Keep those statistics updated
Outdated statistics can degrade query performance because they skew cardinality estimates. Starting with SQL Server 2016, the engine has made statistics easier to manage, but for larger tables statistics maintenance still needs tuning.
- Auto create/update statistics – Leverage SQL Server’s database options and enable automatic statistics creation and updates.
- For larger tables, use AUTO_UPDATE_STATISTICS_ASYNC to avoid queries being blocked while statistics refresh. If the tables are partitioned, enable incremental statistics (INCREMENTAL = ON) so only the changed partitions are rescanned. You can also run a separate SQL Agent job to update statistics on your own schedule.
- Find outdated statistics and refresh them
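The database options above, plus a query to spot stale statistics, can be sketched as follows. The 20% modification threshold is an arbitrary example, not a fixed rule:

```sql
-- Enable automatic statistics maintenance for the current database.
ALTER DATABASE CURRENT SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE CURRENT SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE CURRENT SET AUTO_UPDATE_STATISTICS_ASYNC ON;

-- Find statistics with many modifications since their last update.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name                   AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE sp.modification_counter > 0.2 * sp.rows   -- example threshold: 20% of rows changed
ORDER BY sp.modification_counter DESC;
```

Anything this query surfaces is a candidate for `UPDATE STATISTICS`, either ad hoc or from the SQL Agent job mentioned above.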
Keep index fragmentation to a minimum
Indexes help queries run faster, but if they aren’t implemented well they can bring the database to its knees, and you will pay the price.
- Drop underutilized indexes and create missing ones – Whether because query patterns change or because developers simply create too many indexes, identify the indexes the SQL engine no longer uses and drop them. Use the right tools to identify and create missing indexes.
- Repair index fragmentation – The key is measuring the level of fragmentation, as that determines the right way to repair it (reorganize versus rebuild).
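Both checks come from the index DMVs. A sketch, with hypothetical index and table names in the maintenance statements, and the common 5–30% reorganize / 30%+ rebuild rule of thumb:

```sql
-- Nonclustered indexes that are written to but never read (candidates to drop).
SELECT OBJECT_NAME(i.object_id) AS table_name, i.name AS index_name,
       ius.user_updates
FROM sys.indexes AS i
JOIN sys.dm_db_index_usage_stats AS ius
  ON ius.object_id = i.object_id
 AND ius.index_id  = i.index_id
 AND ius.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
  AND ius.user_seeks + ius.user_scans + ius.user_lookups = 0;

-- Fragmentation levels for indexes big enough to matter.
SELECT OBJECT_NAME(ips.object_id) AS table_name, i.name AS index_name,
       ips.avg_fragmentation_in_percent, ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.page_count > 1000;  -- ignore tiny indexes

-- Rule of thumb: reorganize at 5-30% fragmentation, rebuild above 30%.
ALTER INDEX IX_Example ON dbo.MyTable REORGANIZE;
ALTER INDEX IX_Example ON dbo.MyTable REBUILD WITH (ONLINE = ON);  -- ONLINE requires Enterprise edition
```

Note that `sys.dm_db_index_usage_stats` resets on instance restart, so make the drop decision only after observing a full business cycle.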
I never recommend turning AUTO_CLOSE or AUTO_SHRINK on. Both are resource hogs, and I have only seen and heard horror stories about them.
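A quick check that flags any database with either option enabled, plus the statements to turn them off (AdventureWorks is an example database name):

```sql
-- Flag databases with AUTO_CLOSE or AUTO_SHRINK enabled.
SELECT name, is_auto_close_on, is_auto_shrink_on
FROM sys.databases
WHERE is_auto_close_on = 1 OR is_auto_shrink_on = 1;

-- Turn them off; substitute your own database name.
ALTER DATABASE AdventureWorks SET AUTO_CLOSE OFF;
ALTER DATABASE AdventureWorks SET AUTO_SHRINK OFF;
```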
Honorable mentions:
Database backups & schedules – Compress that backup
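A minimal sketch of a compressed backup; the database name and disk path are placeholders:

```sql
-- COMPRESSION shrinks the backup and usually speeds it up (less I/O);
-- CHECKSUM validates pages as they are written.
BACKUP DATABASE AdventureWorks
TO DISK = N'D:\Backups\AdventureWorks.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;

-- Or make compression the instance-wide default.
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```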
Database files and filegroups are equally critical for optimal database performance. This reminds me of a horror story from 2013, when 'a dude' (wink wink) had the data and transaction log files all on the C drive.
These are some of the top-level things you can quickly browse through to identify whether anything needs a more serious look.