Earlier this week I was tasked with troubleshooting some SQL Server performance issues. I am developing a highly multi-threaded application that issues updates and inserts asynchronously against my SQL Server instance. I was running into issues with some threads blocking others, and I was also trying to figure out whether we needed to re-evaluate the indexing strategy on some of our tables.
Troubleshooting SQL Server is not always fun. In my experience, it's not always intuitive to find the root of your issues, and getting information out of some of the system diagnostic tables can be a pain if you are not familiar with them.
I am writing this post to share what I have learned over the last few days and to hopefully help anyone else who finds themselves in a similar situation. My goal here is to provide you with some basic strategies and queries (SQL Server specific) that you can use to diagnose your SQL Server performance issues. (You will need the VIEW SERVER STATE permission to run many of these queries.)
My problem: my .NET application was throwing seemingly random timeout exceptions when trying to establish multiple connections to SQL Server in a multi-threaded environment. I needed to see what open connections there were to the particular database I was concerned with on a particular server.
For those of you not familiar with SQL Server's internals, the system stored procedure sp_who2 will give you a dump of all this information (and much more) for every session on that server instance, across all databases.
This is a great place to start troubleshooting, but finding what you are looking for in this stored proc's output is often like finding a needle in a haystack, and, to make it worse, you cannot filter a stored proc's results with a WHERE predicate.
However, what you can do is declare a table variable with the columns the stored proc returns, insert the proc's output into it, and then query that table variable just like you would any other table. For example:
DECLARE @Table TABLE
(
    SPID INT,
    Status VARCHAR(MAX),
    [Login] VARCHAR(MAX),
    HostName VARCHAR(MAX),
    BlkBy VARCHAR(MAX),
    DBName VARCHAR(MAX),
    Command VARCHAR(MAX),
    CPUTime INT,
    DiskIO INT,
    LastBatch VARCHAR(MAX),
    ProgramName VARCHAR(MAX),
    SPID_1 INT,
    REQUESTID INT
);

INSERT INTO @Table
EXEC sp_who2;

SELECT *
FROM @Table AS t
WHERE 1 = 1; -- your predicate goes here
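As a concrete usage sketch (run in the same batch as the declaration above, since table variables are batch-scoped; 'YourDatabase' is a placeholder name), you could pull just the busiest sessions in one database:

```sql
-- Filter the captured sp_who2 output for one database, busiest sessions first
SELECT t.SPID, t.Status, t.BlkBy, t.Command, t.CPUTime
FROM @Table AS t
WHERE t.DBName = 'YourDatabase'  -- placeholder database name
ORDER BY t.CPUTime DESC;
```

The BlkBy column is especially handy here: any row where it contains a SPID points you straight at the session doing the blocking.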
The next query listed below is an easy and efficient way to kill existing connections to a database that has many open or long-running connections. You may find it useful when a connection is locking a table and you aren't sure why it has remained open. Be warned, however, that running this query will indeed close all connections, so make sure no one other than yourself is connected to this database. (You can also find out who is connected to which database, and how many connections they have open, by querying the stored proc mentioned above.)
ALTER DATABASE YourDatabase SET SINGLE_USER WITH ROLLBACK AFTER 60 SECONDS;
ALTER DATABASE YourDatabase SET MULTI_USER;
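Before going nuclear, it can help to see who actually holds those connections. A sketch against the sys.dm_exec_sessions DMV (assuming SQL Server 2012 or later for the database_id column; 'YourDatabase' is a placeholder) counts open sessions per login:

```sql
-- Open sessions per login for one database
SELECT s.login_name,
       COUNT(*) AS open_sessions
FROM sys.dm_exec_sessions AS s
WHERE s.database_id = DB_ID('YourDatabase')  -- placeholder database name
GROUP BY s.login_name
ORDER BY open_sessions DESC;
```

If only a single runaway session is the problem, you can also terminate just that one with KILL followed by its SPID instead of bouncing everyone.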
These queries are great for quick troubleshooting, but often a developer needs much more detail to find the real root of a problem. For performance work specifically, you will want details such as an application's most expensive queries, index performance, stored proc performance, and most-executed queries. Finding these can be a real pain if you are not familiar with the SQL Server system views that track all of this information for you. Hopefully, by exploring the queries listed below (for the most common problems), you will find some insight into your particular issue.
What it does: Lists the top statements by average input/output usage for the current database
SELECT TOP (50)
    OBJECT_NAME(qt.objectid) AS [SP Name],
    (qs.total_logical_reads + qs.total_logical_writes) / qs.execution_count AS [Avg IO],
    SUBSTRING(qt.[text], (qs.statement_start_offset / 2) + 1,
        ((CASE WHEN qs.statement_end_offset = -1
               THEN LEN(CONVERT(NVARCHAR(MAX), qt.[text])) * 2
               ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS [Query Text]
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.[dbid] = DB_ID()
ORDER BY [Avg IO] DESC
OPTION (RECOMPILE);
Notes for how to use: Helps you find the most expensive statements for I/O by SP
What it does: Possible Bad NC Indexes (writes > reads)
SELECT OBJECT_NAME(s.[object_id]) AS [Table Name],
    i.name AS [Index Name],
    i.index_id,
    user_updates AS [Total Writes],
    user_seeks + user_scans + user_lookups AS [Total Reads],
    user_updates - (user_seeks + user_scans + user_lookups) AS [Difference]
FROM sys.dm_db_index_usage_stats AS s WITH (NOLOCK)
INNER JOIN sys.indexes AS i WITH (NOLOCK)
    ON s.[object_id] = i.[object_id]
    AND i.index_id = s.index_id
WHERE OBJECTPROPERTY(s.[object_id], 'IsUserTable') = 1
    AND s.database_id = DB_ID()
    AND user_updates > (user_seeks + user_scans + user_lookups)
    AND i.index_id > 1
ORDER BY [Difference] DESC, [Total Writes] DESC, [Total Reads] ASC
OPTION (RECOMPILE);
Notes for how to use:
— Look for indexes with high numbers of writes and zero or very low numbers of reads
— Consider your complete workload
— Investigate further before dropping an index
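If, after considering the full workload, an index really is pure write overhead, removing it is a one-liner (hypothetical index and table names below; disabling first is a gentler option, since a disabled index keeps its definition and can be re-enabled with a rebuild):

```sql
-- Option 1: disable (keeps the definition; re-enable later with ALTER INDEX ... REBUILD)
ALTER INDEX IX_YourIndex ON dbo.YourTable DISABLE;

-- Option 2: drop it outright
DROP INDEX IX_YourIndex ON dbo.YourTable;
```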
What it does: Lists missing indexes for the current database, ordered by index advantage
SELECT user_seeks * avg_total_user_cost * (avg_user_impact * 0.01) AS [index_advantage],
    migs.last_user_seek,
    mid.[statement] AS [Database.Schema.Table],
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.unique_compiles,
    migs.user_seeks,
    migs.avg_total_user_cost,
    migs.avg_user_impact
FROM sys.dm_db_missing_index_group_stats AS migs WITH (NOLOCK)
INNER JOIN sys.dm_db_missing_index_groups AS mig WITH (NOLOCK)
    ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS mid WITH (NOLOCK)
    ON mig.index_handle = mid.index_handle
WHERE mid.database_id = DB_ID() -- remove this to see the entire instance
ORDER BY index_advantage DESC
OPTION (RECOMPILE);
Notes on how to use:
— Look at last user seek time, number of user seeks to help determine source and importance
— SQL Server is overly eager to add included columns, so beware
— Do not just blindly add indexes that show up from this query!!!
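When a suggestion does survive scrutiny, the equality_columns, inequality_columns, and included_columns lists map onto a CREATE INDEX statement roughly like this (a sketch with hypothetical table and column names):

```sql
CREATE NONCLUSTERED INDEX IX_YourTable_Col1_Col2
ON dbo.YourTable (Col1, Col2)  -- equality columns first, then inequality columns
INCLUDE (Col3);                -- included columns cover the query without widening the key
```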
What it does: Get fragmentation info for all indexes above a certain size in the current database
— Note: This could take some time on a very large database
SELECT DB_NAME(ps.database_id) AS [Database Name],
    OBJECT_NAME(ps.[object_id]) AS [Object Name],
    i.name AS [Index Name],
    ps.index_id,
    ps.index_type_desc,
    ps.avg_fragmentation_in_percent,
    ps.fragment_count,
    ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
INNER JOIN sys.indexes AS i
    ON ps.[object_id] = i.[object_id]
    AND ps.index_id = i.index_id
--WHERE ps.page_count > 500
ORDER BY ps.avg_fragmentation_in_percent DESC
OPTION (RECOMPILE);
Notes on how to use:
— Helps determine whether you have fragmentation in your relational indexes
— and how effective your index maintenance strategy is
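Once you know which indexes are fragmented, the fix is ALTER INDEX. A minimal sketch, using the commonly cited starting thresholds (reorganize at roughly 5-30% fragmentation, rebuild above 30%; index and table names are hypothetical, and these thresholds should be tuned for your workload):

```sql
-- Light touch: reorganize (always online, resumable-by-nature)
ALTER INDEX IX_YourIndex ON dbo.YourTable REORGANIZE;   -- ~5-30% fragmentation

-- Heavy touch: rebuild (ONLINE = ON requires Enterprise edition)
ALTER INDEX IX_YourIndex ON dbo.YourTable REBUILD WITH (ONLINE = ON);  -- >30% fragmentation
```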
That’s all from me today! I hope this helped. You might want to just bookmark this page if you are working with SQL Server diagnostics a lot. It is a pain to write these queries from scratch all of the time.
If you enjoyed this post please subscribe to my blog at jasonroell.com! Take care!