Michael J. Swart

September 19, 2013

My Queries For Top 20 Resource Hogs

Filed under: SQL Scripts, SQLServerPedia Syndication, Technical Articles — Michael J. Swart @ 12:00 pm

Takeaway: This post is for me: a handy place to keep some queries that I use often. If you find them useful too, that’s wonderful; bookmark this page with https://michaeljswart.com/go/Top20.

These queries return the top 20 resource consumers among cached queries, based on a few different metrics. I’ve posted queries like this before, and others have written many versions of their own. All of them are based on sys.dm_exec_query_stats.

But I’ve tweaked my own queries recently based on a couple of things I’ve learned. So you could say that I gave my old queries a new coat of paint.

Here’s what I’ve added to these queries recently:

  • I added OPTION (RECOMPILE) to each query. It prevents these queries from showing up in their own results on quiet systems.
  • Joe Sack pointed out that query_hash can be used to aggregate queries that differ only by literal values, and I’m doing that here. What’s extra awesome about this technique is that it also aggregates the same query when it’s executed from different procedures or in different databases. This is critical if we want to measure the impact of a single query on a server regardless of where the query is called (there’s a minimal sketch of the technique after this list).
  • I’m ignoring (for now) queries whose query_hash equals 0x0. Again, Joe Sack points out that these are cursor fetches.
  • You may notice that the database name is sometimes null. It’s null when the query is not part of a stored procedure. I suppose if I were keen, I could pick out an arbitrary database name from the accompanying query plan.
  • Update September 27, 2013: Actually, in the comment section of this post, Ian Stirk gave me a really nice way to retrieve the database name from the DMVs. I’ve updated the queries below accordingly.
  • Remember that the stats found in sys.dm_exec_query_stats are only as good as what’s in cache. The cache is not a permanent store; it changes, and it gets reset with every server restart.
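
To make the query_hash trick concrete, here’s a stripped-down sketch of just the aggregation step (my own minimal version, not one of the four queries below; the cached_plans column is there only to show that one hash can map to many cached plans):

select top 20
    query_hash,
    sum(execution_count) as executions,
    sum(total_worker_time) as total_cpu,
    count(distinct plan_handle) as cached_plans
from sys.dm_exec_query_stats
where query_hash <> 0x0 -- 0x0 marks cursor fetches
group by query_hash
order by sum(execution_count) desc
option (recompile) -- keeps this query out of its own results on a quiet system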

(Download all queries)

Or check them out individually below:

Top 20 Executed Queries

These are queries that run often. Frequent queries can be vulnerable to concurrency problems.

;with frequent_queries as
(
    select top 20 
        query_hash, 
        sum(execution_count) executions
    from sys.dm_exec_query_stats 
    where query_hash <> 0x0
    group by query_hash
    order by sum(execution_count) desc
)
select @@servername as server_name,
    coalesce(db_name(st.dbid), db_name(cast(pa.value AS INT)), 'Resource') AS [DatabaseName],
    coalesce(object_name(st.objectid, st.dbid), '<none>') as [object_name],
    qs.query_hash,
    qs.execution_count,
    executions as total_executions_for_query,
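    -- the statement offsets are zero-based byte offsets into the Unicode batch
    -- text; the +2 and /2 convert them to one-based character positions, and an
    -- end offset of -1 means "to the end of the batch"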
    SUBSTRING(st.[text],(qs.statement_start_offset + 2) / 2,
        (CASE 
            WHEN qs.statement_end_offset = -1  THEN LEN(CONVERT(NVARCHAR(MAX), st.text)) * 2
            ELSE qs.statement_end_offset + 2
            END - qs.statement_start_offset) / 2) as sql_text,
    qp.query_plan
from sys.dm_exec_query_stats qs
join frequent_queries fq
    on fq.query_hash = qs.query_hash
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
cross apply sys.dm_exec_query_plan (qs.plan_handle) qp
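-- Ian Stirk's tip (see the comments): a plan's 'dbid' attribute supplies the
-- database context even when st.dbid is null for ad hoc statements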
outer apply sys.dm_exec_plan_attributes(qs.plan_handle) pa
where pa.attribute = 'dbid'
order by fq.executions desc,
    fq.query_hash,
    qs.execution_count desc
option (recompile)

Top 20 I/O Consumers

Specifically logical reads and writes. Still my favorite metric.

;with high_io_queries as
(
    select top 20 
        query_hash, 
        sum(total_logical_reads + total_logical_writes) io
    from sys.dm_exec_query_stats 
    where query_hash <> 0x0
    group by query_hash
    order by sum(total_logical_reads + total_logical_writes) desc
)
select @@servername as server_name,
    coalesce(db_name(st.dbid), db_name(cast(pa.value AS INT)), 'Resource') AS [DatabaseName],
    coalesce(object_name(st.objectid, st.dbid), '<none>') as [object_name],
    qs.query_hash,
    qs.total_logical_reads + total_logical_writes as total_io,
    qs.execution_count,
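    -- the cast to money is purely cosmetic: it displays the average rounded
    -- to four decimal places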
    cast((total_logical_reads + total_logical_writes) / (execution_count + 0.0) as money) as average_io,
    io as total_io_for_query,
    SUBSTRING(st.[text],(qs.statement_start_offset + 2) / 2,
        (CASE 
            WHEN qs.statement_end_offset = -1  THEN LEN(CONVERT(NVARCHAR(MAX),st.[text])) * 2
            ELSE qs.statement_end_offset + 2
            END - qs.statement_start_offset) / 2) as sql_text,
    qp.query_plan
from sys.dm_exec_query_stats qs
join high_io_queries fq
    on fq.query_hash = qs.query_hash
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
cross apply sys.dm_exec_query_plan (qs.plan_handle) qp
outer apply sys.dm_exec_plan_attributes(qs.plan_handle) pa
where pa.attribute = 'dbid'
order by fq.io desc,
    fq.query_hash,
    qs.total_logical_reads + total_logical_writes desc
option (recompile)

Top 20 CPU Consumers

Another popular metric for measuring work done.

;with high_cpu_queries as
(
    select top 20 
        query_hash, 
        sum(total_worker_time) cpuTime
    from sys.dm_exec_query_stats 
    where query_hash <> 0x0
    group by query_hash
    order by sum(total_worker_time) desc
)
select @@servername as server_name,
    coalesce(db_name(st.dbid), db_name(cast(pa.value AS INT)), 'Resource') AS [DatabaseName],
    coalesce(object_name(st.objectid, st.dbid), '<none>') as [object_name],
    qs.query_hash,
    qs.total_worker_time as cpu_time,
    qs.execution_count,
    cast(total_worker_time / (execution_count + 0.0) as money) as average_CPU_in_microseconds,
    cpuTime as total_cpu_for_query,
    SUBSTRING(st.[text],(qs.statement_start_offset + 2) / 2,
        (CASE 
            WHEN qs.statement_end_offset = -1  THEN LEN(CONVERT(NVARCHAR(MAX),st.[text])) * 2
            ELSE qs.statement_end_offset + 2
            END - qs.statement_start_offset) / 2) as sql_text,
    qp.query_plan
from sys.dm_exec_query_stats qs
join high_cpu_queries hcq
    on hcq.query_hash = qs.query_hash
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
cross apply sys.dm_exec_query_plan (qs.plan_handle) qp
outer apply sys.dm_exec_plan_attributes(qs.plan_handle) pa
where pa.attribute = 'dbid'
order by hcq.cpuTime desc,
    hcq.query_hash,
    qs.total_worker_time desc
option (recompile)

Top 20 Queries By Elapsed Time

Queries that show up here, but not in the results of the other queries, usually suffer from things like excessive blocking or ASYNC_NETWORK_IO waits.

;with long_queries as
(
    select top 20 
        query_hash, 
        sum(total_elapsed_time) elapsed_time
    from sys.dm_exec_query_stats 
    where query_hash <> 0x0
    group by query_hash
    order by sum(total_elapsed_time) desc
)
select @@servername as server_name,
    coalesce(db_name(st.dbid), db_name(cast(pa.value AS INT)), 'Resource') AS [DatabaseName],
    coalesce(object_name(st.objectid, st.dbid), '<none>') as [object_name],
    qs.query_hash,
    qs.total_elapsed_time,
    qs.execution_count,
    cast(total_elapsed_time / (execution_count + 0.0) as money) as average_duration_in_microseconds,
    elapsed_time as total_elapsed_time_for_query,
    SUBSTRING(st.[text],(qs.statement_start_offset + 2) / 2,
        (CASE 
            WHEN qs.statement_end_offset = -1  THEN LEN(CONVERT(NVARCHAR(MAX),st.[text])) * 2
            ELSE qs.statement_end_offset + 2
            END - qs.statement_start_offset) / 2) as sql_text,
    qp.query_plan
from sys.dm_exec_query_stats qs
join long_queries lq
    on lq.query_hash = qs.query_hash
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
cross apply sys.dm_exec_query_plan (qs.plan_handle) qp
outer apply sys.dm_exec_plan_attributes(qs.plan_handle) pa
where pa.attribute = 'dbid'
order by lq.elapsed_time desc,
    lq.query_hash,
    qs.total_elapsed_time desc
option (recompile)

By the way, this post seems to be really popular, so I waive any copyright I have on these four queries. Copy them without attribution wherever you like. Profit if you can. Go nuts.

36 Comments

  1. Regarding: “After a while, everything could use a refresh”.
    BTW, I’m aware of the potential irony here. I’ve been using the same wordpress template and blog post style for nearly three years… It still works for now. I may change things up, but have no immediate plans to.

    Comment by Michael J. Swart — September 19, 2013 @ 1:17 pm

  2. This is a good post, Michael. I think many of us do the same thing, that is, make posts to document our admin queries. I love seeing how other people do things, so I can steal any good ideas I come across. I like how you added the ‘option (recompile)’ bit in there to keep the query from reporting on itself. Very sublime. I may have to steal that…

    Comment by Steven Ormrod — September 19, 2013 @ 2:45 pm

  3. By all means Steven. Steal away.
    The fact that it’s useful to others is a really nice bonus. Because it truly was written for my own benefit.
    Cheers,

    Comment by Michael J. Swart — September 19, 2013 @ 2:49 pm

  4. Thank you. Just like a first aid kit. Small, compact, useful.

    Comment by Matjaz Justin — September 24, 2013 @ 3:01 am

  5. RE: “You may notice that the database name is sometimes null…”, you can get the database name via the DMF sys.dm_exec_plan_attributes.

    Example usage:

    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
     
    SELECT TOP 20
            st.text AS [SQL] ,
            cp.cacheobjtype ,
            cp.objtype ,
            COALESCE(DB_NAME(st.dbid), DB_NAME(CAST(pa.value AS INT)) + '*',
                     'Resource') AS [DatabaseName] ,
            cp.usecounts AS [Plan usage] ,
            qp.query_plan
    FROM    sys.dm_exec_cached_plans cp
            CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
            CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
            OUTER APPLY sys.dm_exec_plan_attributes(cp.plan_handle) pa
    WHERE   pa.attribute = 'dbid'
    ORDER BY cp.usecounts DESC ;

    Thanks
    Ian Stirk
    Author of DMVs In Action http://www.amazon.com/SQL-Server-DMVs-Action-Management/dp/1935182730
    Free chapters here: http://www.manning.com/stirk/

    Comment by Ian Stirk — September 26, 2013 @ 10:35 am

  6. […] My Queries For Top 20 Resource Hogs – Michael J. Swart (Blog|Twitter) shares some handy queries. […]

    Pingback by (SFTW) SQL Server Links 27/09/13 • John Sansom — September 27, 2013 @ 5:59 am

  7. Thanks Ian!

    I thought it was interesting to ask what database is returned for the following query:

    use msdb;
    select * from tempdb.sys.tables
    union all
    select * from model.sys.tables

    Turns out it’s msdb and that works very well for me. I’ll be updating my post soon.

    Comment by Michael J. Swart — September 27, 2013 @ 10:28 am

  8. It should be noted that query_hash wasn’t added to sys.dm_exec_query_stats until SQL Server 2008.

    Comment by Mark Freeman — September 30, 2013 @ 1:19 pm

  9. Thank you !!!

    Comment by Rick Willemain — October 1, 2013 @ 11:11 am

  10. […] https://michaeljswart.com/2013/09/my-queries-for-top-20-resource-hogs/ […]

    Pingback by Top 20 Resource Hog Queries | Simon Learning SQL Server — October 22, 2013 @ 9:22 am

  11. Nothing less than awesome. Huge THANK YOU!!!

    Comment by Ayman El-Ghazali — October 23, 2013 @ 10:55 am

  12. For your query on “Top 20 Queries By Elapsed Time”, average_duration_in_ms is actually in microseconds. I kept scratching my head when I calculated it in milliseconds, since my total elapsed time came out to 195 days! Doesn’t detract from the awesomeness of this post in any way 🙂

    Comment by Ayman El-Ghazali — October 23, 2013 @ 12:05 pm

  13. Thanks Ayman.
    I do see that I used ms when microseconds was meant. You can see that I already reported “microseconds” inside the CPU query.
    I guess I could update the query to average_duration_in_µs 🙂
    But I’ll update the post to use the full word…

    Comment by Michael J. Swart — October 23, 2013 @ 1:22 pm

  14. I was focusing on the Elapsed time query today, and when I did the math my jaw dropped. Luckily I did some research and came back to your post and realized I was using the wrong units. The 195-day query turned out to be 3 hours. Still a very, very long time, but not half a year 🙂

    Enjoy your day, thanks for your regular useful and enlightening contributions to the community.

    Comment by Ayman El-Ghazali — October 23, 2013 @ 1:26 pm

  15. […] https://michaeljswart.com/2013/09/my-queries-for-top-20-resource-hogs/ […]

    Pingback by perf tuning by resource usage | My MSSQL snippets — January 13, 2014 @ 11:07 am

  16. For case sensitive systems, you will need to change the case of the query aliases so they match.

    Comment by Barbara Roy — May 27, 2015 @ 10:50 am

  17. Thanks for the feedback Barbara,
    I’ll update that soon.

    Comment by Michael J. Swart — May 27, 2015 @ 11:15 am

  18. I’ve updated the queries to account for case-sensitive systems.

    Comment by Michael J. Swart — May 29, 2015 @ 12:03 pm

  19. They need to be corrected to run on servers which are case sensitive.

    Such as:

    with frequent_queries as
    (
    select top 20
    query_hash,
    sum(execution_count) executions
    from sys.dm_exec_query_stats
    where query_hash <> 0x0
    group by query_hash
    order by sum(execution_count) desc
    )
    select @@servername as server_name,
    db_name(ST.dbid) as database_name,
    object_name(ST.objectid, ST.dbid) as [object_name],
    QS.query_hash,
    QS.execution_count,
    executions as total_executions_for_query,
    SUBSTRING(ST.text,(QS.statement_start_offset + 2) / 2,
    (CASE
    WHEN QS.statement_end_offset = -1 THEN LEN(CONVERT(NVARCHAR(MAX),ST.text)) * 2
    ELSE QS.statement_end_offset
    END - QS.statement_start_offset) / 2) as sql_text,
    qp.query_plan
    from sys.dm_exec_query_stats QS
    join frequent_queries fq
    on fq.query_hash = QS.query_hash
    cross apply sys.dm_exec_sql_text(QS.sql_handle) ST
    cross apply sys.dm_exec_query_plan (QS.plan_handle) qp
    order by fq.executions desc,
    fq.query_hash,
    QS.execution_count desc
    option (recompile)

    Thanks for the queries.

    Comment by Ron Sexton — February 2, 2016 @ 6:20 pm

  20. Hi Michael,

    Still coming in useful, thanks for blogging and even more for updating.
    One point is that sql_text loses the last character. I think that is because statement_end_offset is zero-based, so it needs a +2 just like statement_start_offset, e.g.

    SUBSTRING(st.[text],(qs.statement_start_offset + 2) / 2,
    (CASE
    WHEN qs.statement_end_offset = -1 THEN LEN(CONVERT(NVARCHAR(MAX),st.[text])) * 2
    ELSE qs.statement_end_offset + 2
    END - qs.statement_start_offset) / 2) as sql_text

    Thank you again,
    Paul

    Comment by Paul H — April 19, 2016 @ 10:08 am

  21. Thanks Paul, I wonder how I never noticed that before.
    Fixed!

    Comment by Michael J. Swart — April 20, 2016 @ 9:31 am

  22. The actual time units probably depend on some SQL Server settings, because in my environment all times are in microseconds.

    Comment by Anton — July 12, 2017 @ 7:23 am

  23. Hi Anton, which time units exactly? I specify microseconds too. I think Microsoft switched to reporting duration in microseconds instead of milliseconds somewhere around version 2008 or 2012.

    Comment by Michael J. Swart — July 16, 2017 @ 2:09 pm

  24. When running the top-20 by elapsed time query on MSSQL 10.50.4042.0 I get: Incorrect syntax near “.” at line 28. This points to this line: “cross apply sys.dm_exec_sql_text(qs.sql_handle) st”. Is my MSSQL server too old?

    Comment by Julia — April 10, 2018 @ 4:44 am

  25. I do expect the scripts to run at that version. Could you try something for me?
    Could you try adjusting the syntax from sql_handle to [sql_handle]?

    Comment by Michael J. Swart — April 10, 2018 @ 8:46 am

  26. That didn’t change anything.

    When, however, I modified the offending line to:

    cross apply sys.dm_exec_sql_text([qs].[sql_handle]) st

    The error changed to: Incorrect syntax near “qs”.

    Thanks for helping!

    Comment by Julia — April 10, 2018 @ 8:53 am

  27. Hi Julia, two more ideas: could you email me the query you have in your Management Studio? (I just want to rule out copy-paste issues.) Like maybe my website changed brackets to something that’s not a bracket http://www.fileformat.info/info/unicode/char/2768/index.htm

    Other ideas include using the other queries on this page or the queries as downloaded in the zip file.

    Michael

    Comment by Michael J. Swart — April 10, 2018 @ 9:10 am

  28. I have tried the queries from the .zip file, and they all terminate with the same error. They do work on other servers, though… Something must be wrong with that specific instance of MSSQL server.

    Comment by Julia — April 10, 2018 @ 10:46 am

  29. Michael,

    I can send you a screenshot and whatever other information you should need.

    Comment by Julia — April 10, 2018 @ 12:17 pm

  30. This query works:

    SELECT * FROM sys.dm_exec_sql_text(1)

    And this doesn’t:

    SELECT * FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)

    Comment by Julia — April 11, 2018 @ 6:00 am

  31. That’s really interesting.
    Look at the version (and maybe compatibility level). Maybe cross apply or that dmv is only supported in certain versions. Increase compatibility level if it’s not at the current version.

    If that doesn’t work. Maybe it’s a good question for dba.stackexchange.

    Comment by Michael J. Swart — April 11, 2018 @ 7:44 am

  32. Setting the compatibility level to 100 did it. Mickle thanks!

    Comment by Julia — April 11, 2018 @ 10:29 am

  33. Your queries may show many rows with the same query hash, whereas most of the time it seems much more convenient and informative to have them grouped, with the statistics summed or averaged depending on their semantics. Don’t you think so?

    Comment by Julia — April 11, 2018 @ 11:11 am

  34. Hi Julia,
    I’m glad the compatibility mode helped.

    The rows that have the same query hash are broken down by database.
    It’s just extra information. The summed statistics you asked about are what’s represented in the “total” columns.

    You may also want to google sp_blitzCache. That tool provides similar information, and you may like it better; it’s better maintained and has richer information.

    Comment by Michael J. Swart — April 11, 2018 @ 11:39 am

  35. Thanks for the queries,

    My question is: do the queries collect live data or historical data?

    Comment by Yared — February 20, 2020 @ 10:44 am

  36. They’re based on the queries in cache. So it’s closer to historical.

    Comment by Michael J. Swart — February 20, 2020 @ 12:22 pm
