I often get asked to look at queries and tell whether they perform well (or well enough). That’s a tough question and – as always – the answer is “it depends”.
The common metrics that can be read from any SQL trace are CPU, reads, writes, and duration – with duration being the bottom-line number.
Now I’ll introduce the metric I like to use. It’s something I gleaned from presenters at Dev Connections.
I often look first at the I/O – the reads and writes. In the system I work on, I/O is usually the bottleneck. 10,000 reads would normally be poor unless the query returns 1,000,000 rows; 200 reads would normally be acceptable unless the query runs often and typically returns a single row.
So the metric I have started looking at is this:
reads / row
For a query, just take the reads and divide by the rowcount. In Profiler, rowcount is a trace column, but it isn’t included in the standard trace template, so you have to add it yourself.
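If you’ve saved a trace to a file (with the RowCounts column added to the trace definition), you can compute the ratio directly with SQL Server’s `sys.fn_trace_gettable` function. A rough sketch – the file path is hypothetical, and the `NULLIF` guards against queries that returned zero rows:

```sql
-- Sketch: compute reads/row from a saved trace file.
-- 'C:\traces\mytrace.trc' is a hypothetical path; RowCounts must
-- have been added to the trace definition for this to work.
SELECT
    TextData,
    Reads,
    RowCounts,
    CAST(Reads AS float) / NULLIF(RowCounts, 0) AS ReadsPerRow
FROM sys.fn_trace_gettable('C:\traces\mytrace.trc', DEFAULT)
WHERE EventClass IN (10, 12)  -- RPC:Completed, SQL:BatchCompleted
ORDER BY ReadsPerRow DESC;
```

Sorting descending puts the worst offenders at the top, which is usually where I want to start.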
I’m still trying to get a sense of what an acceptable range is for this metric, but it’s been useful so far. Terrible queries look terrible at 1,000+ reads per row, and good queries look good at around 0.001 reads per row.
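You can also get at a similar ratio without running a trace at all, by looking at the plan cache. This is a sketch against `sys.dm_exec_query_stats`; note that the `total_rows` column only exists on newer versions of SQL Server (2008 R2 SP1 and later, as far as I know), so treat this as an assumption to verify on your version:

```sql
-- Sketch: approximate reads/row for cached query plans.
-- Assumes total_rows is available (SQL Server 2008 R2 SP1+).
SELECT TOP 20
    SUBSTRING(st.text, 1, 200) AS query_text,
    qs.total_logical_reads,
    qs.total_rows,
    CAST(qs.total_logical_reads AS float)
        / NULLIF(qs.total_rows, 0) AS reads_per_row
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY reads_per_row DESC;
```

It’s aggregated over all executions of each plan rather than per call, but it surfaces the same kind of outliers.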
We’ll see how useful it gets.