Michael J. Swart

August 9, 2022

Formatting Binary(10) LSN Values For Use In sys.fn_dblog()

Filed under: Miscellaneous SQL,SQL Scripts,Technical Articles — Michael J. Swart @ 3:06 pm

System procedures like sp_replincrementlsn and system functions like fn_cdc_get_min_lsn and fn_cdc_get_max_lsn return values that are of type binary(10).

These values represent LSNs, Log Sequence Numbers, which are an internal way to represent the ordering of records in the transaction log.

Typically, as developers, we don’t care about these values. But when we do want to dig into the transaction log, we can do so with sys.fn_dblog, which takes two optional parameters. These parameters are LSN values that limit the results of sys.fn_dblog. The weird thing is that sys.fn_dblog is a function whose LSN parameters are NVARCHAR(25).

The function sys.fn_dblog doesn’t expect binary(10) values for its LSN parameters; it wants the LSN values as a formatted string, something like: 0x00000029:00001a3c:0002.

Well, to convert the binary(10) LSN values into the format expected by sys.fn_dblog, I came up with this function:

CREATE OR ALTER FUNCTION dbo.fn_lsn_to_dblog_parameter(
    @lsn BINARY(10)
)
RETURNS NVARCHAR(25)
AS 
BEGIN
  RETURN
    NULLIF(
      STUFF (
        STUFF (
          -- style 2 converts the binary value to a hex string (no 0x prefix)
          '0x' + CONVERT(NVARCHAR(25), @lsn, 2),
          11, 0, ':' ),  -- insert a colon after the first 8 hex digits
        20, 0, ':' ),    -- and another colon after the next 8
      '0x00000000:00000000:0000'  -- an all-zero LSN means "no LSN": return NULL
    )
END
GO

Example

I can increment the LSN once with a no-op and get back the LSN value with sp_replincrementlsn.
I can then use fn_lsn_to_dblog_parameter to get an LSN string to use as parameters to sys.fn_dblog.
This helps me find the exact entry in the transaction log that corresponds to that no-op:

DECLARE @lsn binary(10);
DECLARE @lsn_string nvarchar(25)
exec sp_replincrementlsn @lsn OUTPUT;
SET @lsn_string = dbo.fn_lsn_to_dblog_parameter(@lsn);
 
select @lsn_string as lsn_string, [Current LSN], Operation
from sys.fn_dblog(@lsn_string, @lsn_string);

March 18, 2022

UPSERT Requires a Unique Index

Filed under: Miscellaneous SQL,SQL Scripts,Technical Articles — Michael J. Swart @ 10:11 am

To avoid deadlocks when implementing the upsert pattern, make sure the index on the key column is unique. It’s not sufficient that all the values in that particular column happen to be unique. The index must be defined to be unique; otherwise, concurrent queries can still produce deadlocks.

Say I have a table with an index on Id (which is not unique):

CREATE TABLE dbo.UpsertTest(
	Id INT NOT NULL,
	IdString VARCHAR(100) NOT NULL,
	INDEX IX_UpsertTest CLUSTERED (Id)
)

I implement my test UPSERT procedure the way I’m supposed to, like this:

CREATE OR ALTER PROCEDURE dbo.s_DoSomething  
AS 
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION 
	DECLARE @Id BIGINT = DATEPART(SECOND, GETDATE());
	DECLARE @IdString VARCHAR(100) = CAST(@Id AS VARCHAR(100)); 
 
	IF EXISTS ( 
		SELECT * 
		FROM dbo.UpsertTest WITH (UPDLOCK) 
		WHERE Id = @Id 
	) 
	BEGIN 
		UPDATE dbo.UpsertTest 
		SET IdString = @IdString 
		WHERE Id = @Id; 
	END 
	ELSE 
	BEGIN 
		INSERT dbo.UpsertTest (Id, IdString) 
		VALUES (@Id, @IdString); 
	END; 
COMMIT

When I exercise this procedure concurrently with many threads, it produces deadlocks! I can use extended events and the output from trace flag 1200 to find out what locks are taken and in what order.
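For example, here’s roughly how I’d watch the locks for a single session (trace flags 1200 and 3604 are undocumented, so this is strictly a test-server technique; 3604 redirects the trace output to the client):

DBCC TRACEON (3604); -- send trace output to the client
DBCC TRACEON (1200); -- print lock acquisition details for this session
EXEC dbo.s_DoSomething;
DBCC TRACEOFF (1200);
DBCC TRACEOFF (3604);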

What Locks Are Taken?

It depends on the result of the IF statement. There are two main scenarios to look at. Either the row exists or it doesn’t.

Scenario A: The Row Does Not Exist (Insert)
These are the locks that are taken:

    For the IF EXISTS statement:

    • Acquire RangeS-U lock on resource (ffffffffffff) which represents “infinity”

    For the Update statement:

    • Acquire RangeI-N lock on resource (ffffffffffff)
    • Acquire X lock on resource (66467284bfa8) which represents the newly inserted row

Insert Scenario

Scenario B: The Row Exists (Update)
The locks that are taken are:

    For the IF EXISTS statement:

    • Acquire RangeS-U lock on resource (66467284bfa8)

    For the Update statement:

    • Acquire RangeX-X lock on resource (66467284bfa8)
    • Acquire RangeX-X lock on resource (ffffffffffff)

Update Scenario

Scenario C: The Row Does Not Exist, But Another Process Inserts First (Update)
There’s a bonus scenario that begins just like the Insert scenario, but the process is blocked waiting for resource (ffffffffffff). Once it finally acquires the lock, the next locks that are taken look the same as the other Update scenario. The locks that are taken are:

    For the IF EXISTS statement:

    • Wait for RangeS-U lock on resource (ffffffffffff)
    • Acquire RangeS-U lock on resource (ffffffffffff)
    • Acquire RangeS-U lock on resource (66467284bfa8)

    For the Update statement:

    • Acquire RangeX-X lock on resource (66467284bfa8)
    • Acquire RangeX-X lock on resource (ffffffffffff)

Update After Wait Scenario

The Deadlock

And when I look at the deadlock graph, I can see that the two update scenarios (Scenarios B and C) are fighting:
Scenario B:

  • Acquire RangeX-X lock on resource (66467284bfa8) during UPDATE
  • Blocked RangeX-X lock on resource (ffffffffffff) during UPDATE

Scenario C:

  • Acquire RangeS-U lock on resource (ffffffffffff) during IF EXISTS
  • Blocked RangeS-U lock on resource (66467284bfa8) during IF EXISTS

Why Isn’t This A Problem With Unique Indexes?

To find out, let’s take a look at one last scenario where the index is unique:
Scenario D: The Row Exists (Update on Unique Index)

    For the IF EXISTS statement:

    • Acquire U lock on resource (66467284bfa8)

    For the Update statement:

    • Acquire X lock on resource (66467284bfa8)

Visually, I can compare scenario B with Scenario D:
Update Two Scenarios

When the index is not unique, SQL Server has to take key-range locks on either side of the row to prevent phantom inserts, but it’s not necessary when the values are guaranteed to be unique! And that makes all the difference. When the index is unique, no lock is required on resource (ffffffffffff). There is no longer any potential for a deadlock.

Solution: Define Indexes As Unique When Possible

Even if the values in a column are unique in practice, you’ll help improve concurrency by defining the index as unique. This tip can be generalized to other deadlocks. Next time you’re troubleshooting a deadlock involving range locks, check to see whether the participating indexes are unique.
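Here’s the example table again with the fix applied (the same clustered index, now declared unique):

CREATE TABLE dbo.UpsertTest(
	Id INT NOT NULL,
	IdString VARCHAR(100) NOT NULL
);
 
CREATE UNIQUE CLUSTERED INDEX IX_UpsertTest ON dbo.UpsertTest (Id);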

This quirk of requiring unique indexes for the UPSERT pattern is not unique to SQL Server. I notice that PostgreSQL also requires a unique index when using its “ON CONFLICT … DO UPDATE” syntax. This is something they chose to do very deliberately.

Other Things I Tried

This post actually comes from a real problem I was presented with. It took a while to reproduce, and I tried a few things before I settled on making my index unique.

Lock More During IF EXISTS?
Notice that there is only one range lock taken during the IF EXISTS statement, but there are two range locks needed for the UPDATE statement. Why is only one needed for the IF EXISTS statement? If extra rows get inserted above the row that was read, that doesn’t change the answer to EXISTS. So it’s technically not a phantom read, and SQL Server doesn’t take that lock.

So what if I changed my IF EXISTS to this:

IF ( 
	SELECT COUNT(*)
	FROM dbo.UpsertTest WITH (UPDLOCK) 
	WHERE Id = @Id 
) > 0

That IF statement now takes two range locks, which is good, but it still gets tripped up with Scenario C and continues to deadlock.

Update Less?
Change the update statement to update only one row using TOP (1):

UPDATE TOP (1) dbo.UpsertTest 
SET IdString = @IdString
WHERE Id = @Id;

During the update statement, this only requires one RangeX-X lock instead of two. And that technique actually works! I was unable to reproduce deadlocks with TOP (1). So it is indeed a candidate solution, but making the index unique is still my preferred method.

January 19, 2022

Measure the Effect of “Cost Threshold for Parallelism”

Filed under: Miscellaneous SQL,SQL Scripts,Technical Articles — Michael J. Swart @ 10:40 am

The configuration setting cost threshold for parallelism has a default value of 5. As a default value, it’s probably too low and should be raised. But what benefit are we hoping for? And how can we measure it?

The database that I work with is a busy OLTP system with lots of very frequent, very inexpensive queries, and so I don’t like to see any query that needs to go parallel.

What I’d like to do is raise the cost threshold to something larger and look at the queries that have gone from multi-threaded to single-threaded. I want to see that these queries become cheaper on average. By cheaper, I mean they consume less CPU. I expect the average duration of these queries to increase.
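Changing the setting itself is simple; the value 50 below is only a placeholder, so pick a threshold appropriate for your own workload:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;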

How do I find these queries? I can look in the cache. The view sys.dm_exec_query_stats can tell me if a query plan is parallel, and I can look into the plans for the estimated cost. In my case, I have relatively few parallel queries, only about 300, which means the XML-parsing piece of this query runs pretty quickly.

Measure the Cost of Parallel Queries

WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT 
	sql_text.[text] as sqltext,
	qp.query_plan,
	xml_values.subtree_cost as estimated_query_cost_in_query_bucks,
	qs.last_dop,
	CAST( qs.total_worker_time / (qs.execution_count + 0.0) as money ) as average_query_cpu_in_microseconds,
	qs.total_worker_time,
	qs.execution_count,
	qs.query_hash,
	qs.query_plan_hash,
	qs.plan_handle,
	qs.sql_handle	
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
CROSS APPLY sys.dm_exec_query_plan (qs.plan_handle) qp
CROSS APPLY 
	(
		SELECT SUBSTRING(st.[text],(qs.statement_start_offset + 2) / 2,
		(CASE 
			WHEN qs.statement_end_offset = -1  THEN LEN(CONVERT(NVARCHAR(MAX),st.[text])) * 2
			ELSE qs.statement_end_offset + 2
			END - qs.statement_start_offset) / 2)
	) as sql_text([text])
OUTER APPLY 
	( 
		SELECT 
			n.c.value('@QueryHash', 'NVARCHAR(30)')  as query_hash,
			n.c.value('@StatementSubTreeCost', 'FLOAT')  as subtree_cost
		FROM qp.query_plan.nodes('//StmtSimple') as n(c)
	) xml_values
WHERE qs.last_dop > 1
AND sys.fn_varbintohexstr(qs.query_hash) = xml_values.query_hash
AND execution_count > 10
ORDER BY xml_values.subtree_cost
OPTION (RECOMPILE);

What Next?

Keep track of the queries you see whose estimated subtree cost is below the new threshold you’re considering. Especially keep track of the query_hash and the average_query_cpu_in_microseconds.
Then make the change and compare the average_query_cpu_in_microseconds before and after. Remember to use the query_hash as the key because the query_plan_hash will have changed.
Here’s the query modified to return the “after” results:

Measure the Cost of Those Queries After Config Change

WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT 
	sql_text.[text] as sqltext,
	qp.query_plan,
	xml_values.subtree_cost as estimated_query_cost_in_query_bucks,
	qs.last_dop,
	CAST( qs.total_worker_time / (qs.execution_count + 0.0) as money ) as average_query_cpu_in_microseconds,
	qs.total_worker_time,
	qs.execution_count,
	qs.query_hash,
	qs.query_plan_hash,
	qs.plan_handle,
	qs.sql_handle	
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
CROSS APPLY sys.dm_exec_query_plan (qs.plan_handle) qp
CROSS APPLY 
	(
		SELECT SUBSTRING(st.[text],(qs.statement_start_offset + 2) / 2,
		(CASE 
			WHEN qs.statement_end_offset = -1  THEN LEN(CONVERT(NVARCHAR(MAX),st.[text])) * 2
			ELSE qs.statement_end_offset + 2
			END - qs.statement_start_offset) / 2)
	) as sql_text([text])
OUTER APPLY 
	( 
		SELECT 
			n.c.value('@QueryHash', 'NVARCHAR(30)')  as query_hash,
			n.c.value('@StatementSubTreeCost', 'FLOAT')  as subtree_cost
		FROM qp.query_plan.nodes('//StmtSimple') as n(c)
	) xml_values
WHERE qs.query_hash in ( /* put the list of query_hash values you saw before the config change here */ )
AND sys.fn_varbintohexstr(qs.query_hash) = xml_values.query_hash
ORDER BY xml_values.subtree_cost
OPTION (RECOMPILE);
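To compare the measurements, join the two result sets on query_hash. Here’s a minimal sketch that assumes you saved each result set into a work table (the table names are hypothetical):

SELECT b.query_hash,
       b.average_query_cpu_in_microseconds AS cpu_before,
       a.average_query_cpu_in_microseconds AS cpu_after
FROM dbo.ParallelCostBefore b
JOIN dbo.ParallelCostAfter a
     ON a.query_hash = b.query_hash
ORDER BY b.average_query_cpu_in_microseconds DESC;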


August 9, 2021

Find Procedures That Use SELECT *

Filed under: Miscellaneous SQL,SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 12:00 pm

I have trouble with procedures that use SELECT *. They are often not “Blue-Green safe”. In other words, if a procedure has a query that uses SELECT *, then I can’t change the underlying tables without causing some tricky deployment issues. (The same is not true for ad hoc queries from the application.)

I also have a lot of procedures to look at (about 5000) and I’d like to find the procedures that use SELECT *.
I want to maybe ignore SELECT * when selecting from a subquery with a well-defined column list.
I also want to maybe include related queries like OUTPUT inserted.*.

The Plan

  1. I’m going to make a schema-only copy of the database to work with.
  2. I’m going to add a new dummy column to every single table.
  3. I’m going to use sys.dm_exec_describe_first_result_set_for_object to look for any of the new columns I created.

Any of my new columns that show up were selected with SELECT *.

The Script

use master;
DROP DATABASE IF EXISTS search_for_select_star;
DBCC CLONEDATABASE (the_name_of_the_database_you_want_to_analyze, search_for_select_star);
ALTER DATABASE search_for_select_star SET READ_WRITE;
GO
 
use search_for_select_star;
 
DECLARE @SQL NVARCHAR(MAX);
SELECT 
	@SQL = STRING_AGG(
		CAST(
			'ALTER TABLE ' + 
			QUOTENAME(OBJECT_SCHEMA_NAME(object_id)) + 
			'.' + 
			QUOTENAME(OBJECT_NAME(object_id)) + 
			' ADD NewDummyColumn BIT NULL' AS NVARCHAR(MAX)),
		N';')
FROM 
	sys.tables;
 
exec sp_executesql @SQL;
 
SELECT 
	SCHEMA_NAME(p.schema_id) + '.' + p.name AS procedure_name, 
	r.column_ordinal,
	r.name
FROM 
	sys.procedures p
CROSS APPLY 
	sys.dm_exec_describe_first_result_set_for_object(p.object_id, NULL) r
WHERE 
	r.name = 'NewDummyColumn'
ORDER BY 
	p.schema_id, p.name;
 
use master;
DROP DATABASE IF EXISTS search_for_select_star;

Update

Tom from StraightforwardSQL pointed out a nifty feature that Microsoft has already implemented: sys.dm_sql_referenced_entities reports an is_select_all column. You can use it like this:

select distinct SCHEMA_NAME(p.schema_id) + '.' + p.name AS procedure_name
from sys.procedures p
cross apply sys.dm_sql_referenced_entities(
	object_schema_name(object_id) + '.' + object_name(object_id), default) re
where re.is_select_all = 1

Comparing the two, I noticed that my query – the one that uses dm_exec_describe_first_result_set_for_object – has some drawbacks. Maybe the SELECT * isn’t actually included in the first result set, but in some subsequent result set. Or maybe the result set couldn’t be described for one of various other reasons.

On the other hand, I noticed that dm_sql_referenced_entities has a couple of drawbacks itself. It doesn’t seem to capture select statements that use OUTPUT INSERTED.*, for example.

In practice, though, I found the query that Tom suggested works a bit better. In the product I work most closely with, dm_sql_referenced_entities only missed 3 procedures that dm_exec_describe_first_result_set_for_object caught. But dm_exec_describe_first_result_set_for_object missed 49 procedures that dm_sql_referenced_entities caught!

January 26, 2021

Avoid This Pitfall When Using sp_getapplock

Filed under: Miscellaneous SQL,SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 11:10 am

Takeaway: For most use cases, using sp_releaseapplock is unnecessary. Especially when using @LockOwner = 'Transaction' (which is the default).

The procedure sp_getapplock is a system stored procedure that can be helpful when developing SQL for concurrency. It takes a lock on an imaginary resource and it can be used to avoid race conditions.

But I don’t use sp_getapplock a lot. I almost always depend on SQL Server’s normal locking of resources (like tables, indexes, rows etc…). But I might consider it for complicated situations (like managing sort order in a hierarchy using a table with many different indexes).

In that case, I might use it something like this:

BEGIN TRAN
 
exec sp_getapplock 
    @Resource = @LockResourceName, 
    @LockMode = 'Exclusive', 
    @LockOwner = 'Transaction';
 
/* read stuff (e.g. "is time slot available?") */
/* change stuff (e.g. "make reservation") */
 
exec sp_releaseapplock
    @Resource = @LockResourceName, 
    @LockOwner = 'Transaction';
 
COMMIT

But there’s a problem with this pattern, especially when using RCSI. After sp_releaseapplock is called, but before the COMMIT completes, another process running the same code can read the previous state. In the example above, both processes will think a time slot is available and will try to make the same reservation.

What I really want is to release the applock after the commit. And because I specified the lock owner as 'Transaction', that gets done automatically when the transaction ends! So really what I want is this:

BEGIN TRAN
 
exec sp_getapplock 
    @Resource = @LockResourceName, 
    @LockMode = 'Exclusive', 
    @LockOwner = 'Transaction';
 
/* read stuff (e.g. "is time slot available?") */
/* change stuff (e.g. "make reservation") */
 
COMMIT -- all locks are freed after this commit

January 8, 2021

Collect Wait Stats Regularly in Order To Report On Them Over Time

Filed under: Miscellaneous SQL,SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 12:48 pm

I find wait stats so useful that I’ve got Paul Randal’s SQL Server Wait Statistics (or please tell me where it hurts…) bookmarked and I visit it frequently.

But that gives the total amount of waits for each wait type accumulated since the server was started. And that isn’t ideal when I’m troubleshooting an issue that started recently. No worries, Paul also has another fantastic post, Capturing wait statistics for a period of time.

Taking that idea further, I can collect data all the time and look at it historically, or just for a baseline. Lots of monitoring tools do this already, but here’s what I’ve written:
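The idea is to snapshot sys.dm_os_wait_stats into a history table on a schedule and report on the differences between snapshots. Here’s a minimal sketch of the collection piece (the table definition is illustrative, not the exact script from the repo):

CREATE TABLE dbo.WaitStatsHistory (
    CollectionTime DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    WaitType NVARCHAR(60) NOT NULL,
    WaitingTasksCount BIGINT NOT NULL,
    WaitTimeMs BIGINT NOT NULL,
    SignalWaitTimeMs BIGINT NOT NULL,
    INDEX IX_WaitStatsHistory CLUSTERED (CollectionTime, WaitType)
);
 
-- Run this on a schedule (a SQL Agent job works fine).
-- The values are cumulative since server startup, so report on the
-- deltas between consecutive snapshots.
INSERT dbo.WaitStatsHistory (WaitType, WaitingTasksCount, WaitTimeMs, SignalWaitTimeMs)
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0;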

Mostly I’m creating these scripts for me. I’ve written a version of these a few times now, and for some reason, I can’t find them each time I need them again!

This stuff can be super useful, especially if you combine it with a visualization tool (like Power BI or even Excel).
For example, here’s a chart I made when we were experiencing the XVB_LIST spinlock issues I wrote about not too long ago. Good visualizations can really tell powerful stories.

A visualization of a spinlock graph

I’m talking here about spins and not waits, of course, but the idea is the same, and I’ve included the spinlock monitoring scripts in the same repo on GitHub.

Also a quick reminder: wait stats aren’t everything. Don’t neglect monitoring resources, as Greg Gonzalez pointed out last year.

October 28, 2020

Detect Excessive Spinlock Contention on SQL Server

Scaling SQL Server High
The beginning of the school year is behind us and what a semester start! 2020 has been tough on many of us, and I’m fortunate to work for a company whose services are in such high demand. In fact, we’ve seen some scaling challenges like we’ve never seen before. I want to talk about some of them.

Context

As we prepared to face unprecedented demand this year, we began to think about whether bigger is better. Worried about CPU limits, we looked to what AWS had to offer in terms of their instance sizes.

We were already running our largest SQL Servers on r5 instances with 96 logical CPUs. But we decided to evaluate the pricey u instances, which have 448 logical CPUs and a huge amount of memory.

Painful Symptoms

Well, bigger is not always better. We discovered that as we increased the load on the u-series servers, there would come a point where all processors would jump to 100% and stay there. You could say it plateaued (based on the graph, would that be a plateau? A mesa? Or a butte?).

Graph showing CPU plateaued at 100%

When that occurred, the number of batch requests that the server could handle dropped significantly. So we saw more CPU use, but less work was getting done.

The high demand kept the CPU at 100% with no relief until the demand decreased. When that happened, the database seemed to recover. Throughput was restored and the database’s metrics became healthy again. During this trouble, we looked at everything, including the number of spins reported in the sys.dm_os_spinlock_stats DMV.

The spins and backoffs reported seemed extremely high, especially for the category “XVB_LIST”, but we didn’t really have a baseline to tell whether those numbers were problematic. Even after capturing the numbers and visualizing them, we saw larger-than-linear increases as demand increased, but were those increases excessive?

How To Tell For Sure

Chris Adkin has a post Diagnosing Spinlock Problems By Doing The Math. He explains why spinlocks are useful. It doesn’t seem like a while loop that chews up CPU could improve performance, but it actually does when it helps avoid context switches. He gives a formula to help find how much of the total CPU is spent spinning. That percentage can then help decide whether the spinning is excessive.

But I made a tiny tweak to his formula and I wrote a script to have SQL Server do the math:

  • You still have to give the number of CPUs on your server. If you don’t have those numbers handy, you can get them from SQL Server’s log. I include one of Glenn Berry’s diagnostic queries for that.
  • There’s an assumption in Chris’s calculation that one spin consumes one CPU clock cycle. A spin is really cheap (because it can use the test-and-set instruction), but it probably consumes more than one clock cycle. I assume four, but I have no idea what the actual value is.
EXEC sys.xp_readerrorlog 0, 1, N'detected', N'socket';
-- SQL Server detected 2 sockets with 24 cores per socket ...
 
declare @Sockets int = 2;
declare @PhysicalCoresPerSocket int = 24;
declare @TicksPerSpin int = 4;
 
declare @SpinlockSnapshot TABLE ( 
    SpinLockName VARCHAR(100), 
    SpinTotal BIGINT
);
 
INSERT @SpinlockSnapshot ( SpinLockName, SpinTotal )
SELECT name, spins
FROM   sys.dm_os_spinlock_stats
WHERE  spins > 0;
 
DECLARE @Ticks bigint
SELECT @Ticks = cpu_ticks 
FROM sys.dm_os_sys_info
 
WAITFOR DELAY '00:00:10'
 
DECLARE @TotalTicksInInterval BIGINT
DECLARE @CPU_GHz NUMERIC(20, 2);
 
SELECT @TotalTicksInInterval = (cpu_ticks - @Ticks) * @Sockets * @PhysicalCoresPerSocket,
       @CPU_GHz = ( cpu_ticks - @Ticks ) / 10000000000.0
FROM sys.dm_os_sys_info;
 
SELECT ISNULL(Snap.SpinLockName, 'Total') as [Spinlock Name], 
       SUM(Stat.spins - Snap.SpinTotal) as [Spins In Interval],
       @TotalTicksInInterval as [Ticks In Interval],
       @CPU_GHz as [Measured CPU GHz],
       100.0 * SUM(Stat.spins - Snap.SpinTotal) * @TicksPerSpin / @TotalTicksInInterval as [%]
FROM @SpinlockSnapshot Snap
JOIN sys.dm_os_spinlock_stats Stat
     ON Snap.SpinLockName = Stat.name
GROUP BY ROLLUP (Snap.SpinLockName)
HAVING SUM(Stat.spins - Snap.SpinTotal) > 0
ORDER BY [Spins In Interval] DESC;

This is what I see on a very healthy server (r5.24xlarge). The server was using 14% CPU, and 0.03% of that was spent spinning (or somewhere in that ballpark).

A screen shot showing an example of results

More Troubleshooting Steps

So what’s going on? What is that XVB_LIST category? Microsoft says “internal use only”, but I can guess. Paul Randal talks about the related latch class Versioning Transaction List. It’s an instance-wide list that is used in the implementation of features like Read Committed Snapshot Isolation (RCSI), which we do use.

Microsoft also has a whitepaper on troubleshooting this stuff: Diagnose and Resolve Spinlock Contention on SQL Server. They give a technique to collect call stacks during spinlock contention in order to glean some information about what else is going on. We did that, but we didn’t learn too much. We learned that we use RCSI with lots of concurrent queries, something we really can’t give up.
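For reference, the whitepaper’s collection technique looks roughly like this (a sketch based on their example, which uses the older bucketizer target; the session name and placeholder type value are mine, and resolving the captured stacks requires trace flag 3656 and public symbols, as the whitepaper describes):

-- Find the numeric key for the spinlock type of interest:
SELECT name, map_key, map_value
FROM sys.dm_xe_map_values
WHERE map_value = 'XVB_LIST';
 
-- Collect call stacks on backoffs for that spinlock type
-- (replace 999 with the map_key found above):
CREATE EVENT SESSION spinlock_backoff_stacks ON SERVER
ADD EVENT sqlos.spinlock_backoff (
    ACTION (package0.callstack)
    WHERE type = 999 )
ADD TARGET package0.histogram (
    SET filtering_event_name = 'sqlos.spinlock_backoff',
        source = 'package0.callstack',
        source_type = 1 )
WITH (MAX_MEMORY = 50 MB, MAX_DISPATCH_LATENCY = 5 SECONDS);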

So Then What?

What We Did

Well, we moved away from the u instance with its hundreds of CPUs and we went back to our r5 instance with only (only!) 96 logical CPUs. We’re dealing with the limits imposed by that hardware and accepting that we can’t scale higher using that box. We’re continuing to do our darnedest to move data and activity out of SQL Server and into other solutions like DynamoDb. We’re also trying to partition our databases into different deployments, which spreads the load out but introduces a lot of other challenges.

Basically, we gave up trying to scale higher. If we did want to pursue this further (which we don’t), we’d probably contact Microsoft support to try and address this spinlock contention. We know that these conditions are sufficient (if not necessary) to see the contention we saw:

  • SQL Server 2016 SP2
  • U-series instance from Amazon
  • Highly concurrent and frequent queries (>200K batch requests per second with a good mix of writes and reads on the same tables)
  • RCSI enabled.

Thank you Erin Stellato

We reached out to Erin Stellato to help us through this issue. We did this sometime around the “Painful Symptoms” section above. We had a stressful time troubleshooting all this stuff and I really appreciate Erin guiding us through it. We learned so much.

October 19, 2020

How to Alter User Defined Table Types (Mostly) Online

Filed under: Miscellaneous SQL,SQL Scripts,Technical Articles — Michael J. Swart @ 2:41 pm

Last year, Aaron Bertrand tackled the question, How To Alter User Defined Table Types. Aaron points out that “There is no ALTER TYPE, and you can’t drop and re-create a type that is in use”. Aaron’s suggestion was to create a new type and then update all procedures to use the new type.

I think I’ve got a bit of an improvement based on sp_rename and sp_refreshsqlmodule, something that works well with

  • blue-green deployments,
  • both ad-hoc queries and procedures,
  • imperfectly understood schemas, like schemas that may have suffered from a little bit of schema drift.

Example

Say I have… I don’t know, let’s pick an example out of thin air. Say I have a simple table type containing one BIGINT column that I want to make memory optimized:

What I’ve Got

CREATE TYPE dbo.BigIntSet 
  AS TABLE ( 
    Value BIGINT NOT NULL INDEX IX_BigIntSet );

What I Want

CREATE TYPE dbo.BigIntSet 
  AS TABLE ( 
    Value BIGINT NOT NULL INDEX IX_BigIntSet )
  WITH (MEMORY_OPTIMIZED=ON);

I can’t directly ALTER this table type, but I can do this three-card monte trick using sp_rename to put a new BigIntSet in its place.

The Migration Script

IF NOT EXISTS (
  SELECT * 
  FROM sys.table_types
  WHERE name = 'BigIntSet' 
  AND is_memory_optimized = 1
)
BEGIN
 
  CREATE TYPE dbo.BigIntSet_MO 
    AS TABLE ( 
      Value bigint NOT NULL 
      INDEX IX_BigIntSet ) 
    WITH (MEMORY_OPTIMIZED=ON);
 
  -- the switcheroo!
  EXEC sp_rename 'dbo.BigIntSet', 'zz_BigIntSet';
  EXEC sp_rename 'dbo.BigIntSet_MO', 'BigIntSet';
 
  --refresh modules
  DECLARE @Refreshmodulescripts TABLE (script nvarchar(max));
 
  INSERT @Refreshmodulescripts (script)
  SELECT 'EXEC sp_refreshsqlmodule ''' + QUOTENAME(referencing_schema_name) + '.' + QUOTENAME(referencing_entity_name) + ''';'
  FROM sys.dm_sql_referencing_entities('dbo.BigIntSet', 'TYPE');
 
  DECLARE @SQL NVARCHAR(MAX) = N'';
  SELECT @SQL = @SQL + script FROM @Refreshmodulescripts;
 
  EXEC sp_executesql @SQL;
 
END

But Is This Online?

Mostly. All queries that are in progress (whether ad hoc or via procedures) continue to execute with no issues. However, there may be an issue with other queries that begin their execution during this migration.

If someone sends a query that uses the table type in the split second between the two sp_rename statements, then the query may fail.

If someone executes a procedure in the time between the first sp_rename and when sp_executesql gets around to refreshing that procedure, then the procedure may fail.

In practice, even on a busy server, I saw no such errors the few times I’ve tried this method, but of course, that’s no guarantee. In my case, even when refreshing close to 300 modules, this script takes about one second with no issues.

I actually tried adding a transaction around this whole migration script, and I did in fact see issues on a busy server. The schema modification lock that needs to be taken and held on all 300 objects was too much. It caused excessive blocking and I had to abandon that approach. In practice, I avoided trouble by ditching the explicit transaction.

January 28, 2020

What Tables Are Being Written To The Most?

Filed under: Miscellaneous SQL,SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 10:38 am

You have excessive WRITELOG waits (or HADR_SYNC_COMMIT waits) and, among other things, you want to understand where all that writing is happening.

Microsoft’s advice Diagnosing Transaction Log Performance Issues and Limits of the Log Manager remains a great resource. They tell you to use perfmon to look at the log bytes flushed/sec counter (in the SQL Server:Databases object) to see which database is being written to so much.

After identifying a database you’re curious about, you may want to drill down further. I wrote about this problem earlier in Tackle WRITELOG Waits Using the Transaction Log and Extended Events. The query I wrote for that post combines results of an extended events session with the transaction log in order to identify which procedures are doing the most writing.

But it’s a tricky kind of script. It takes a while to run on busy systems. There’s a faster way to drill into writes if you switch your focus from which queries are writing so much to which tables are being written to so much. Both methods of drilling down can be helpful, but the table approach is faster, doesn’t require an extended events session, and might be enough to point you in the right direction.

Use This Query

use [specify your databasename here]
 
-- get the latest lsn for current DB
declare @xact_seqno binary(10);
declare @xact_seqno_string_begin varchar(50);
exec sp_replincrementlsn @xact_seqno OUTPUT;
set @xact_seqno_string_begin = '0x' + CONVERT(varchar(50), @xact_seqno, 2);
set @xact_seqno_string_begin = stuff(@xact_seqno_string_begin, 11, 0, ':')
set @xact_seqno_string_begin = stuff(@xact_seqno_string_begin, 20, 0, ':');
 
-- wait a few seconds
waitfor delay '00:00:10'
 
-- get the latest lsn for current DB
declare @xact_seqno_string_end varchar(50);
exec sp_replincrementlsn @xact_seqno OUTPUT;
set @xact_seqno_string_end = '0x' + CONVERT(varchar(50), @xact_seqno, 2);
set @xact_seqno_string_end = stuff(@xact_seqno_string_end, 11, 0, ':')
set @xact_seqno_string_end = stuff(@xact_seqno_string_end, 20, 0, ':');
 
WITH [Log] AS
(
  SELECT Category, 
         SUM([Log Record Length]) as [Log Bytes]
  FROM   fn_dblog(@xact_seqno_string_begin, @xact_seqno_string_end)
  CROSS  APPLY (SELECT ISNULL(AllocUnitName, Operation)) AS C(Category)
  GROUP  BY Category
)
SELECT   Category, 
         [Log Bytes],
         100.0 * [Log Bytes] / SUM([Log Bytes]) OVER () AS [%]
FROM     [Log]
ORDER BY [Log Bytes] DESC;

Results look something like this (your mileage may vary).
A screenshot of the results

Notes

  • Notice that some space in the transaction log is not actually about writing to tables. I’ve grouped those log records into their own categories and kept them in the results. For example, LOP_BEGIN_XACT records information about the beginning of transactions.
  • I’m using sp_replincrementlsn to find the current last LSN. I could have used log_min_lsn from sys.dm_db_log_stats, but that DMV is only available in 2016 SP2 and later.
  • This method is a more direct measurement of transaction log activity than a similar query that uses sys.dm_db_index_operational_stats.

January 20, 2020

Watching SQL Server Stuff From Performance Monitor

Taking a small break from my blogging sabbatical to post one script that I’ve found myself writing from scratch too often.
My hope is that the next time I need this, I’ll look it up here.

The User Settable Counter

Use this to monitor something that’s not already exposed as a performance counter, like the progress of a custom task or whatever. If you can write a quick query, you can expose it to a counter that can be plotted by Performance Monitor.

Here’s the script (adjust SomeMeasurement and SomeTable to whatever makes sense, and adjust the delay interval if 1 second is too short):

declare @deltaMeasurement int = 0;
declare @totalMeasurement int = 0;
 
while (1=1)
begin
 
  select @deltaMeasurement = SomeMeasurement - @totalMeasurement
  from SomeTable;
 
  set @totalMeasurement += @deltaMeasurement;
 
  exec sp_user_counter1 @deltaMeasurement;
  waitfor delay '00:00:01'
end

Monitoring

Now you can monitor “User Counter 1” in the object “SQLServer:User Settable” which will look like this:
Example of monitoring a performance counter using Performance Monitor

Don’t forget to stop the running query when you’re done.
