Michael J. Swart

March 18, 2022

UPSERT Requires a Unique Index

Filed under: Miscelleaneous SQL,SQL Scripts,Technical Articles — Michael J. Swart @ 10:11 am

To avoid deadlocks when implementing the upsert pattern, make sure the index on the key column is unique. It’s not sufficient that all the values in that particular column happen to be unique. The index must be defined to be unique, otherwise concurrent queries can still produce deadlocks.

Say I have a table with an index on Id (which is not unique):

CREATE TABLE dbo.UpsertTest(
	Id INT NOT NULL,
	IdString VARCHAR(100) NOT NULL,
	INDEX IX_UpsertTest CLUSTERED (Id)
)

I implement my test UPSERT procedure the way I’m supposed to, like this:

CREATE OR ALTER PROCEDURE dbo.s_DoSomething  
AS 
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION 
	DECLARE @Id INT = DATEPART(SECOND, GETDATE());
	DECLARE @IdString VARCHAR(100) = CAST(@Id AS VARCHAR(100)); 
 
	IF EXISTS ( 
		SELECT * 
		FROM dbo.UpsertTest WITH (UPDLOCK) 
		WHERE Id = @Id 
	) 
	BEGIN 
		UPDATE dbo.UpsertTest 
		SET IdString = @IdString 
		WHERE Id = @Id; 
	END 
	ELSE 
	BEGIN 
		INSERT dbo.UpsertTest (Id, IdString) 
		VALUES (@Id, @IdString); 
	END; 
COMMIT

When I exercise this procedure concurrently with many threads it produces deadlocks! I can use extended events and the output from trace flag 1200 to find out which locks are taken and in what order.
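To see that output for a single session, I turn the trace flags on just before running the procedure. A minimal sketch (both flags are undocumented diagnostics, so only use them on a test server):

```sql
DBCC TRACEON (3604);  -- redirect trace output to the client
DBCC TRACEON (1200);  -- print lock acquisition information

EXEC dbo.s_DoSomething;

DBCC TRACEOFF (1200);
DBCC TRACEOFF (3604);
```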

What Locks Are Taken?

It depends on the result of the IF statement. There are two main scenarios to look at. Either the row exists or it doesn’t.

Scenario A: The Row Does Not Exist (Insert)
These are the locks that are taken:

    For the IF EXISTS statement:

    • Acquire RangeS-U lock on resource (ffffffffffff) which represents “infinity”

    For the Update statement:

    • Acquire RangeI-N lock on resource (ffffffffffff)
    • Acquire X lock on resource (66467284bfa8) which represents the newly inserted row

Insert Scenario

Scenario B: The Row Exists (Update)
The locks that are taken are:

    For the IF EXISTS statement:

    • Acquire RangeS-U lock on resource (66467284bfa8)

    For the Update statement:

    • Acquire RangeX-X lock on resource (66467284bfa8)
    • Acquire RangeX-X lock on resource (ffffffffffff)

Update Scenario

Scenario C: The Row Does Not Exist, But Another Process Inserts First (Update)
There’s a bonus scenario that begins just like the Insert scenario, but the process is blocked waiting for resource (ffffffffffff). Once it finally acquires the lock, the next locks that are taken look the same as the other Update scenario. The locks that are taken are:

    For the IF EXISTS statement:

    • Wait for RangeS-U lock on resource (ffffffffffff)
    • Acquire RangeS-U lock on resource (ffffffffffff)
    • Acquire RangeS-U lock on resource (66467284bfa8)

    For the Update statement:

    • Acquire RangeX-X lock on resource (66467284bfa8)
    • Acquire RangeX-X lock on resource (ffffffffffff)

Update After Wait Scenario

The Deadlock

And when I look at the deadlock graph, I can see that the two update scenarios (Scenario B and C) are fighting:
Scenario B:

  • Acquire RangeX-X lock on resource (66467284bfa8) during UPDATE
  • Blocked RangeX-X lock on resource (ffffffffffff) during UPDATE

Scenario C:

  • Acquire RangeS-U lock on resource (ffffffffffff) during IF EXISTS
  • Blocked RangeS-U lock on resource (66467284bfa8) during IF EXISTS

Why Isn’t This A Problem With Unique Indexes?

To find out, let’s take a look at one last scenario where the index is unique:
Scenario D: The Row Exists (Update on Unique Index)

    For the IF EXISTS statement:

    • Acquire U lock on resource (66467284bfa8)

    For the Update statement:

    • Acquire X lock on resource (66467284bfa8)

Visually, I can compare scenario B with Scenario D:
Update Two Scenarios

When the index is not unique, SQL Server has to take key-range locks on either side of the row to prevent phantom inserts, but it’s not necessary when the values are guaranteed to be unique! And that makes all the difference. When the index is unique, no lock is required on resource (ffffffffffff). There is no longer any potential for a deadlock.

Solution: Define Indexes As Unique When Possible

Even if the values in a column are unique in practice, you’ll help improve concurrency by defining the index as unique. This tip can be generalized to other deadlocks. Next time you’re troubleshooting a deadlock involving range locks, check to see whether the participating indexes are unique.
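Applied to the demo table above, the fix is a one-word change to the inline index definition:

```sql
CREATE TABLE dbo.UpsertTest(
	Id INT NOT NULL,
	IdString VARCHAR(100) NOT NULL,
	INDEX IX_UpsertTest UNIQUE CLUSTERED (Id)
)
```

With the unique index in place, the IF EXISTS check takes a plain U lock on the key instead of key-range locks, as in Scenario D.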

This quirk of requiring unique indexes for the UPSERT pattern is not unique to SQL Server. I notice that PostgreSQL also requires a unique index when using its “ON CONFLICT … DO UPDATE” syntax. This is something they chose to do very deliberately.

Other Things I Tried

This post actually comes from a real problem I was presented. It took a while to reproduce and I tried a few things before I settled on making my index unique.

Lock More During IF EXISTS?
Notice that there is only one range lock taken during the IF EXISTS statement, but there are two range locks needed for the UPDATE statement. Why is only one needed for the IF EXISTS statement? If extra rows get inserted above the row that was read, that doesn’t change the answer to EXISTS. So it’s technically not a phantom read, and SQL Server doesn’t take that lock.

So what if I changed my IF EXISTS to

IF ( 
	SELECT COUNT(*)
	FROM dbo.UpsertTest WITH (UPDLOCK) 
	WHERE Id = @Id 
) > 0

That IF statement now takes two range locks, which is good, but it still gets tripped up by Scenario C and continues to deadlock.

Update Less?
Change the update statement to only update one row using TOP (1)

UPDATE TOP (1) dbo.UpsertTest 
SET IdString = @IdString
WHERE Id = @Id;

During the update statement, this only requires one RangeX-X lock instead of two. And that technique actually works! I was unable to reproduce deadlocks with TOP (1). So it is indeed a candidate solution, but making the index unique is still my preferred method.

February 7, 2022

Five Ways Time Makes Unit Tests Flaky

Filed under: Miscelleaneous SQL,Technical Articles — Michael J. Swart @ 12:21 pm
I explore different sources of test flakiness related to time:

A flaky test is a unit test that sometimes passes and sometimes fails. The causes of these flaky tests are often elusive because they’re not consistently reproducible.

I’ve found that unit tests that deal with dates and times are notorious for being flaky – especially such tests that talk to SQL Server. I want to explore some of the reasons this can happen.

My Setup


All scripts and code samples are available on github.
In the examples I discuss below, I’m using a table defined like this:

CREATE TABLE dbo.MESSAGE_LOG
(
	LogId INT IDENTITY NOT NULL 
		PRIMARY KEY,
	LogMessage NVARCHAR(MAX) NOT NULL,
	LastUpdate DATETIME
		DEFAULT (SYSDATETIME())
)

I also wrote some methods that execute these queries:

AddLogMessage

INSERT dbo.MESSAGE_LOG(LogMessage)
OUTPUT inserted.LogId
VALUES (@Message);

AddLogMessageWithDate
Same as AddLogMessage, but this one allows the application to supply the LastUpdate value

INSERT dbo.MESSAGE_LOG(LogMessage, LastUpdate)
OUTPUT inserted.LogId
VALUES (@Message, @LastUpdate)

UpdateLogMessage

UPDATE dbo.MESSAGE_LOG
SET LogMessage = @Message,
    LastUpdate = SYSDATETIME()
WHERE LogId = @LogId

Sources of Flaky Tests

In no particular order:

Tests Run Too Quick?

The following test checks that UpdateLogMessage updates the LastUpdate column.

[Test]
public void UpdateMessage_DateIsUpdated_1() {
    string message = Guid.NewGuid().ToString();
    int logId = m_data.AddLogMessage( message );
    LogMessageDto? dto = m_data.GetLogMessage( logId );
    DateTime createdDate = dto.LastUpdate;
 
    string newMessage = Guid.NewGuid().ToString();
    m_data.UpdateLogMessage( logId, newMessage );
 
    dto = m_data.GetLogMessage( logId );
    DateTime updatedDate = dto.LastUpdate;
 
    // The following assertion may fail! 
    // updatedDate and createdDate are Equal if the server is fast enough
    Assert.Greater( updatedDate, createdDate ); 
}

The test ran so quickly that updatedDate has the same value as createdDate. This test may fail with this error message:

    Failed UpdateMessage_DateIsUpdated [55 ms]
    Error Message:
    Expected: greater than 2022-02-05 15:18:10.33
    But was: 2022-02-05 15:18:10.33

It’s tempting to get around this by adding a Thread.Sleep call between the insert and the update. I don’t recommend it. That kind of pattern adds up and really lengthens the time it takes to run all tests.

Another solution might involve changing Greater to GreaterOrEqual but then we can’t verify that the value has actually been updated.

Storing dates using a more precise datetime type like DATETIME2 may help avoid more failures, but maybe not all failures.

The Right Way
Ideally we want to set up the test case such that the LastUpdate value is a constant date that’s definitely in the past. I would change this test to use AddLogMessageWithDate instead of AddLogMessage:

    DateTime then = new DateTime(2000, 01, 01);
    int logId = m_data.AddLogMessageWithDate( message, then );

Not All DateTimes Are Created Equal


.Net’s DateTime is different from SQL Server’s DATETIME. Specifically, they have different precisions. DATETIME values in SQL Server are rounded to increments of .000, .003, or .007 seconds. This means that you can’t store a .Net DateTime value in SQL Server and get the same value back. This test demonstrates the problem:

[Test]    
public void StoreDate_ReadItBack() {
    // Store date
    string message = Guid.NewGuid().ToString();
    DateTime now = DateTime.Now;
    int logId = m_data.AddLogMessageWithDate( message, now );
 
    // Read it back
    LogMessageDto? dto = m_data.GetLogMessage( logId );
 
    // The following assertion may fail! 
    // SQL Server's DATETIME has a different precision than .Net's DateTime
    Assert.AreEqual( now, dto.LastUpdate );
}

It may fail with:

    Failed StoreDate_ReadItBack [101 ms]
    Error Message:
    Expected: 2022-02-04 15:11:20.4474577
    But was: 2022-02-04 15:11:20.447
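The rounding behavior is easy to see directly in T-SQL:

```sql
-- DATETIME is only accurate to about 3 milliseconds; values are
-- rounded to increments of .000, .003, or .007 seconds.
DECLARE @d2 DATETIME2(7) = '2022-02-04 15:11:20.4474577';
SELECT @d2                   AS full_precision,
       CAST(@d2 AS DATETIME) AS rounded;  -- 2022-02-04 15:11:20.447
```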

The Right Way
Understanding the resolution limitations of SQL Server’s DateTime is important here. A few solutions come to mind:

  • Maybe use a constant value instead of “now”
  • Modify the database columns to use SQL Server’s DATETIME2 which has a better resolution
  • Just fetch “now” from the database. I like this idea. When I use it again later, I’ll go into more detail.

Time Zones (Of Course)


Running integration tests that talk to a database on a separate server can mean translating server times back and forth between both servers. This leads to another common source of flakiness: time zones. It’s not easy to avoid this kind of issue. Both Azure and AWS try to tackle it by using UTC everywhere.
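SQL Server can report its own local time, UTC time, and time zone offset, which is a handy check when you suspect the database server sits in a different time zone than the test server:

```sql
SELECT SYSDATETIME()       AS server_local_time,
       SYSUTCDATETIME()    AS server_utc_time,
       SYSDATETIMEOFFSET() AS server_time_with_offset;
```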

A flaky test might look like this.

public void UpdateMessage_DateIsUpdated_2() {
    string message = Guid.NewGuid().ToString();
    DateTime now = DateTime.Now;
    int logId = m_data.AddLogMessageWithDate( message, now );
 
    string newMessage = Guid.NewGuid().ToString();
    m_data.UpdateLogMessage( logId, newMessage );
 
    LogMessageDto? dto = m_data.GetLogMessage( logId );
 
    // This next assertion can fail if the database is in a different time zone        
    Assert.GreaterOrEqual( dto.LastUpdate, now );
}

It fails like this:

    Failed UpdateMessage_DateIsUpdated_2 [19 ms]
    Error Message:
    Expected: greater than or equal to 2022-02-05 21:06:54.521464
    But was: 2022-02-05 16:06:54.52

Why is this pattern a source of flaky tests? The success of the test depends on the time zones of the test server and the database server. But even if you control both time zones, this particular example is still vulnerable to clock drift as we’ll see later.

The Right Way
Use a constant time or try fetching “now” from the database.

DateTime now = m_nowProvider.Now();

Here I’m using a method I wrote which simply returns the value of SELECT GETDATE(); from the database.

Clock Drift


Related to time zones is clock drift which again causes errors when you compare dates from two different servers.

No server’s clock is perfect and I like to think of each server’s clock as having its own time zone. Windows tells me that my laptop is set at (UTC -05:00) but with clock drift it’s probably something like (UTC -05:00:01.3). You can work at synchronizing clocks, but unless you’re testing that synchronization, you shouldn’t depend on it in your tests.

Just like in the case with time zones, this test may fail when it compares times from two different clocks:

public void UpdateMessage_DateIsUpdated_3() {
    string message = Guid.NewGuid().ToString();
    DateTime now = DateTime.Now;
    int logId = m_data.AddLogMessageWithDate( message, now );
 
    string newMessage = Guid.NewGuid().ToString();
    m_data.UpdateLogMessage( logId, newMessage );
 
    LogMessageDto? dto = m_data.GetLogMessage( logId );
 
    // This next assertion can fail if the clock on the database server is off by a few seconds
    Assert.GreaterOrEqual( dto.LastUpdate, now );
}

The Right Way
Just like before, use a constant value or try fetching “now” from the database.

DateTime now = m_nowProvider.Now();

This way we’re comparing times from only one server’s clock.

Daylight Savings (Of Course)


This next test is flaky because of daylight savings time. It’s not specific to SQL Server but I thought I’d include it because I have been burned by this before:

[Test]    
public void StoreDateInTheFuture() {
    string message = Guid.NewGuid().ToString();
    DateTime inAMonth = DateTime.Now + TimeSpan.FromDays( 30 );        
 
    // ConvertTime may fail because "a month from now" may be an invalid DateTime (with daylight savings)
    inAMonth = TimeZoneInfo.ConvertTime( inAMonth, TimeZoneInfo.Local );
    m_data.AddLogMessageWithDate( message, inAMonth );
    Assert.Pass();
}

I saw a test just like this one fail at 2:18 AM on February 9th, 2018. Adding 30 days to that date brought us to 2:18 AM on March 11th, right in the middle of the hour that was skipped for daylight savings time, and that’s what caused the error. This test fails with:

    Failed StoreDateInTheFuture [32 ms]
    Error Message:
    System.ArgumentException : The supplied DateTime represents an invalid time. For example, when the clock is adjusted forward, any time in the period that is skipped is invalid. (Parameter ‘dateTime’)

Summary


Flaky tests come from non-deterministic tests. To quote Martin Fowler, “Few things are more non-deterministic than a call to the system clock”. Try to:

  • Write tests with hard coded dates
  • Avoid comparing dates sourced from two different clocks
  • Consider writing a “NowProvider” (which can be mocked!)
  • Be very deliberate about time zones
  • Be very deliberate about data types (both in C# and SQL Server)

January 19, 2022

Measure the Effect of “Cost Threshold for Parallelism”

Filed under: Miscelleaneous SQL,SQL Scripts,Technical Articles — Michael J. Swart @ 10:40 am

The configuration setting cost threshold for parallelism has a default value of 5. As a default value, it’s probably too low and should be raised. But what benefit are we hoping for? And how can we measure it?

The database that I work with is a busy OLTP system with lots of very frequent, very inexpensive queries and so I don’t like to see any query that needs to go parallel.

What I’d like to do is raise the configuration cost threshold to something larger and look at the queries that have gone from multi-threaded to single-threaded. I want to see that these queries become cheaper on average. By cheaper I mean consume less cpu. I expect the average duration of these queries to increase.
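Changing the setting itself is straightforward (50 below is just a placeholder value, not a recommendation):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```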

How do I find these queries? I can look in the cache. The view sys.dm_exec_query_stats can tell me if a query plan is parallel, and I can look into the plans for the estimated cost. In my case, I have relatively few parallel queries (only about 300), which means the XML parsing piece of this query runs pretty quickly.

Measure the Cost of Parallel Queries

WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT 
	sql_text.[text] as sqltext,
	qp.query_plan,
	xml_values.subtree_cost as estimated_query_cost_in_query_bucks,
	qs.last_dop,
	CAST( qs.total_worker_time / (qs.execution_count + 0.0) as money ) as average_query_cpu_in_microseconds,
	qs.total_worker_time,
	qs.execution_count,
	qs.query_hash,
	qs.query_plan_hash,
	qs.plan_handle,
	qs.sql_handle	
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
CROSS APPLY sys.dm_exec_query_plan (qs.plan_handle) qp
CROSS APPLY 
	(
		SELECT SUBSTRING(st.[text],(qs.statement_start_offset + 2) / 2,
		(CASE 
			WHEN qs.statement_end_offset = -1  THEN LEN(CONVERT(NVARCHAR(MAX),st.[text])) * 2
			ELSE qs.statement_end_offset + 2
			END - qs.statement_start_offset) / 2)
	) as sql_text([text])
OUTER APPLY 
	( 
		SELECT 
			n.c.value('@QueryHash', 'NVARCHAR(30)')  as query_hash,
			n.c.value('@StatementSubTreeCost', 'FLOAT')  as subtree_cost
		FROM qp.query_plan.nodes('//StmtSimple') as n(c)
	) xml_values
WHERE qs.last_dop > 1
AND sys.fn_varbintohexstr(qs.query_hash) = xml_values.query_hash
AND execution_count > 10
ORDER BY xml_values.subtree_cost
OPTION (RECOMPILE);

What Next?

Keep track of the queries you see whose estimated subtree cost is below the new threshold you’re considering. Especially keep track of the query_hash and the average_query_cpu_in_microseconds.
Then make the change and compare the average_query_cpu_in_microseconds before and after. Remember to use the query_hash as the key, because the query_plan_hash will have changed.
Here’s the query modified to return the “after” results:

Measure the Cost of Those Queries After Config Change

WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT 
	sql_text.[text] as sqltext,
	qp.query_plan,
	xml_values.subtree_cost as estimated_query_cost_in_query_bucks,
	qs.last_dop,
	CAST( qs.total_worker_time / (qs.execution_count + 0.0) as money ) as average_query_cpu_in_microseconds,
	qs.total_worker_time,
	qs.execution_count,
	qs.query_hash,
	qs.query_plan_hash,
	qs.plan_handle,
	qs.sql_handle	
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
CROSS APPLY sys.dm_exec_query_plan (qs.plan_handle) qp
CROSS APPLY 
	(
		SELECT SUBSTRING(st.[text],(qs.statement_start_offset + 2) / 2,
		(CASE 
			WHEN qs.statement_end_offset = -1  THEN LEN(CONVERT(NVARCHAR(MAX),st.[text])) * 2
			ELSE qs.statement_end_offset + 2
			END - qs.statement_start_offset) / 2)
	) as sql_text([text])
OUTER APPLY 
	( 
		SELECT 
			n.c.value('@QueryHash', 'NVARCHAR(30)')  as query_hash,
			n.c.value('@StatementSubTreeCost', 'FLOAT')  as subtree_cost
		FROM qp.query_plan.nodes('//StmtSimple') as n(c)
	) xml_values
WHERE qs.query_hash in ( /* put the list of query_hash values you saw from before the config change here */ )
AND sys.fn_varbintohexstr(qs.query_hash) = xml_values.query_hash
ORDER BY xml_values.subtree_cost
OPTION (RECOMPILE);
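One way to do the comparison is to save each result set into a table and join on query_hash. A sketch (the table names here are mine, created with SELECT … INTO from the queries above):

```sql
SELECT
	b.query_hash,
	b.average_query_cpu_in_microseconds AS cpu_before,
	a.average_query_cpu_in_microseconds AS cpu_after
FROM dbo.CostThresholdBefore AS b
JOIN dbo.CostThresholdAfter  AS a
	ON a.query_hash = b.query_hash
ORDER BY a.average_query_cpu_in_microseconds - b.average_query_cpu_in_microseconds;
```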

What I Found

October 1, 2021

A System-Maintained LastModifiedDate Column

Filed under: SQLServerPedia Syndication — Michael J. Swart @ 11:13 am

I like rowversion columns. They’re system-maintained and they provide a unique, deterministic way to compare or order changes in a database. But they’re not timestamps (despite the alias).

I also like datetime2 columns named LastModifiedDate. They can indicate the date and time that a row was last modified. But I have to maintain the column myself: I either have to remember to update it on every update, or I have to use something like a trigger with its associated overhead.

But maybe there’s another option.

GENERATED ALWAYS AS ROW START
What if I use the columns that were meant to be used for temporal tables, but leave SYSTEM_VERSIONING off?

CREATE TABLE dbo.Test
(
	Id INT IDENTITY NOT NULL,
	Value VARCHAR(100) NOT NULL,
	LastModifiedDate DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
	SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,	
	PERIOD FOR SYSTEM_TIME (LastModifiedDate, SysEndTime),
	CONSTRAINT PK_Test PRIMARY KEY CLUSTERED (Id)
)

It’s system maintained, it’s an actual datetime and I don’t have to worry about the overhead of triggers.
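A quick demonstration (note that you can’t supply a value for a GENERATED ALWAYS column yourself):

```sql
INSERT dbo.Test (Value) VALUES ('original');

-- The system refreshes LastModifiedDate on every update
UPDATE dbo.Test SET Value = 'modified' WHERE Id = 1;

SELECT Id, Value, LastModifiedDate FROM dbo.Test;
```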
But it’s not a perfect solution:

  • SysEndTime is required but it’s an unused column here. It’s needed to define the period for SYSTEM_TIME and it’s always going to be DATETIME2’s maximum value. That’s why I made that column “hidden”. It still carries an overhead of 8 bytes per row.
  • The value for LastModifiedDate will be the starting time of the transaction that last modified the row. That might lead to confusing behaviors illustrated by this example. Say that:
    • Transaction A starts
    • Transaction B starts
    • Transaction B modifies a row
    • Transaction A modifies the same row

    After all that, the last modified date will indicate the time that transaction A started. In fact, if I try these shenanigans with system versioning turned on, I get this error message when transaction A tries to modify the same row:

      Msg 13535, Level 16, State 0, Line 16
      Data modification failed on system-versioned table ‘MyDb.dbo.Test’ because transaction time was earlier than period start time for affected records.

Everything is tradeoffs. If you can live with the drawbacks of the system-generated last modified date column, then it might be an option worth considering.

August 9, 2021

Find Procedures That Use SELECT *

Filed under: Miscelleaneous SQL,SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 12:00 pm

I have trouble with procedures that use SELECT *. They are often not “Blue-Green safe“. In other words, if a procedure has a query that uses SELECT *, then the underlying tables can’t change without causing some tricky deployment issues. (The same is not true for ad hoc queries from the application.)

I also have a lot of procedures to look at (about 5000) and I’d like to find the procedures that use SELECT *.
I want to maybe ignore SELECT * when selecting from a subquery with a well-defined column list.
I also want to maybe include related queries like OUTPUT inserted.*.

The Plan

  1. So I’m going to make a schema-only copy of the database to work with.
  2. I’m going to add a new dummy-column to every single table.
  3. I’m going to use sys.dm_exec_describe_first_result_set_for_object to look for any of the new columns I created

Any of my new columns that show up were selected with SELECT *.

The Script

use master;
DROP DATABASE IF EXISTS search_for_select_star;
DBCC CLONEDATABASE (the_name_of_the_database_you_want_to_analyze, search_for_select_star);
ALTER DATABASE search_for_select_star SET READ_WRITE;
GO
 
use search_for_select_star;
 
DECLARE @SQL NVARCHAR(MAX);
SELECT 
	@SQL = STRING_AGG(
		CAST(
			'ALTER TABLE ' + 
			QUOTENAME(OBJECT_SCHEMA_NAME(object_id)) + 
			'.' + 
			QUOTENAME(OBJECT_NAME(object_id)) + 
			' ADD NewDummyColumn BIT NULL' AS NVARCHAR(MAX)),
		N';')
FROM 
	sys.tables;
 
exec sp_executesql @SQL;
 
SELECT 
	SCHEMA_NAME(p.schema_id) + '.' + p.name AS procedure_name, 
	r.column_ordinal,
	r.name
FROM 
	sys.procedures p
CROSS APPLY 
	sys.dm_exec_describe_first_result_set_for_object(p.object_id, NULL) r
WHERE 
	r.name = 'NewDummyColumn'
ORDER BY 
	p.schema_id, p.name;
 
use master;
DROP DATABASE IF EXISTS search_for_select_star;

Update

Tom from StraightforwardSQL pointed out a nifty feature that Microsoft has already implemented: sys.dm_sql_referenced_entities reports whether an object uses SELECT *.

You can use it like this:

select distinct SCHEMA_NAME(p.schema_id) + '.' + p.name AS procedure_name
from sys.procedures p
cross apply sys.dm_sql_referenced_entities(
	object_schema_name(object_id) + '.' + object_name(object_id), default) re
where re.is_select_all = 1

Comparing the two, I noticed that my query – the one that uses dm_exec_describe_first_result_set_for_object – has some drawbacks. Maybe the SELECT * isn’t actually included in the first result set, but in some subsequent result set. Or maybe the result set couldn’t be described for various reasons.

On the other hand, I noticed that dm_sql_referenced_entities has a couple of drawbacks itself. It doesn’t seem to capture statements that use `OUTPUT inserted.*`, for example.
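For example, a procedure like this hypothetical one (the table and procedure names are made up) returns every column of the table, yet dm_sql_referenced_entities reports is_select_all = 0 for it:

```sql
CREATE OR ALTER PROCEDURE dbo.s_AddThing
	@Name VARCHAR(100)
AS
INSERT dbo.Things (Name)
OUTPUT inserted.*  -- implicitly selects every column, like SELECT *
VALUES (@Name);
```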

In practice though, I found the query that Tom suggested works a bit better. In the product I work most closely with, dm_sql_referenced_entities only missed 3 procedures that dm_exec_describe_first_result_set_for_object caught. But dm_exec_describe_first_result_set_for_object missed 49 procedures that dm_sql_referenced_entities caught!

August 4, 2021

What To Avoid If You Want To Use MERGE

Filed under: SQLServerPedia Syndication — Michael J. Swart @ 1:06 pm

Aaron Bertrand has a post called Use Caution with SQL Server’s MERGE Statement. It’s a pretty thorough compilation of all the problems and defects associated with the MERGE statement that folks have reported in the past. But it’s been a few years since that post and in the spirit of giving Microsoft the benefit of the doubt, I revisited each of the issues Aaron brought up.

Some of the items can be dismissed based on circumstances. I noticed that:

  • Some of the issues are fixed in recent versions (2016+).
  • Some of the issues that have been marked as won’t fix have been fixed anyway (the repro script associated with the issue no longer fails).
  • Some of the items are complaints about confusing documentation.
  • Some complaints are about issues that are not limited to the MERGE statement (e.g. concurrency and constraint checks).

So what about the rest? In what circumstances might I decide to use a MERGE statement? What do I still need to avoid? In 2019 and later, if I’m using MERGE, I want to avoid:

It’s a shorter list than Aaron’s, but there’s another gotcha. The same way that some of the items get addressed with time, new issues continue to pop up. For example, temporal tables are a relatively new feature that wasn’t around when Aaron first posted his list. And so I also want to avoid:

If MERGE has trouble with both old and new features, then it becomes clear that MERGE is a very complicated beast to implement. It’s not an isolated feature, and that multiplies the number of defects that are possible.

Severity of Issues

There’s a large number of issues with a wide range of severities. Some of the issues are minor annoyances, or easily avoidable. Some of them are serious performance issues that are harder to deal with. But a few of the issues can be worse than that! If I ask SQL Server to UPDATE something, and SQL Server responds with (1 row affected), then it had better have affected that row! If it didn’t, then that’s a whole new level of severity.

That leads me to what I think is the worst, unfixed bug. To be safe from it, avoid MERGEs with:

The defect is called Merge statement Delete does not update indexed view in all cases. It’s still active as of SQL Server 2019 CU9 and it leaves your indexed view inconsistent with the underlying tables. For my case, I can’t predict what indexed views would be created in the future, so I would shorten my advice to say avoid MERGEs with:

Conclusion

When I first started writing this post, I thought the gist was going to be “MERGE isn’t so bad if you just avoid these things”. But it still is bad. Aaron’s list is a little out of date but his advice is not. If Aaron updated that post today, the list of defects and issues with MERGE would be different and just as concerning to me as the day he wrote it.

So, to simplify my advice to you, avoid:

  • MERGE

April 1, 2021

Only UPDATE Rows That Are Changing, But Do It Carefully

Filed under: Miscelleaneous SQL,Technical Articles,Tongue In Cheek — Michael J. Swart @ 12:20 pm

If you update a column to the exact same value as it had before, there’s still work being done.

Quite obediently, SQL Server takes out its eraser, erases the old value, and writes the same value in its place even though nothing changed!

But it is treated as a real change. It has consequences for locking and an impact on the transaction log just as if it were a real change.

So that leads to performance optimizations that look like this:

Original Update Statement:

UPDATE Users
SET    DisplayName = @NewDisplayName
WHERE  Id = @Id;

Only Update When Necessary:

UPDATE Users
SET    DisplayName = @NewDisplayName
WHERE  Id = @Id
AND    DisplayName <> @NewDisplayName;

But Take Care!

Be careful with this kind of optimization. For example, you have to double-check that DisplayName is not a nullable column (do you know why?). There are other things to worry about too, mostly side effects:
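About that nullable column question: if DisplayName were nullable, then for a row where DisplayName is NULL the predicate DisplayName <> @NewDisplayName evaluates to UNKNOWN, and the row would never be updated. A sketch of a NULL-safe version:

```sql
UPDATE Users
SET    DisplayName = @NewDisplayName
WHERE  Id = @Id
AND    (   DisplayName <> @NewDisplayName
        OR (DisplayName IS NULL AND @NewDisplayName IS NOT NULL)
        OR (DisplayName IS NOT NULL AND @NewDisplayName IS NULL) );
```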

Side Effects

This simple update statement can have loads of side effects that can be hard to see. And the trouble with any side effect is that other people can place dependencies on it! It happens all the time. Here is a list of just some of the side effects I can think of; I’m sure it’s not exhaustive.

Triggers: Ugh, I dislike triggers at the best of times, so check out any triggers that might exist on the table. With the original UPDATE statement, the row always appears in the INSERTED and DELETED tables; with the improved version, it does not necessarily appear. You have to see if that matters.

RowCount: What if the original update statement was part of a batch that looked like this:

UPDATE Users
SET    DisplayName = @NewDisplayName
WHERE  Id = @Id;
 
IF (@@ROWCOUNT = 0)
    RAISERROR ('Could not find User to update', 16, 1);

At least this side effect has the benefit of not being hidden. It’s located right beside the code that it depends on.

Rowversion: A rowversion value changes every time a row changes. Such a column would get updated by the original UPDATE statement, but not by the improved version. I can think of a number of reasonable use cases that might depend on a rowversion column. ETLs, for example, that only care about changed data. So this might actually be an improvement for that ETL, but then again, maybe the number of “changed” rows was the important part, and that number changes with the improvement. Speaking of ETLs:

Temporal Tables: Yep, the UPDATE statement is a “change” in the table that gets tracked in temporal history.

Change Data Capture, etc…: I haven’t bothered to set up Change Data Capture to check, but I assume that an UPDATE statement that updates a row to the same value is still considered a change. Right or wrong, the performance improvement changes that assumption.

People Depend On Side Effects

When I see people do this, I start to feel grouchy: Someone’s getting in the way of my performance improvement! But it happens. People depend on side effects like these all the time. I’m sure I do. XKCD pokes fun at this with Workflow where he notices that “Every change breaks someone’s workflow”. And now I’m imagining a case where some knucklehead is using the growth of the transaction log as a metric, like “Wow, business is really booming today, 5GB of transaction log growth and it’s not even noon!”

Although these are silly examples, there are of course more legitimate ones. And so in a well-functioning organization, we can’t unilaterally break other people’s workflows (as much as we might like to).

January 26, 2021

Avoid This Pitfall When Using sp_getapplock

Filed under: Miscelleaneous SQL,SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 11:10 am

Takeaway: For most use cases, using sp_releaseapplock is unnecessary, especially when using @LockOwner = 'Transaction' (which is the default).

The procedure sp_getapplock is a system stored procedure that can be helpful when developing SQL for concurrency. It takes a lock on an imaginary resource and it can be used to avoid race conditions.

But I don’t use sp_getapplock a lot. I almost always depend on SQL Server’s normal locking of resources (tables, indexes, rows, etc.). But I might consider it for complicated situations (like managing sort order in a hierarchy using a table with many different indexes).

In that case, I might use it something like this:

BEGIN TRAN
 
exec sp_getapplock 
    @Resource = @LockResourceName, 
    @LockMode = 'Exclusive', 
    @LockOwner = 'Transaction';
 
/* read stuff (e.g. "is time slot available?") */
/* change stuff (e.g. "make reservation") */
 
exec sp_releaseapplock
    @Resource = @LockResourceName, 
    @LockOwner = 'Transaction';
 
COMMIT

But there’s a problem with this pattern, especially when using RCSI. After sp_releaseapplock is called, but before the COMMIT completes, another process running the same code can read the previous state. In the example above, both processes will think a time slot is available and will try to make the same reservation.

What I really want is to release the applock after the commit. And because I specified the lock owner as 'Transaction', that happens automatically when the transaction ends! So really what I want is this:

BEGIN TRAN
 
exec sp_getapplock 
    @Resource = @LockResourceName, 
    @LockMode = 'Exclusive', 
    @LockOwner = 'Transaction';
 
/* read stuff (e.g. "is time slot available?") */
/* change stuff (e.g. "make reservation") */
 
COMMIT -- all locks are freed after this commit
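For completeness, here's a minimal sketch of the whole pattern wrapped in a procedure. The procedure name, parameter, and resource-name format are hypothetical; the point is that SET XACT_ABORT ON guarantees the transaction (and with it the applock) is rolled back if anything fails, so there's no code path where the lock outlives the work it protects.

```sql
CREATE OR ALTER PROCEDURE dbo.s_MakeReservation  -- hypothetical name
    @TimeSlotId INT
AS
SET XACT_ABORT ON;  -- on any error, roll back and release the applock
BEGIN TRANSACTION

    -- One applock resource per time slot (hypothetical naming scheme)
    DECLARE @LockResourceName NVARCHAR(255) =
        N'Reservation_' + CAST(@TimeSlotId AS NVARCHAR(20));

    EXEC sp_getapplock
        @Resource  = @LockResourceName,
        @LockMode  = 'Exclusive',
        @LockOwner = 'Transaction';

    /* read stuff (e.g. "is time slot available?") */
    /* change stuff (e.g. "make reservation") */

COMMIT  -- the applock is released only here, when the transaction ends
```

Because the lock is released as part of the commit itself, there's no window where a concurrent caller can read the pre-commit state while holding a fresh applock.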

January 18, 2021

Pushing SQL Server Higher

Filed under: Miscelleaneous SQL,SQLServerPedia Syndication — Michael J. Swart @ 9:39 am

I sometimes get frustrated with SQL Server. It’s a tricky and challenging product to work with and some days I feel like I’ve only scratched the surface.

But sometimes I’m really impressed with what it can do. It really is a good (if expensive) product.

With the increase in online learning, last week we broke a record for a single SQL Server. 254,000 batch requests per second! That’s the first time we’ve pushed it above the quarter million mark.

(The screen shot was from a day earlier, but you get the idea)

I’d like to hear from others that have reached these kinds of numbers in prod to talk about their experiences. Find me on twitter (@MJSwart).

Find Out More About Our Experience

I’ll be presenting a session called How High Can We Scale SQL Server? to the Ohio North SQL Server User Group this February 2nd. Feel free to sign up!
I have fond memories of presenting to folks in Cleveland. Why they host their SQL Saturdays in February is still a mystery. And so I’m glad to present virtually this February to keep up my streak.

January 8, 2021

Collect Wait Stats Regularly in Order To Report On Them Over Time

Filed under: Miscelleaneous SQL,SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 12:48 pm

I find wait stats so useful that I’ve got Paul Randal’s SQL Server Wait Statistics (or please tell me where it hurts…) bookmarked and I visit it frequently.

But that gives the total amount of waits for each wait type accumulated since the server was started. And that isn’t ideal when I’m troubleshooting a problem that started recently. No worries, Paul also has another fantastic post, Capturing wait statistics for a period of time.

Taking that idea further, I can collect data all the time and look at it historically, or just for a baseline. Lots of monitoring tools do this already, but here’s what I’ve written:
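(The full scripts live in my repo on github; the following is a minimal sketch of the idea, with hypothetical table and procedure names.) The approach is to snapshot sys.dm_os_wait_stats into a history table on a schedule, say from a SQL Agent job, and report on the deltas between snapshots over time.

```sql
-- Hypothetical history table for periodic wait stats snapshots
CREATE TABLE dbo.WaitStatsHistory (
    CollectionTime   DATETIME2    NOT NULL DEFAULT SYSUTCDATETIME(),
    WaitType         NVARCHAR(60) NOT NULL,
    WaitingTasksCount BIGINT      NOT NULL,
    WaitTimeMs       BIGINT       NOT NULL,
    SignalWaitTimeMs BIGINT       NOT NULL,
    INDEX IX_WaitStatsHistory CLUSTERED (CollectionTime, WaitType)
);
GO

-- Hypothetical collector, scheduled to run every few minutes
CREATE OR ALTER PROCEDURE dbo.s_CollectWaitStats
AS
INSERT dbo.WaitStatsHistory
    (WaitType, WaitingTasksCount, WaitTimeMs, SignalWaitTimeMs)
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE waiting_tasks_count > 0;  -- skip wait types that have never occurred
```

Reporting is then a matter of subtracting each snapshot's cumulative numbers from the previous snapshot's (watching for resets when the server restarts, since the DMV's counters start over from zero).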

Mostly I’m creating these scripts for me. I’ve created versions of these a few times now and, for some reason, I can’t find them each time I need them again!

This stuff can be super useful, especially if you combine it with a visualization tool (like Power BI or even Excel).
For example, here’s a chart I made when we were experiencing the XVB_LIST spinlock issues I wrote about not too long ago. Good visualizations can really tell powerful stories.

A visualization of a spinlock graph

I’m talking here about spins and not waits of course, but the idea is the same, and I’ve included the spinlock monitoring scripts in the same repo on github.

Also a quick reminder wait stats aren’t everything. Don’t neglect monitoring resources as Greg Gonzales pointed out last year.
