Michael J. Swart

September 8, 2011

Mythbusting: Concurrent Update/Insert Solutions


Update: This is a post on the topic of UPSERT. See my more recent post SQL Server UPSERT Patterns and Antipatterns

I’ve recently come across a large number of methods that people use to avoid concurrency problems when programming an Insert/Update query. Rather than argue with some of them, I set out to test how valid each method is by experiment! Here’s the myth:

Does Method X really perform Insert/Update queries concurrently, accurately and without errors?

… where X is one of a variety of approaches I’m going to take. And just like the Discovery Channel show Mythbusters, I’m going to call each method/procedure/myth either busted, plausible or confirmed based on how effective it is.
[Image: Jamie Hyneman and Adam Savage of Mythbusters]

Actually, feel free to follow along at home (on development servers). Nothing here is really dangerous.

Here’s what my stored procedure should do: look in a particular table for a given id. If a record is found, a counter field on that record is incremented by one. If the given id is not found, a new record is inserted into the table. This is the common UPSERT scenario.

I want to be able to do this in a busy environment and so the stored procedure has to co-operate and play nicely with other concurrent processes.

The Setup

The setup has two parts. The first part is the table and stored procedure. The stored procedure will change for each method, but here’s the setup script that creates the test database and test table:

/* Setup */
if DB_ID('UpsertTestDatabase') IS NULL
    create database UpsertTestDatabase
go
use UpsertTestDatabase
go
 
if OBJECT_ID('mytable') IS NOT NULL
    drop table mytable;
go
 
create table mytable
(
    id int,
    name nchar(100),
    counter int,
    primary key (id),
    unique (name)
);
go

The second thing I need for my setup is an application that can call a stored procedure many times concurrently and asynchronously. That’s not too hard. Here’s the C# program I came up with: Program.cs. It compiles into a command-line program that calls a stored procedure 10,000 times, asynchronously and as quickly as it can. It calls the stored procedure 10 times with a single number before moving on to the next number. This should generate 1 insert and 9 updates for each record.
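
The post measures errors by counting failed calls in the application, but it’s also easy to sanity-check the data from the SQL side afterwards. Here’s a small verification query of my own (it’s not part of the original harness): with 10,000 calls made in groups of 10, a clean run should leave 1,000 rows, each with a counter of exactly 10.

/* My own sanity check, not part of the original test harness */
select count(*) as total_rows,
       sum(case when counter <> 10 then 1 else 0 end) as rows_with_missing_updates
from mytable;
go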

Method 1: Vanilla

The straightforward control stored procedure simply looks like this:

/* First shot */
if OBJECT_ID('s_incrementMytable') IS NOT NULL
	drop procedure s_incrementMytable;
go
 
create procedure s_incrementMytable(@id int)
as
declare @name nchar(100) = cast(@id as nchar(100))
 
begin transaction
if exists (select 1 from mytable where id = @id)
	update mytable set counter = counter + 1 where id = @id;
else
	insert mytable (id, name, counter) values (@id, @name, 1);
commit
go

It works fine in isolation, but when run concurrently using the application, I get primary key violations on 0.42 percent of all stored procedure calls! Not too bad. What’s happening is that two calls for the same new id can both pass the if exists check before either has inserted a row, and then both try to insert it. The good news is that this was my control scenario, and now I’m confident that there is a valid concurrency concern here and that my test application is working well.

Method 2: Decreased Isolation Level

Just slap NOLOCK on everything and all your concurrency problems are solved, right?

if OBJECT_ID('s_incrementMytable') IS NOT NULL
	drop procedure s_incrementMytable;
go
 
create procedure s_incrementMytable(@id int)
as
declare @name nchar(100) = cast(@id as nchar(100))
 
set transaction isolation level read uncommitted
begin transaction
if exists (select 1 from mytable where id = @id)
	update mytable set counter = counter + 1 where id = @id;
else 
	insert mytable (id, name, counter) values (@id, @name, 1);
commit
go

It turns out there are still errors, and they’re no different than in Method 1: primary key violations on 0.37 percent of my stored procedure calls. NOLOCK = NOHELP in this case.

Method 3: Increased Isolation Level

So let’s try increasing the isolation level. The hope is that the more pessimistic the database is, the more locks will be taken and held as they’re needed, preventing these primary key violations:

if OBJECT_ID('s_incrementMytable') IS NOT NULL
	drop procedure s_incrementMytable;
go
 
create procedure s_incrementMytable(@id int)
as
declare @name nchar(100) = cast(@id as nchar(100))
 
set transaction isolation level serializable
begin transaction
if exists (select 1 from mytable where id = @id)
	update mytable set counter = counter + 1 where id = @id;
else 
	insert mytable (id, name, counter) values (@id, @name, 1);
commit
go

Bad news! Something went wrong, and while there are no primary key violations, 82 percent of my queries failed as deadlock victims. A bit of digging tells me that several processes each gain a shared lock during the if exists check and then try to convert it into an exclusive lock for the insert… deadlocks everywhere.
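
As an aside (this diagnostic step isn’t part of the original experiment), one way to see those lock conversions for yourself is to ask SQL Server to write deadlock details to the error log while reproducing Method 3:

/* Optional diagnostics, not part of the original test:
   write deadlock graphs to the SQL Server error log while
   the Method 3 test runs, then turn the trace flag back off. */
dbcc traceon (1222, -1);
go
-- ... run the test application against the Method 3 procedure ...
dbcc traceoff (1222, -1);
go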

Method 4: Increased Isolation + Fine Tuning Locks

Hmm… What does Stack Overflow have to say about high-concurrency upserts? A bit of research on Stackoverflow.com led me to an excellent post by Sam Saffron called Insert or Update Pattern For SQL Server. He describes what I’m trying to do perfectly. The idea is that when the stored procedure first reads from the table, it should grab and hold a lock that is incompatible with other locks of the same type for the duration of the transaction. That way, no shared locks need to be converted into exclusive locks. So I do that with a locking hint:

if OBJECT_ID('s_incrementMytable') IS NOT NULL
	drop procedure s_incrementMytable;
go
 
create procedure s_incrementMytable(@id int)
as
declare @name nchar(100) = cast(@id as nchar(100))
 
set transaction isolation level serializable
begin transaction
if exists (select 1 from mytable with (updlock) where id = @id)
	update mytable set counter = counter + 1 where id = @id;
else 
	insert mytable (id, name, counter) values (@id, @name, 1);
commit
go

Zero errors! Excellent! The world makes sense. It always pays to understand a thing and develop a plan rather than rely on trial and error.

Method 5: Read Committed Snapshot Isolation

I heard somewhere recently that I could turn on Read Committed Snapshot Isolation. It’s an isolation level where readers don’t block writers and writers don’t block readers, thanks to row versioning (I like to think of it as Oracle mode). I heard I could turn this setting on quickly and most concurrency problems would go away. Well, it’s worth a shot:

ALTER DATABASE UpsertTestDatabase
SET ALLOW_SNAPSHOT_ISOLATION ON
 
ALTER DATABASE UpsertTestDatabase
SET READ_COMMITTED_SNAPSHOT ON
go
 
if OBJECT_ID('s_incrementMytable') IS NOT NULL
	drop procedure s_incrementMytable;
go
 
create procedure s_incrementMytable(@id int)
as
declare @name nchar(100) = cast(@id as nchar(100))
 
begin transaction
if exists (select 1 from mytable where id = @id)
	update mytable set counter = counter + 1 where id = @id;
else 
	insert mytable (id, name, counter) values (@id, @name, 1);
commit
go

Ouch! Primary key violations all over the place, even more than the control! 23 percent of the stored procedure calls failed with a primary key violation. And by the way, if I try this with Snapshot Isolation, I not only get PK violations, I get errors reporting “Snapshot isolation transaction aborted due to update conflict”. However, combining Method 4 with snapshot isolation once again gives no errors. Kudos to Method 4!
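
For clarity, the Snapshot Isolation variant mentioned above differs from the Method 5 procedure only in its isolation level line. Here’s my reconstruction of it (a sketch, not the exact code I ran); it assumes ALLOW_SNAPSHOT_ISOLATION is already ON for the database, as set earlier:

if OBJECT_ID('s_incrementMytable') IS NOT NULL
	drop procedure s_incrementMytable;
go
 
/* Reconstruction of the Snapshot Isolation variant; under load it
   produces both primary key violations and update-conflict errors. */
create procedure s_incrementMytable(@id int)
as
declare @name nchar(100) = cast(@id as nchar(100))
 
set transaction isolation level snapshot
begin transaction
if exists (select 1 from mytable where id = @id)
	update mytable set counter = counter + 1 where id = @id;
else
	insert mytable (id, name, counter) values (@id, @name, 1);
commit
go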

Other Methods

Here are some other things to try (though I haven’t tested them):

  • Avoiding concurrency issues by using Service Broker. If it’s feasible, just queue up these messages and apply them one at a time. No fuss.
  • Rewrite the query above as: UPDATE…; IF @@ROWCOUNT = 0 INSERT…; You could try this, but you’ll find it behaves almost identically to Method 1 (a sketch follows this list).
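
For completeness, here’s roughly what that second idea looks like as a procedure. This is my own sketch of the UPDATE-first pattern described above, not something I benchmarked, and without the Method 4 hints it has the same race window as Method 1:

if OBJECT_ID('s_incrementMytable') IS NOT NULL
	drop procedure s_incrementMytable;
go
 
/* Sketch only: the UPDATE-first variant. Without serializable + updlock
   it is still vulnerable to primary key violations under concurrency. */
create procedure s_incrementMytable(@id int)
as
declare @name nchar(100) = cast(@id as nchar(100))
 
begin transaction
update mytable set counter = counter + 1 where id = @id;
if @@ROWCOUNT = 0
	insert mytable (id, name, counter) values (@id, @name, 1);
commit
go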

So How Are We Going To Call This One?

So here are the results we have:

Concurrency Method | Status | Notes
Method 1: Vanilla | Busted | This was our control. The status quo is not going to cut it here.
Method 2: Decreased Isolation Level | Busted | NOLOCK = NOHELP in this case.
Method 3: Increased Isolation Level | Busted | Deadlocks! The strict locking used with the SERIALIZABLE isolation level doesn’t seem to be enough.
Method 4: Increased Isolation + Fine Tuning Locks | Confirmed | By holding the proper lock for the duration of the transaction, I’ve got the holy grail. (Yay for Stack Overflow, Sam Saffron and others.)
Method 5: Read Committed Snapshot Isolation | Busted | While RCSI helps with most concurrency issues, it doesn’t help in this particular case.
Other Methods: Service Broker | Plausible | Avoid the issue and apply changes using a queue. While this would work, the architectural changes are pretty daunting.
Update! (Sept. 9, 2011) Other Methods: MERGE statement | Busted | See the comments section.
Update! (Feb. 23, 2012) Other Methods: MERGE statement + Increased Isolation | Confirmed | With a huge number of comments suggesting this method (my preferred method), I thought I’d include it here to avoid any further confusion. A sketch follows this table.
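
The comments themselves aren’t reproduced in this post, so here’s my own sketch of what the MERGE-with-increased-isolation version typically looks like (an illustration, not the exact code from the comments). The HOLDLOCK hint gives the read portion of the MERGE the same serializable, held-for-the-transaction locking behaviour that made Method 4 work:

if OBJECT_ID('s_incrementMytable') IS NOT NULL
	drop procedure s_incrementMytable;
go
 
/* Sketch of the MERGE + increased isolation pattern (my illustration,
   not taken verbatim from the comments). */
create procedure s_incrementMytable(@id int)
as
declare @name nchar(100) = cast(@id as nchar(100))
 
merge mytable with (holdlock) as target
using (select @id as id, @name as name) as source
on target.id = source.id
when matched then
	update set counter = counter + 1
when not matched then
	insert (id, name, counter) values (source.id, source.name, 1);
go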


Don’t try to apply these conclusions blindly to other situations. In another set of circumstances, who knows what the results would be? But test for yourself. Like a good friend of mine likes to say: “Just try it!”

So that’s the show. If you have a myth you want busted, drop me a line or leave me a message in the comments section. Cheers ’til next time.
