Michael J. Swart

May 24, 2017

A Table Of Contents For the Data Industry

Filed under: SQLServerPedia Syndication — Michael J. Swart @ 10:52 am

A review of Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems by Martin Kleppmann, published by O’Reilly.

[Photo: Martin Kleppmann]

In my time exploring the world of data, I often feel intrigued by the unfamiliar. But like most, I’m leery of buzzwords and maybe a little worried that I’m missing out on the thing that we’ll all need to learn to extend our careers beyond the next five years.

Adding to the frustration, I have difficulty evaluating new technologies when the only help I’ve got is a vendor’s brochure, or a blog titled “Why X is way way better than Y”. Balanced, unbiased help is hard to find. And I really value unbiased ideas. Personally, I try not to appear the Microsoft groupie. I value my MVP award, but I also like to stress the “independent” part of that.

I’m also wary of some academic researchers. The theory can sometimes drift too far away from the practical (dba.stackexchange only has 411 results for “normal form” and I suspect many of those are homework questions). Some academics almost seem offended whenever a vendor deviates from relational theory.

That’s why I was so thrilled (and a bit relieved) to discover Martin Kleppmann’s Designing Data-Intensive Applications. Martin is an amazing writer who approached his book with a really balanced style. He’s also a researcher with real-world experience that helps him focus on the practical.

Designing Data-Intensive Applications

First let me just say that the book has a really cool wild boar on the cover. The boar reminds me of Porcellino at the University of Waterloo.

[Image: the wild boar from the book’s cover]

Designing Data-Intensive Applications is a book that covers database systems very comprehensively. Martin covers both relational systems and distributed systems. He covers data models, fault-tolerance strategies and so much more. In fact, he covers so many topics that the whole book seems like a table of contents for our data industry.

Here’s the thing. When I read the parts I know, he’s a hundred percent right and that helps me trust him when he talks about the parts that I don’t know about.

Martin talks a lot about distributed systems, both the benefits and drawbacks, and even though you may have no plans to write a Map-Reduce job, you’ll be equipped to talk about it intelligently with those who do. The same goes for other new systems. For example, after reading Martin’s book, you’ll be able to read the spec sheet on Cosmos DB and feel more comfortable reasoning about its benefits (but that’s a post for another day).

Event Sourcing

Martin then goes on to write about event sourcing. Martin is a fan of event sourcing (as are several of my colleagues). In fact I first learned about event sourcing when a friend sent me Martin’s video Turning the Database Inside Out With Apache Samza.
A ridiculously simplified summary is this: make the transaction log the source of truth and make everything else a materialized view. It allows for some really powerful data solutions and it simplifies some problems you and I have learned to live with. So I wondered whether his chapter on event sourcing would sound like a commercial. Nope, that chapter is still remarkably well balanced.
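To make that inside-out idea concrete, here’s a toy sketch of it in SQL Server terms. The AccountEvents table and its columns are hypothetical, and an indexed view is standing in for the purpose-built stream processors Martin actually describes:

CREATE TABLE dbo.AccountEvents (
    EventId   bigint IDENTITY PRIMARY KEY, -- the append-only log is the source of truth
    AccountId int           NOT NULL,
    Amount    decimal(18,2) NOT NULL       -- positive for deposits, negative for withdrawals
);
GO
-- everything else is just a materialized view derived from that log
CREATE VIEW dbo.AccountBalances WITH SCHEMABINDING AS
SELECT AccountId, SUM(Amount) AS Balance, COUNT_BIG(*) AS EventCount
  FROM dbo.AccountEvents
 GROUP BY AccountId;
GO
CREATE UNIQUE CLUSTERED INDEX IX_AccountBalances ON dbo.AccountBalances (AccountId);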

By the way, when the revolution comes it won’t bother me at all. There’s no longer any such thing as a one-size-fits-all data system, and the fascinating work involves fitting all the pieces together. I’m working in the right place to explore this brave new world and I’m excited to learn the best way to move from here to there.

Some Other Notes I Made

  • This feels like the Red Book (Readings in Database Systems) put together by Michael Stonebraker, but it deals with more kinds of systems. I hope Martin refreshes this book every few years with new editions as the industry changes.
  • Martin suggests that the C in ACID might have been introduced for the purpose of the acronym. I knew it!
  • Martin calls the CAP theorem unhelpful and explains why. He admits the CAP theorem “encouraged engineers to explore a wider design space” but it is probably better left behind.
  • My wife Leanne doesn’t like fantasy books and she won’t read a book that has a map in the front. My friend Paul won’t read a book without one. Martin is very professional, but his style shows and I love it. He’s got a map at the beginning of every chapter.
  • The quotes at the beginning of each chapter are really well chosen. They come from Douglas Adams, Terry Pratchett and others. But my favorite is from Thomas Aquinas: “If the highest aim of a captain were to preserve his ship, he would keep it in port forever.”

Martin writes, “The goal of this book is to help you navigate the diverse and fast-changing landscape of technologies for processing and storing data”. I believe he met that goal.

May 5, 2017

Drawing (again) with SQL Server

Filed under: Miscellaneous SQL, SQLServerPedia Syndication, Tongue In Cheek — Michael J. Swart @ 10:41 am

You have my permission to skip this post. This one’s just for me. I’ve been drawing again with SQL Server’s spatial results tab. The first time I posted something like this was Botticelli’s Birth of Venus in More images from the Spatial Results Tab.

Why Michael??

Because it’s a stupid challenge and I wanted to see what I could do with it. The SSMS spatial tab is a lousy, crummy medium for images. It really is quite terrible, and using SSMS to draw imposes restrictions and rules. It’s fun to see what I can do by staying within that framework. It’s something to push against just because it’s challenging. Others do crosswords; this week, I did this.

Why Now?

I realized a couple of things lately.

The Colors Seem Dull … But Don’t Have to Be
I used to think the spatial results tab uses lousy colors, pastel and dull. But I realized that they’re not dull; they’re just transparent. I can overlap polygons inside a geometry collection to get more solid colors. Here are the top 100 colors without transparency.

[Image: a grid of the top 100 colors without transparency]
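Here’s a minimal sketch of the overlap trick: the same square three times inside one GEOMETRYCOLLECTION, so the semi-transparent fill stacks into a more solid color in the spatial results tab:

-- each copy of the polygon adds another coat of the same transparent paint
SELECT geometry::STGeomFromText('GEOMETRYCOLLECTION (
    POLYGON ((0 0, 10 0, 10 10, 0 10, 0 0)),
    POLYGON ((0 0, 10 0, 10 10, 0 10, 0 0)),
    POLYGON ((0 0, 10 0, 10 10, 0 10, 0 0)))', 0);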

The Colors Seem Arbitrary … But Don’t Have to Be
The palette that SSMS uses is terrible. It’s almost as if the nth color is chosen using something like Color.FromArgb(new Random(n).Next()); Notice that colors 6 and 7 (the beige colors on the left side of the grid) are almost indistinguishable from each other. But I can use that. I can overlap different colors to get the color I need. And I can write a program to pick the best combination of overlaps. Here’s a nice red and blue:

[Image: a drawing in solid red and blue]

But black remains difficult.

Curves Are Supported Now
I can use arc segments called CIRCULARSTRING. SVG files mostly use Bézier curves, which cannot be translated easily to arc segments.
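For example, a single arc segment only needs three points: a start, any point along the arc, and an end:

-- a half circle from (0,0) to (10,0) passing through (5,5)
SELECT geometry::STGeomFromText('CIRCULARSTRING (0 0, 5 5, 10 0)', 0);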
Here’s a logo that I rebuilt using arcs instead of Bézier curves:
[Image: the D2L logo rebuilt with arc segments]

For some reason, if you begin to use CIRCULARSTRING, a transparent color won’t blend with itself (just with other colors).
Also, arc segments are rendered as several small line segments anyway, so for my purposes, it’s not a super feature.

Polly

One last picture/query of a scarlet macaw. Click on it or any other picture in this post to get the query that generated it.
[Image: a scarlet macaw]

April 18, 2017

Find Your Dark Queries

Filed under: Miscellaneous SQL, SQL Scripts, SQLServerPedia Syndication, Technical Articles — Michael J. Swart @ 11:36 am

My friend Josh came up with the term “dark queries”. Just like “dark matter”, dark queries can’t be detected directly, but their effect can still be observed. He’s talking about queries that aren’t stored in cache. If your monitoring solution relies heavily on the statistics of cached queries, then you may not be capturing all the activity on your server.

[Image: Obi-Wan never told you what happened to those queries]

Some of my favorite monitoring solutions rely on cached queries, but some queries will fall out of cache or never make it into cache at all. Those are the dark queries I’m interested in today. Let’s look at query recompiles to shed light on some of the dark queries we may not be measuring.

By the way, if you’re using SQL Server 2016’s Query Store then this post isn’t for you, because Query Store is awesome. Query Store doesn’t rely on the cache. It captures all activity and stores queries separately – truth in advertising!
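For instance, a sketch like this pulls execution counts straight out of Query Store’s catalog views, no plan cache required:

SELECT TOP (10)
       qt.query_sql_text,
       rs.count_executions,
       rs.avg_cpu_time
  FROM sys.query_store_query AS q
  JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
  JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
  JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
 ORDER BY rs.count_executions DESC;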

High Recompile Rate?

If you work with a high-frequency transactional workload like I do, then you can’t afford the CPU required for frequent recompiles. If you have sustained recompiles greater than a few hundred per second, that’s probably too much. It’s easy to check: use Performance Monitor to look at the SQL Re-Compilations/sec counter, which is found under the SQLServer:SQL Statistics object.

[Image: Performance Monitor showing way too many recompiles]
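If you’d rather check from a query than from Performance Monitor, the same counter is exposed in sys.dm_os_performance_counters. The value is cumulative since startup, so sample it twice and take the difference:

SELECT cntr_value
  FROM sys.dm_os_performance_counters
 WHERE counter_name = 'SQL Re-Compilations/sec'
   AND object_name LIKE '%SQL Statistics%';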

Drill Into Recompile Causes
You can drill into this a little further with an extended events session that stores results to a histogram target, like this:

CREATE EVENT SESSION Recompile_Histogram ON SERVER 
  ADD EVENT sqlserver.sql_statement_recompile
  ADD TARGET package0.histogram (
      SET filtering_event_name=N'sqlserver.sql_statement_recompile',
          source=N'recompile_cause',
          source_type=(0) );
 
ALTER EVENT SESSION Recompile_Histogram ON SERVER STATE = START;

Query it like this:

SELECT sv.subclass_name as recompile_cause, 
       shredded.recompile_count
  FROM sys.dm_xe_session_targets AS xet  
  JOIN sys.dm_xe_sessions AS xe  
       ON (xe.address = xet.event_session_address)  
 CROSS APPLY ( SELECT CAST(xet.target_data as xml) ) as target_data_xml ([xml])
 CROSS APPLY target_data_xml.[xml].nodes('/HistogramTarget/Slot') AS nodes (slot_data)
 CROSS APPLY (
         SELECT nodes.slot_data.value('(value)[1]', 'int') AS recompile_cause,
                nodes.slot_data.value('(@count)[1]', 'int') AS recompile_count
       ) as shredded
  JOIN sys.trace_subclass_values AS sv
       ON shredded.recompile_cause = sv.subclass_value
 WHERE xe.name = 'Recompile_Histogram' 
   AND sv.trace_event_id = 37 -- SP:Recompile

To get results like this:
[Image: histogram results showing recompile causes]

Infrequent Recompiles?

Even if you don’t have frequent recompiles, it’s important to know what’s going on in the server. On your OLTP database, I bet you’re reluctant to let those BI folks run their analytical queries whenever they want. They may try to hide their shenanigans with an OPTION(RECOMPILE) hint. This extended events session captures those query recompiles. Be sure to capture only a small sample by letting the session run for a short amount of time. Or use the histogram above first to make sure the frequency is low.

CREATE EVENT SESSION [DarkQueries] ON SERVER
  ADD EVENT sqlserver.sql_statement_recompile(
    ACTION(sqlserver.database_id,sqlserver.sql_text)
    WHERE ([recompile_cause]=(11))) -- Option (RECOMPILE) Requested
  ADD TARGET package0.event_file(SET filename=N'DarkQueries');
ALTER EVENT SESSION [DarkQueries] ON SERVER STATE = START;
GO

Take a look at the dark queries by executing this:

SELECT DarkQueryData.eventDate,
       DB_NAME(DarkQueryData.database_id) as DatabaseName,
       DarkQueryData.object_type,
       COALESCE(DarkQueryData.sql_text, 
                OBJECT_NAME(DarkQueryData.object_id, DarkQueryData.database_id)) command,
       DarkQueryData.recompile_cause
  FROM sys.fn_xe_file_target_read_file ( 'DarkQueries*xel', null, null, null) event_file_value
 CROSS APPLY ( SELECT CAST(event_file_value.[event_data] as xml) ) event_file_value_xml ([xml])
 CROSS APPLY (
         SELECT event_file_value_xml.[xml].value('(event/@timestamp)[1]', 'datetime') as eventDate,
                event_file_value_xml.[xml].value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)') as sql_text,
                event_file_value_xml.[xml].value('(event/data[@name="object_type"]/text)[1]', 'nvarchar(100)') as object_type,
                event_file_value_xml.[xml].value('(event/data[@name="object_id"]/value)[1]', 'bigint') as object_id,
                event_file_value_xml.[xml].value('(event/data[@name="source_database_id"]/value)[1]', 'bigint') as database_id,
                event_file_value_xml.[xml].value('(event/data[@name="recompile_cause"]/text)[1]', 'nvarchar(100)') as recompile_cause
       ) as DarkQueryData
 ORDER BY eventDate DESC

This gives you results that look something like the following contrived example:
[Image: sample results from the DarkQueries session]

March 24, 2017

SQL vs. NoSQL

Filed under: SQLServerPedia Syndication — Michael J. Swart @ 9:56 am

It’s difficult to say anything meaningful about this topic. I can’t think of another topic that requires the phrase “it depends” more. And that’s true for a number of reasons.

  • You can’t compare them without context. You can’t say one is better than the other; that would be like asking which is better, a dump truck or a Ferrari? It depends on what you need to do (go fast or haul dirt?) and whether you have trained people who can use those machines.
  • SQL and NoSQL aren’t really well-defined. They simply aren’t. In fact, I can’t even really tell you which vehicle represents SQL and which represents NoSQL.

But I’m going to give it a shot anyway.

I’m Presenting “SQL vs. NoSQL” March 28, 2017 (Next Tuesday) in Kitchener

I’m excited to be invited to the next KW Big Data Peer2Peer meetup.
I’ll be giving a presentation called “All about SQL vs. NoSQL: A comparison of relational and non-relational data”. If you’re in town I’d love to see you there!

We’ll explore what things typically characterize SQL:

  • Normalization
  • ACID properties
  • Declarative query languages, query optimizers, write-ahead logs
  • What do you get for free in SQL? What things get awkward?

And of course, by contrast, we’ll talk about non-relational data and other data stores that break those rules.

It should be fun! I hope to see you there.

February 22, 2017

A Program to Find INSERT Statements That Don’t Specify Columns

Filed under: Miscellaneous SQL, SQLServerPedia Syndication, Technical Articles — Michael J. Swart @ 8:00 am

I’ve got a program here that finds SQL in procedures with missing column specifications.

Table Columns Are Ordered

Even though they shouldn’t be.

Unlike mathematical relations, SQL tables have ordered columns, but please don’t depend on that order.
In other words, try to treat these two tables as the same, because it’s super-awkward to turn one into the other:

CREATE TABLE PEOPLE
(
    LastName varchar(200),
    FirstName varchar(200)
)
CREATE TABLE PEOPLE
(
    FirstName varchar(200),
    LastName varchar(200)
)

Don’t Omit Column Specification

And don’t forget to specify the columns in your INSERT statement. No excuses.

[Image: All the columns!]

You’re depending on the column ordering if you write INSERT statements like this:

INSERT PEOPLE /* no column spec */
VALUES ('Rob', 'Farley'),
       ('Angela', 'Henry'),
       ('Andy', 'Leonard'),
       ('Richard', 'Douglas'),
       ('David', 'Maxwell'),
       ('Aaron', 'Nelson'),
       ('Paul', 'Randal'),
       ('Buck', 'Woody');

We recently got burned by something like this 🙁
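The fix is cheap. Name the columns and the statement works no matter how the table happens to be defined:

INSERT PEOPLE (FirstName, LastName)
VALUES ('Rob', 'Farley'),
       ('Angela', 'Henry'),
       ('Buck', 'Woody');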

Find Missing Column Specifications

Thomas LaRock recently encouraged DBAs to branch out horizontally. In that spirit, don’t be too afraid of the C#. I’ve got a program here that finds procedures with missing column specifications.

  • If, for some reason, you don’t care about enforcing this rule for temp tables and table variables, then uncomment the line // visitor.TolerateTempTables = true;
  • It uses ScriptDom, which you can get from Microsoft as a NuGet package.
  • The performance is terrible in Visual Studio because ScriptDom uses ANTLR, which uses exceptions for flow control. This leads to lots of “first chance exceptions”, which slow down debugging. Outside of Visual Studio, it’s just fine.
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using Microsoft.SqlServer.TransactSql.ScriptDom;
 
class Program {
 
    static void Main(string[] args) {
 
        SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder {
            DataSource = ".",
            InitialCatalog = "test_database",
            IntegratedSecurity = true
        };
 
        using (SqlConnection conn = new SqlConnection(builder.ToString())) {
            conn.Open();
            SqlCommand command = new SqlCommand(@"
                SELECT OBJECT_SCHEMA_NAME(object_id) [schema], 
                       OBJECT_NAME(object_id)        [procedure], 
                       OBJECT_DEFINITION(object_id)  [sql]
                  FROM sys.procedures 
                 ORDER BY OBJECT_SCHEMA_NAME(object_id), OBJECT_NAME(object_id) ;", conn);
            SqlDataReader reader = command.ExecuteReader();
            while (reader.Read()) {
                string schema = reader["schema"].ToString();
                string procedure = reader["procedure"].ToString();
                string sql = reader["sql"].ToString();
                if (SqlHasInsertWithoutColumnList(sql)) {
                    Console.WriteLine( $"{schema}.{procedure}" );
                }
            }
        }            
    }
 
    static bool SqlHasInsertWithoutColumnList(string SQL) {
        SQLVisitor visitor = new SQLVisitor();
        // visitor.TolerateTempTables = true;
        TSql130Parser parser = new TSql130Parser(true);
        IList<ParseError> errors;
        var fragment = parser.Parse(new System.IO.StringReader(SQL), out errors);
        fragment.Accept(visitor);
        return visitor.HasInsertWithoutColumnSpecification;
    }
}
 
internal class SQLVisitor : TSqlFragmentVisitor {
    public bool HasInsertWithoutColumnSpecification { get; set; }
    public bool TolerateTempTables { get; set; }
 
    public override void ExplicitVisit(InsertStatement node) {
        // An explicit column list means this INSERT is fine
        if (node.InsertSpecification.Columns.Any())
            return;
 
        // INSERT ... DEFAULT VALUES legitimately has no column list
        var source = node.InsertSpecification.InsertSource as ValuesInsertSource;
        if (source != null && source.IsDefaultValues)
            return;
 
        if (TolerateTempTables) {
            // Only flag inserts that target a permanent (non-temp) table
            var target = node.InsertSpecification.Target as NamedTableReference;
            if (target != null && !target.SchemaObject.BaseIdentifier.Value.StartsWith("#")) {
                HasInsertWithoutColumnSpecification = true;
            }
        } else {
            HasInsertWithoutColumnSpecification = true;
        }
    }
}

In my environment, I found twelve examples which I’ll be fixing soon.

February 10, 2017

What Does “monitorLoop” Mean in the Blocked Process Report?

Filed under: SQLServerPedia Syndication — Michael J. Swart @ 9:11 am

Some trivia for your Friday. I recently got a question asking what the attribute “monitorLoop” in the blocked process report means, because it’s not really documented.

[Image: a blocked process report showing the monitorLoop attribute]

You won’t find much explanation of that field in the official documentation, but I believe I know what it means.

The blocked process report is closely tied to deadlock detection and is generated by the same process as the deadlock monitor. If you remember, the deadlock monitor runs frequently looking for deadlocks (which are just blocking chains arranged in a circle). It runs every couple of seconds when there are no deadlocks, and a bit more frequently if it detects any. Each time it runs is called a monitor loop. The monitorLoop is just a number that starts at zero when the server restarts and increments by one each time the monitor runs.

For me, when processing a collection of blocked process reports, it’s a useful way to group them together: if several blocked process reports have the same monitor loop, then they were detected at the same time and can be part of the same blocking chain.
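If you collect blocked process reports with an extended events session, a sketch like this groups them by monitorLoop (the file name 'blocked_process*xel' is an assumption; adjust it to match your own session):

SELECT shredded.monitorLoop,
       COUNT(*) AS blocked_process_reports
  FROM sys.fn_xe_file_target_read_file('blocked_process*xel', null, null, null) AS event_file_value
 CROSS APPLY ( SELECT CAST(event_file_value.event_data AS xml) ) AS event_file_value_xml ([xml])
 CROSS APPLY (
         SELECT event_file_value_xml.[xml].value(
                  '(event/data[@name="blocked_process"]/value/blocked-process-report/@monitorLoop)[1]',
                  'int') AS monitorLoop
       ) AS shredded
 GROUP BY shredded.monitorLoop
 ORDER BY shredded.monitorLoop;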

This attribute wasn’t always there. There are some old versions of SQL Server that don’t provide the monitorLoop attribute. This is a reminder that the schema of the blocked process report can change (has changed and will change) without notice.

January 16, 2017

Case study: Troubleshooting Doomed Transactions

Filed under: Miscellaneous SQL, SQLServerPedia Syndication, Technical Articles — Michael J. Swart @ 8:00 am

Here’s another recap of a problem we recently tackled, where we had to find out why a doomed transaction was still trying to do work. A doomed transaction is one that is about to roll back. In our case, it was caused by a rare set of circumstances.

If you don’t feel like reading ahead, here are the lessons. Don’t program in T-SQL. Avoid triggers. Extended events are awesome.

Getting Error 3930

Our app was getting an error from SQL Server. Error 3930 gives this message:

Msg 3930, Level 16, State 1, Line 14
The current transaction cannot be committed and cannot support operations that
write to the log file. Roll back the transaction.

[Image: the second most pessimistic robot in the universe]

Google pointed me to this Stack Overflow question where Remus Rusanu talks about doomed transactions. You’re not allowed to do work in a catch block if your transaction is doomed. He then points to more information about uncommittable transactions in the TRY/CATCH docs. According to the TRY/CATCH docs, you can get this kind of error when XACT_ABORT is turned ON. This is something we do, and I wrote about why in Don’t Abandon Your Transactions.

So there’s a CATCH block catching some error in a TRY block. Now I wonder what that error is. The SqlException that got raised to the client only had the 3930 error above; it had nothing about the original error. I did have the procedure name though, so I could look there.

Complicated Procedure

So I took a look at the procedure getting called. I saw that it was calling another procedure, and that procedure called other procedures. It was a big complicated mess. The calling tree looked like this:

[Image: Something Complicated, a tangled tree of nested procedure calls]

So the procedure was complicated and it used explicit transactions, but I couldn’t find any TRY/CATCH blocks anywhere! What I needed was a stack trace, but for T-SQL. People don’t talk about T-SQL stack traces very often, probably because they don’t program like this in T-SQL. We can’t get a T-SQL stack trace from the SqlException (the error given to the client), so we have to get it from the server.

More Info Needed From the Server

So luckily, extended events do have T-SQL stack traces. I wanted to look at those stack traces whenever there was an error on the server. My session looked something like this:

CREATE EVENT SESSION [errors] ON SERVER 
ADD EVENT sqlserver.error_reported(
    ACTION(sqlserver.sql_text,sqlserver.tsql_stack))
ADD TARGET package0.event_file(SET filename=N'errors')
GO

Then I waited.

Eventually It Failed Again

Woohoo!
First I looked for the error, the one where error_code = 3930. I did it using Management Studio to view the session’s data.
Then I looked at the error immediately before it, the one where is_intercepted = true. That looks something like this:

[Image: extended events results showing the intercepted error]

The T-SQL stack trace for that error is formatted as XML:

<frames>
  <frame level="1" handle="0x03001C021AD32B677F977801C8A6000001000000000000000000000000000000000000000000000000000000" line="17" offsetStart="634" offsetEnd="808" />
  <frame level="2" handle="0x03001C02A9D0D86D90977801C8A6000000000000000000000000000000000000000000000000000000000000" line="8" offsetStart="342" offsetEnd="582" />
  <frame level="3" handle="0x03001C02202218770CA27801C8A6000001000000000000000000000000000000000000000000000000000000" line="56" offsetStart="2822" offsetEnd="3218" />
</frames>

It doesn’t look too useful at first glance, but you can get the actual SQL query using sys.dm_exec_sql_text:

select * from sys.dm_exec_sql_text(0x03001C021AD32B677F977801C8A6000001000000000000000000000000000000000000000000000000000000)
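The frame also carries offsetStart and offsetEnd. They behave like statement_start_offset and statement_end_offset elsewhere in SQL Server (byte offsets into the Unicode text, so two bytes per character), which means you can carve out the exact statement from the first frame above:

SELECT SUBSTRING(t.text, 634/2 + 1, (808 - 634)/2 + 1) AS statement_text
  FROM sys.dm_exec_sql_text(0x03001C021AD32B677F977801C8A6000001000000000000000000000000000000000000000000000000000000) AS t;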

The Original Error

Here’s what I found. The original error was a PK violation in a procedure using the Just Do It (JDI) pattern. It’s a technique which tackles upsert concurrency problems by inserting a row and catching and suppressing any PK errors that might be thrown.
So that’s where the TRY/CATCH block was.
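Sketched out, JDI looks something like this (the table and parameters are made up for illustration):

BEGIN TRY
    INSERT dbo.SomeTable (Id, Value) VALUES (@Id, @Value);
END TRY
BEGIN CATCH
    -- 2627 is a PK violation: the row already exists, which JDI happily ignores...
    -- ...except that with XACT_ABORT ON, the surrounding transaction is already doomed
    IF ERROR_NUMBER() <> 2627
        THROW;
END CATCH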

Also, the stack trace told me that the procedure was called from a trigger, so my complicated procedure was actually more complicated than I thought. It actually looked like this:

[Image: Something More Complicated, the same call tree plus a hidden trigger]

That’s why I didn’t see it before. I hate the way triggers can hide complexity. They’re sneaky.

The CATCH block catches a real error and the XACT_ABORT setting dooms the transaction. Because I didn’t do anything with the error, the transaction was allowed to continue. It was actually some query in s_ProcM that attempted to do work inside the doomed transaction.

To fix it, we adjusted the text of the query to be concurrency-safe without using TRY/CATCH for flow control. For the extra curious, we used method 4 from Mythbusting: Concurrent Update/Insert Solutions.
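For flavor, here’s one well-known shape of a concurrency-safe upsert that doesn’t lean on TRY/CATCH (check the Mythbusting post itself for the exact method we used; the table here is hypothetical):

BEGIN TRANSACTION;
    -- take key-range locks up front so concurrent callers serialize cleanly
    UPDATE dbo.SomeTable WITH (UPDLOCK, SERIALIZABLE)
       SET Value = @Value
     WHERE Id = @Id;
    IF @@ROWCOUNT = 0
        INSERT dbo.SomeTable (Id, Value) VALUES (@Id, @Value);
COMMIT;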

Lessons

  • Don’t use the Just Do It (JDI) upsert pattern inside triggers
  • In fact don’t use TRY/CATCH for flow control
  • And also don’t use triggers
  • In fact don’t program inside SQL Server if you can help it
  • Oh, and Extended Events include a T-SQL stack trace if God forbid you need it

December 22, 2016

That Time We Fixed Prod Without Admin Credentials

Filed under: Miscellaneous SQL, SQLServerPedia Syndication, Technical Articles — Michael J. Swart @ 9:00 am

Merry Christmas, readers! It’s story time. This is about a problem I encountered a few weeks ago. We were looking at a production site using sp_whoisactive and we noticed a lot of blocking on one particular procedure. I’m going to explain how we tackled it.

In this case, I think it’s interesting that we were able to mitigate the problem without requiring sysadmin access.

The Symptoms

Using sp_whoisactive and other tools, we noticed several symptoms.

  • SQLException timeout errors were reported by the app when calling one procedure in particular.
  • Many sessions were executing that procedure concurrently. Or at least they were attempting to.
  • There was excessive blocking and the lead blocker was running the same procedure.
  • The lead blocker had been running the longest (about 29 seconds).
  • The blocking was caused by processes waiting on Sch-M locks for a table used by that query. (A quick way to spot that kind of pileup is sketched below.)
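For the curious, a peek at sys.dm_tran_locks shows that kind of schema-lock pileup while it’s happening:

SELECT request_session_id, resource_type, request_mode, request_status
  FROM sys.dm_tran_locks
 WHERE request_mode IN ('Sch-M', 'Sch-S')
   AND request_status = 'WAIT';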

Here’s what was going on:
SQL Server was struggling to compile the procedure in time and the application wouldn’t let it catch its breath. The query optimizer was attempting to automatically create statistics it needed for optimizing the query, but after thirty seconds, the application got impatient and cancelled the query.

So the compilation of the procedure was cancelled and this caused two things to happen. First, the creation of the statistics was cancelled. Second, the next session in line was allowed to run. But the problem was that the next session had already spent 28 seconds blocked by the first session and only had two seconds to try to compile a query before getting cancelled itself.

The frequent calls to the procedure meant that nobody had time to compile this query. And we were stuck in an endless cycle of sessions that wanted to compile a procedure, but could never get enough time to do it.

Why was SQL Server taking so long to compile anyway?
After a bunch of digging, we found out that a SQL Server bug was biting us. This bug involved:

  • SQL Server 2014
  • Trace flag 2389 and 2390
  • Filtered Indexes on very large base tables

Kind of a perfect storm of factors that exposed a SQL Server quirk that caused long compilation times, timeouts and pain.

What We Did About It

Well, in this case, I think that trace flags 2389 and 2390 had kind of outlived their usefulness (the cure is worse than the disease and all that). So the main fix for this problem was to get rid of those trace flags. But it would be some time before we could get that change rolled out.

So for the short term, we worked at getting that procedure compiled and into SQL Server’s cache.

We called the procedure ourselves in Management Studio. Our call waited about thirty seconds before it got its turn to run. Then it spent a little while compiling and running the procedure. Presto! The plan was in the cache now! And everything’s all better, right? Nope. Not quite. The timeouts continued.

If you’ve read Erland Sommarskog’s Slow in the Application, Fast in SSMS, you may have guessed what was going on. When we executed the procedure in SSMS, it was using different settings, so the query plan we compiled couldn’t be reused by the application. Remember, all settings (including ARITHABORT) need to match before cached plans can be reused by different sessions. We turned ARITHABORT off in SSMS and called the procedure again.
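A sketch of the idea (the procedure name is made up). Each cached plan records the SET options it was compiled with, and you can confirm them through sys.dm_exec_plan_attributes:

-- mimic the app's connection settings (most client libraries use ARITHABORT OFF,
-- while SSMS defaults to ON), then prime the cache by running the procedure
SET ARITHABORT OFF;
EXEC dbo.s_TheBlockedProcedure;

-- confirm which SET options each cached plan was compiled with
SELECT cp.plan_handle, pa.value AS set_options
  FROM sys.dm_exec_cached_plans AS cp
 CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
 WHERE pa.attribute = 'set_options';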

After a minute, the query completed and all blocking immediately stopped. Whew! The patient was stable.

The whole experience was a pain. And an outage is an outage. Though the count of the snags for the year had increased …

[Image: the Grinch had warded off downtime at least]

December 1, 2016

Learn About SQL Server in the Waterloo Region

Filed under: SQLServerPedia Syndication — Michael J. Swart @ 9:34 am

[Photo: Michael speaking; photo credit: Hope Foley]


I want to promote a couple of free talks coming up here in Kitchener, Ontario.

Drawing Conclusions, December 12, 2016

At D2L, we created a new free monthly meetup called D2L Night School that helps us give back to the local tech community. This is how we describe it on the site:

Since 1999, D2L has been growing in Kitchener while transforming the way the world learns. We don’t know it all, but along the way we’ve learned some things about technology and management that we want to share with the wider community. We’re excited to launch D2L Night School to share what we’ve learned, network with other innovators and collectively build a stronger tech community in KW.

On December 12, I’ll be giving my Drawing Conclusions talk, a fun and sometimes irreverent look at the SQL Server and Data world through illustrations.

Register here. Then come drink beer, socialize and talk tech. It should be fun.

High Concurrency in SQL Server, January 25, 2017

Earlier this year, the local .NET user group CTTNUG asked members what topics they’d like to learn more about. It turns out that SQL Server was high on that list, and so Ken Cenerelli reached out to me. I’ll be talking about SQL Server concurrency, a topic I care a lot about.

Concurrency is a critical part of scalability. Come by and learn how to tackle common enemies of concurrency. Get this right and you’ll have one less problem to worry about. And one less problem lets you get back to more important things.

Register at the meetup site. I’ll see you there in January.

November 22, 2016

I’m Sharing Some Cool Art Prints

Filed under: SQLServerPedia Syndication — Michael J. Swart @ 8:00 am

When I was younger, I put posters on my bedroom wall. My favorite was of the Beatles. I don’t decorate my bedroom with posters any more, but I do decorate my office. And I suspect a lot of you do too.

I can help you with that! I’ve opened an Etsy store where I’m selling art prints.

I’ve started doing drawings (in my own style) at a resolution that is print-worthy. These prints use card stock and are worthy of framing. But if you would rather tack them to a wall with pushpins, that’s fine too 🙂 .

[Image: Michael J Swart @ Etsy]

I’m starting with these two prints:

Her Highnessness

This is Carrie Fisher as Princess Leia, of course. It’s the end of The Empire Strikes Back. She’s thinking about Han and Luke. Leia isn’t in a panic; she’s worried and concerned but not anxious. I drew this as if she’s on Cloud City at sundown.

Fun fact. This scene doesn’t actually exist, because our heroes don’t get to catch their breath until they’re away from the planet, but I like the red.

[Image: the Her Highnessness print of Princess Leia]

The Duke

I’m a relatively new fan of westerns and of John Wayne in particular. But I know many people love him because they grew up watching him with their family.

This is the Duke as Rooster Cogburn in True Grit (1969). In the story, Rooster is a real badass, which is perfect for John Wayne. This picture is from a scene where Rooster opens up about his past. He’s sitting at a campfire talking with Mattie, but instead of being tough on the outside and soft in the middle, we see that Rooster is tough as nails right through: “She said, ‘Goodbye, Reuben, a love for decency does not abide in you.’”

[Image: the Duke print of Rooster Cogburn]

So order some prints to spruce up your office!
