Michael J. Swart

May 15, 2020

Cross Database Transactions on One Server

Filed under: SQLServerPedia Syndication — Michael J. Swart @ 11:03 am

So check out this code. What’s going on here?

begin transaction
 
insert d1.dbo.T1 values (1);
insert d2.dbo.T1 values (1);
 
commit

The transaction is touching two different databases. So it makes sense that the two actions should be atomic and durable together using the one single transaction.

However, databases implement durability and atomicity using their own transaction log. Each transaction log takes care of its own database. So from another point of view, it makes sense that these are two separate transactions.

Which is it? Two transactions or one?

Two Vs. One

It’s One Transaction (Mostly)

Microsoft’s docs are pretty clear (Thanks Mladen Prajdic for pointing me to it). Distributed Transactions (Database Engine) says:

A transaction within a single instance of the Database Engine that spans two or more databases is actually a distributed transaction. The instance manages the distributed transaction internally; to the user, it operates as a local transaction.

I can actually see that happening with this demo script:

use master
if exists (select * from sys.databases where name = 'D1')
begin
    alter database D1 set single_user with rollback immediate;
    drop database D1;
end
go
 
if exists (select * from sys.databases where name = 'D2')
begin
    alter database D2 set single_user with rollback immediate;
    drop database D2;
end
go
 
create database d1;
go
 
create database d2;
go
 
create table d1.dbo.T1 (id int);
create table d2.dbo.T1 (id int);
go
 
use d1;
 
CHECKPOINT;
go
 
begin transaction
 
insert d1.dbo.T1 values (1);
insert d2.dbo.T1 values (1);
 
commit
 
select [Transaction ID], [Transaction Name], Operation, Context, [Description]
from fn_dblog(null, null);

That shows a piece of what’s going on in the transaction log like this:

Transaction log output

If you’re familiar with fn_dblog output (or even if you’re not), notice that when a transaction touches two databases, there are extra entries in the transaction log. D1 has LOP_PREP_XACT and LOP_FORGET_XACT, and D2 only has LOP_PREP_XACT. Grahaeme Ross wrote a lot more about what this means in his article Understanding Cross-Database Transactions in SQL Server.
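
For a closer look at the second half, here’s a small sketch (not part of the original demo) that checks the second database’s log for those same two-phase commit markers:

use d2;
 
-- a sketch only: look for the prepare/forget markers left by the internal
-- two-phase commit in d2's transaction log
select [Transaction ID], [Transaction Name], Operation
from fn_dblog(null, null)
where Operation in (N'LOP_PREP_XACT', N'LOP_FORGET_XACT');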

Well, that’s good. I can count on that, can’t I?

Except When …

You Break Atomicity On Purpose
Well, they are two databases after all. If you want to restore one database to a point in time before the transaction occurred but not the other, I’m not going to stop you.
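
For example, here’s a hedged sketch of what breaking that atomicity on purpose might look like. The backup file paths and the STOPAT time are made up; only D2 gets rewound:

-- a sketch only: restore D2 to a point in time before the transaction,
-- leaving D1 alone (backup files and STOPAT time are hypothetical)
restore database D2 from disk = N'C:\backups\D2_full.bak'
    with norecovery, replace;
restore log D2 from disk = N'C:\backups\D2_log.trn'
    with stopat = '2020-05-15T10:00:00', recovery;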

Availability Groups
But there’s another wrench thrown into the works with Availability Groups. Again, Microsoft’s docs are pretty clear on this (Thanks Brent for pointing me to them). In Transactions – availability groups and database mirroring, they point out that this kind of thing is pretty new:

In SQL Server 2016 SP1 and before, cross-database transactions within the same SQL Server instance are not supported for availability groups.

There’s support in newer versions, but the availability group must have been created with DTC_SUPPORT = PER_DB. There’s no altering the availability group after it’s been created.
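
For illustration, here’s a minimal sketch of that option at creation time. The availability group name, replica names and endpoint URLs are placeholders I made up; the only part that matters for this post is DTC_SUPPORT = PER_DB:

CREATE AVAILABILITY GROUP MyAG
WITH (DTC_SUPPORT = PER_DB)  -- must be specified here, at creation time
FOR DATABASE D1, D2
REPLICA ON
    N'SQL1' WITH (
        ENDPOINT_URL = N'TCP://SQL1.example.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'SQL2' WITH (
        ENDPOINT_URL = N'TCP://SQL2.example.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC);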

It’s also interesting that availability groups’ older brother, database mirroring, is absolutely not supported. Microsoft says so several times and wants you to know that if you try and you mess up, it’s on you:

… any issues arising from the improper use of distributed transactions are not supported.

Long story short:

  • Cross DB Transactions in the same server are supported with Availability Groups in SQL Server 2017 and later
  • Cross DB Transactions are not supported with mirrored databases at all

January 28, 2020

What Tables Are Being Written To The Most?

Filed under: Miscelleaneous SQL,SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 10:38 am

You have excessive WRITELOG waits (or HADR_SYNC_COMMIT waits) and, among other things, you want to understand where all that log writing is coming from.

Microsoft’s advice Diagnosing Transaction Log Performance Issues and Limits of the Log Manager remains a great resource. They tell you to use perfmon to look at the Log Bytes Flushed/sec counter (in the SQL Server:Databases object) to see which database is being written to so much.
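
If you’d rather stay in T-SQL than perfmon, the same counter shows up in a DMV. This is just a sketch; the counter is cumulative, so sample it twice and take the difference to turn it into a rate:

select instance_name as [database name],
       cntr_value as [log bytes flushed (cumulative)]
from sys.dm_os_performance_counters
where counter_name = 'Log Bytes Flushed/sec'
and object_name like '%:Databases%'
order by cntr_value desc;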

After identifying a database you’re curious about, you may want to drill down further. I wrote about this problem earlier in Tackle WRITELOG Waits Using the Transaction Log and Extended Events. The query I wrote for that post combines results of an extended events session with the transaction log in order to identify which procedures are doing the most writing.

But it’s a tricky kind of script and it takes a while to run on busy systems. There’s a faster way to drill into writes if you switch your focus from which queries are writing so much to which tables are being written to so much. Both methods of drilling down can be helpful, but the table approach is faster, doesn’t require an extended events session, and might be enough to point you in the right direction.

Use This Query

use [specify your databasename here]
 
-- get the latest lsn for current DB
declare @xact_seqno binary(10);
declare @xact_seqno_string_begin varchar(50);
exec sp_replincrementlsn @xact_seqno OUTPUT;
set @xact_seqno_string_begin = '0x' + CONVERT(varchar(50), @xact_seqno, 2);
set @xact_seqno_string_begin = stuff(@xact_seqno_string_begin, 11, 0, ':')
set @xact_seqno_string_begin = stuff(@xact_seqno_string_begin, 20, 0, ':');
 
-- wait a few seconds
waitfor delay '00:00:10'
 
-- get the latest lsn for current DB
declare @xact_seqno_string_end varchar(50);
exec sp_replincrementlsn @xact_seqno OUTPUT;
set @xact_seqno_string_end = '0x' + CONVERT(varchar(50), @xact_seqno, 2);
set @xact_seqno_string_end = stuff(@xact_seqno_string_end, 11, 0, ':')
set @xact_seqno_string_end = stuff(@xact_seqno_string_end, 20, 0, ':');
 
WITH [Log] AS
(
  SELECT Category, 
         SUM([Log Record Length]) as [Log Bytes]
  FROM   fn_dblog(@xact_seqno_string_begin, @xact_seqno_string_end)
  CROSS  APPLY (SELECT ISNULL(AllocUnitName, Operation)) AS C(Category)
  GROUP  BY Category
)
SELECT   Category, 
         [Log Bytes],
         100.0 * [Log Bytes] / SUM([Log Bytes]) OVER () AS [%]
FROM     [Log]
ORDER BY [Log Bytes] DESC;

Results look something like this (Your mileage may vary).
A screenshot of the results

Notes

  • Notice that some space in the transaction log is not actually about writing to tables. I’ve grouped them into their own categories and kept them in the results. For example LOP_BEGIN_XACT records information about the beginning of transactions.
  • I’m using sp_replincrementlsn to find the current last LSN. I could have used log_min_lsn from sys.dm_db_log_stats, but that DMV is only available in 2016 SP2 and later (see the sketch after this list).
  • This method measures transaction log activity a little more directly than a similar query that uses sys.dm_db_index_operational_stats.
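
Here’s that alternative as a minimal sketch. It assumes SQL Server 2016 SP2 or later, and the LSN values may need reformatting before fn_dblog will accept them:

-- a sketch of the sys.dm_db_log_stats alternative (2016 SP2+)
select log_min_lsn, log_end_lsn
from sys.dm_db_log_stats(db_id());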

January 20, 2020

Watching SQL Server Stuff From Performance Monitor

Taking a small break from my blogging sabbatical to post one script that I’ve found myself writing from scratch too often.
My hope is that the next time I need this, I’ll look it up here.

The User Settable Counter

Use this to monitor something that’s not already exposed as a performance counter. Like the progress of a custom task or whatever. If you can write a quick query, you can expose it to a counter that can be plotted by Performance Monitor.

Here’s the script (adjust SomeMeasurement and SomeTable to whatever makes sense, and adjust the delay interval if 1 second is too short):

declare @deltaMeasurement int = 0;
declare @totalMeasurement int = 0;
 
while (1=1)
begin
 
  -- how much has SomeMeasurement grown since the last loop iteration?
  select @deltaMeasurement = SomeMeasurement - @totalMeasurement
  from SomeTable;
 
  set @totalMeasurement += @deltaMeasurement;
 
  -- report the delta so Performance Monitor plots a per-interval value
  exec sp_user_counter1 @deltaMeasurement;
  waitfor delay '00:00:01'
end

Monitoring

Now you can monitor “User Counter 1” in the object “SQLServer:User Settable” which will look like this:
Example of monitoring a performance counter using Performance Monitor
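
The same counters are also visible from T-SQL. Here’s a quick sketch that reads them from sys.dm_os_performance_counters if you’d rather check the value without opening Performance Monitor:

-- a sketch: the user settable counters as seen from T-SQL
select object_name, counter_name, instance_name, cntr_value
from sys.dm_os_performance_counters
where object_name like '%User Settable%';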

Don’t forget to stop the running query when you’re done.

April 3, 2019

Finding Tables with Few Dependencies

Filed under: SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 10:00 am

A couple of weeks ago, I wrote about how to find lonely tables in SQL Server. This is a follow-up to that post. I’m now going to talk about small sets of tables that are joined to each other, but to no other tables.

It’s Not Just Me
It seems everyone’s talking about this.

As I was writing this post and code, I noticed an amazing coincidence: the very ideas I was writing about were being discussed on Twitter by Kelly Sommers, Ben Johnson and others.

They discuss Uber’s microservice graph. When visualized, it’s a big mish-mash of dependencies. Kelly points out how hard it is to reason about and Ben points to a small decoupled piece of the system that he wants to work on.

Me too Ben! And I think that’s the value of that visualization. It can demonstrate to others how tangled your system is. It can also identify small components that are not connected to the main mess. When I tie it to my last post and consider this idea in the database world, I can expand my idea of lonely tables to small sets of tables that are never joined to other tables.

I want to find them because these tables are also good candidates for extraction. But how do I find them? I start by visualizing tables and their joins.

Visualizing Table Joins

I started by looking for existing visualizations. I didn’t find exactly what I wanted, so I coded my own visualization (with the help of the d3 library). It’s always fun to code your own physics engine.

Here’s what I found:

A monolith with some smaller isolated satellites

An example that might be good to extract

That ball of mush in the middle is hard to look at, but the smaller disconnected bits aren’t! Just like Ben, I want to work on those smaller pieces too! And just like the lonely tables we looked at last week, these small isolated components are also good candidates for extracting from SQL Server.

Try It Yourself

I’ve made this visualization available here:

https://michaeljswart.com/show_graph/show_graph.html

There’s a query at the end of this post. When you run it, you’ll get pairs of table names, and when you paste them into the Show Graph page, you’ll see a visualization of your database.

(This is all client-side code, I don’t collect any data).

The Query

use [your database name goes here];
 
select
    qs.query_hash,
    qs.plan_handle,
    cast(null as xml) as query_plan
into #myplans
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_plan_attributes(qs.plan_handle) pa
where pa.attribute = 'dbid'
and pa.value = db_id();
 
with duplicate_queries as
(
    select ROW_NUMBER() over (partition by query_hash order by (select 1)) r
    from #myplans
)
delete duplicate_queries
where r > 1;
 
update #myplans
set query_plan = qp.query_plan
from #myplans mp
cross apply sys.dm_exec_query_plan(mp.plan_handle) qp
 
;WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan'),
mycte as
(
    select q.query_hash,
           obj.value('(@Schema)[1]', 'sysname') AS schema_name,
           obj.value('(@Table)[1]', 'sysname') AS table_name
    from #myplans q
    cross apply q.query_plan.nodes('/ShowPlanXML/BatchSequence/Batch/Statements/StmtSimple') as nodes(stmt)
    CROSS APPLY stmt.nodes('.//IndexScan/Object') AS index_object(obj)
)
select query_hash, schema_name, table_name
into #myExecutions
from mycte
where schema_name is not null
and object_id(schema_name + '.' + table_name) in (select object_id from sys.tables)
group by query_hash, schema_name, table_name;
 
select DISTINCT A.table_name as first_table,
       B.table_name as second_table
from #myExecutions A
join #myExecutions B
on A.query_hash = B.query_hash
where A.table_name < B.table_name;

March 12, 2019

Lonely Tables in SQL Server

Filed under: SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 12:00 pm

Takeaway: I provide a script that looks at the procedure cache and reports tables that are never joined to other tables.

Recently, I’ve been working hard to reduce our use of SQL Server as much as possible. In other words, I’ve been doing some spring cleaning. I pick up a table in my hands and I look at it. If it doesn’t spark joy then I drop it.

If only it were that easy. That’s not quite the process I’m using. The specific goals I’m chasing are about reducing cost. I’m moving data to cheaper data stores when it makes sense.

So let’s get tidying. But where do I start?

Getting rid of SQL Server tables should accomplish a couple things. First, it should “move the needle”. If my goal is cost, then the tables I choose to remove should reduce my hardware or licensing costs in a tangible way. The second thing is that dropping the table is achievable without 10 years of effort. So I want to focus on “achievability” for a bit.

Achievable

What’s achievable? I want to identify tables to extract from the database that won’t take years. Large monolithic systems can have a lot of dependencies to unravel.

So what tables in the database have the least dependencies? How do I tell without a trustworthy data model? Is it the ones with the fewest foreign keys (in or out)? Maybe, but foreign keys aren’t always defined properly or they can be missing altogether.
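
For a rough first pass anyway, here’s a sketch that counts declared foreign keys in and out of each table. It only sees foreign keys that actually exist, which is exactly the limitation above:

-- a sketch: count declared foreign keys referencing and referenced by each table
select s.name as [schema name],
       t.name as [table name],
       (select count(*) from sys.foreign_keys fk where fk.parent_object_id = t.object_id) as [fks out],
       (select count(*) from sys.foreign_keys fk where fk.referenced_object_id = t.object_id) as [fks in]
from sys.tables t
join sys.schemas s
     on s.schema_id = t.schema_id
order by [fks out], [fks in], s.name, t.name;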

My thought is that if two tables are joined together in some query, then they’re related or connected in some fashion. So I can look at the procedure cache of a database in production to see where the connections are. And when I know that, I can figure out which tables are not connected.

Lonely Tables

This script gives me the set of tables that aren’t joined to any other table in any query in the cache:

use [your db name here];
 
SELECT qs.query_hash,
       qs.plan_handle,
       cast(null as xml) as query_plan
  INTO #myplans
  FROM sys.dm_exec_query_stats qs
 CROSS APPLY sys.dm_exec_plan_attributes(qs.plan_handle) pa
 WHERE pa.attribute = 'dbid'
   AND pa.value = db_id();
 
WITH duplicate_queries AS
(
  SELECT ROW_NUMBER() OVER (PARTITION BY query_hash ORDER BY (SELECT 1)) n
  FROM #myplans
)
DELETE duplicate_queries
 WHERE n > 1;
 
UPDATE #myplans
   SET query_plan = qp.query_plan
  FROM #myplans mp
 CROSS APPLY sys.dm_exec_query_plan(mp.plan_handle) qp;
 
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan'),
my_cte AS 
(
    SELECT q.query_hash,
           obj.value('(@Schema)[1]', 'sysname') AS [schema_name],
           obj.value('(@Table)[1]', 'sysname') AS table_name
      FROM #myplans q
     CROSS APPLY q.query_plan.nodes('/ShowPlanXML/BatchSequence/Batch/Statements/StmtSimple') as nodes(stmt)
     CROSS APPLY stmt.nodes('.//IndexScan/Object') AS index_object(obj)
)
SELECT query_hash, [schema_name], table_name
  INTO #myExecutions
  FROM my_cte
 WHERE [schema_name] IS NOT NULL
   AND OBJECT_ID([schema_name] + '.' + table_name) IN (SELECT object_id FROM sys.tables)
 GROUP BY query_hash, [schema_name], table_name;
 
WITH multi_table_queries AS
(
    SELECT query_hash
      FROM #myExecutions
     GROUP BY query_hash
    HAVING COUNT(*) > 1
),
lonely_tables as
(
    SELECT [schema_name], table_name
      FROM #myExecutions
    EXCEPT
    SELECT [schema_name], table_name
      FROM #myexecutions WHERE query_hash IN (SELECT query_hash FROM multi_table_queries)
)
SELECT l.*, ps.row_count
  FROM lonely_tables l
  JOIN sys.dm_db_partition_stats ps
       ON OBJECT_ID(l.[schema_name] + '.' + l.table_name) = ps.object_id
 WHERE ps.index_id in (0,1)
 ORDER BY ps.row_count DESC;

Caveats

So many caveats.
There are so many things that take away from the accuracy and utility of this script that I hesitated to even publish it.
Here’s the way I used the script: the list of tables helped me begin an investigation. I didn’t use it to give answers, but to generate questions. For example, taking each table in the list, I asked: “How hard would it be to get rid of table X and what would that save us?” I found it useful to consider those questions. Your mileage, of course, will vary.

October 26, 2018

Uncovering Hidden Complexity

Filed under: Miscelleaneous SQL,SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 12:15 pm

The other day, Erin Stellato asked a question on Twitter about the value of nested SPs. Here’s how I weighed in:

Hidden complexity has given me many problems in the past. SQL Server really really likes things simple and so it’s nice to be able to uncover that complexity. Andy Yun has tackled this problem for nested views with his sp_helpexpandview.

Here’s what I came up with for nested anything. It helps unravel a tree of dependencies based on information found in sys.triggers and sys.dm_sql_referenced_entities. With it, you can see what’s involved when interacting with objects. Here’s what things look like for Sales.SalesOrderDetail in AdventureWorks2014. A lot of the resulting rows can be ignored, but there can be surprises in there too.

A lot in there
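
Before the full query, here’s a one-level warm-up sketch of the DMV that does most of the work. The view name is just a handy AdventureWorks example, not something from the original post:

-- one level only: what does Sales.vSalesPerson reference directly?
select referenced_schema_name, referenced_entity_name, is_updated
from sys.dm_sql_referenced_entities(N'Sales.vSalesPerson', N'OBJECT');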

DECLARE @object_name SYSNAME = 'Sales.SalesOrderDetail';
 
WITH dependencies AS
(
    SELECT @object_name AS [object_name],
           CAST(
             QUOTENAME(OBJECT_SCHEMA_NAME(OBJECT_ID(@object_name))) + '.' + 
             QUOTENAME(OBJECT_NAME(OBJECT_ID(@object_name)))
             as sysname) as [escaped_name],
           [type_desc],
           object_id(@object_name) AS [object_id],
           1 AS is_updated,
           CAST('/' + CAST(object_id(@object_name) % 10000 as VARCHAR(30)) + '/' AS hierarchyid) as tree,
           0 as trigger_parent_id
      FROM sys.objects 
     WHERE object_id = object_id(@object_name)
 
    UNION ALL
 
    SELECT CAST(OBJECT_SCHEMA_NAME(o.[object_id]) + '.' + OBJECT_NAME(o.[object_id]) as sysname),
           CAST(QUOTENAME(OBJECT_SCHEMA_NAME(o.[object_id])) + '.' + QUOTENAME(OBJECT_NAME(o.[object_id])) as sysname),
           o.[type_desc],
           o.[object_id],
           CASE o.[type] when 'U' then re.is_updated else 1 end,
           CAST(d.tree.ToString() + CAST(o.[object_id] % 10000 as VARCHAR(30)) + '/' AS hierarchyid),
           0 as trigger_parent_id
      FROM dependencies d
     CROSS APPLY sys.dm_sql_referenced_entities(d.[escaped_name], default) re
      JOIN sys.objects o
           ON o.object_id = isnull(re.referenced_id, object_id(ISNULL(re.referenced_schema_name,'dbo') + '.' + re.referenced_entity_name))
     WHERE tree.GetLevel() < 10
       AND re.referenced_minor_id = 0
       AND o.[object_id] <> d.trigger_parent_id
       AND CAST(d.tree.ToString() as varchar(1000)) not like '%' + CAST(o.[object_id] % 10000 as varchar(1000)) + '%'
 
     UNION ALL
 
     SELECT CAST(OBJECT_SCHEMA_NAME(t.[object_id]) + '.' + OBJECT_NAME(t.[object_id]) as sysname),
            CAST(QUOTENAME(OBJECT_SCHEMA_NAME(t.[object_id])) + '.' + QUOTENAME(OBJECT_NAME(t.[object_id])) as sysname),
            'SQL_TRIGGER',
            t.[object_id],
            0 AS is_updated,
            CAST(d.tree.ToString() + CAST(t.object_id % 10000 as VARCHAR(30)) + '/' AS hierarchyid),
            t.parent_id as trigger_parent_id
       FROM dependencies d
       JOIN sys.triggers t
            ON d.[object_id] = t.parent_id
      WHERE d.is_updated = 1
        AND tree.GetLevel() < 10
        AND CAST(d.tree.ToString() as varchar(1000)) not like '%' + cast(t.[object_id] % 10000 as varchar(1000)) + '%'
)
SELECT replicate('—', tree.GetLevel() - 1) + ' ' + [object_name], 
       [type_desc] as [type],
       tree.ToString() as dependencies       
  FROM dependencies
 ORDER BY tree

July 9, 2018

The Bare Minimum You Need To Know To Work With Git

Filed under: Technical Articles — Michael J. Swart @ 9:00 am

I don’t like using git for source control. It’s the worst source control system (except for all the others). My biggest beef is that many of the commands are unintuitive.

Look how tricky some of these common use cases can be: Top Voted Stackoverflow Questions tagged Git. The top 3 questions have over ten thousand votes! This shows a mismatch between how people want to use git and how git is designed to be used.

I want to show the set of commands that I use most. These commands cover 95% of my use of git.
stupid git

Initial Setup

One-time tasks include downloading git and signing up for github or bitbucket. My team uses github, but yours might use gitlab, bitbucket or something else.

Here’s my typical workflow. Say I want to work on some files in a project on a remote server:

Clone a Repository

My first step is to find the repository for the project. Assuming I’m not starting a project from scratch, I find and copy the location of the repository from a site like github or bitbucket. So the clone command looks like this:

git clone https://github.com/SomeProject/SomeRepo.git

This downloads all the files so I have my own copy to work with.

Create a Branch

Next I create a branch. Branches are “alternate timelines” for a repository. The real timeline or branch is called master. One branch can be checked out at a time, so after I create a branch, I check out that branch. In the diagram, I’ve indicated the checkout branch in bold. I like to immediately push that branch back to the remote server. I can always refer to the remote server as “origin”. All this is done with these commands:

git branch myBranch
git checkout myBranch 
git push -u origin myBranch

Change Stuff

Now it’s time to make changes. This has nothing to do with git but it’s part of my workflow. In my example here I’m adding a file B.txt.

Stage Changes

These changes aren’t part of the branch yet though! If I want them to be part of the branch, I have to commit my changes. That’s done in two parts. The first part is to specify the changes I want to commit. That’s called staging and it’s done with git add. I almost always want to commit everything, so the command becomes:

git add *

Commit Changes

The second part is to actually commit the files to the branch with a commit message:

git commit -m "my commit message"

Push Changes

I’m happy with the changes I made to my branch so I want to share them with the rest of the world starting with the remote server.

git push origin myBranch

Create a Pull Request and Merge to master

In fact I’m so happy with these changes, I want to include them in master, the real timeline. But not so fast! This is where collaboration and teamwork become important. I create a pull request and then if I get the approval of my teammates, I can merge.

It sounds like a chore, but luckily I don’t have to memorize any git commands for this step because of sites like github or bitbucket. They have a really nice web site and UI where teams can discuss changes before approving them. Once the changes are approved and merged, master now has the changes.

Once it’s merged, just to complete the circle, I can pull the results of the merge back to my own computer with a pull:

git checkout master
git pull

Other Use Cases

Where Am I?
To find out where I am in my workflow, I like to use:

git status

This one command can tell me what branch I’m on, whether there are changes that can be pushed or pulled, which files have changed, and which changes are staged.

Merge Conflicts
With small frequent changes, merge conflicts become rare. But they still happen. Merge conflicts are a pain in the neck and to this day I usually end up googling “resolving git merge conflicts”.

Can’t this Be Easier?

There are so many programs and utilities available whose only purpose is to make this stuff easier. But they don’t. They make some steps easy, and some steps impossible. Whenever I really screw things up, I delete everything and start from scratch at the cloning step. I find I have to do that more often when I use a tool that was supposed to make my life easier.

One Exception
The only exception to this rule is Visual Studio Code. It’s a real treat to use. I love it.

Maybe you like the command line. Maybe you have a favorite “git-helper” application. No matter how you use git, in every case, you still have to understand the workflow you’re using and that’s what I’ve tried to describe here.

Where To Next

If you want to really get good at this stuff, I recently learned of a great online resource (thanks Cressa!) at https://learngitbranching.js.org/. It’s an interactive site that teaches more about branching. You will very quickly learn more than the bare minimum required. I recommend it.

July 3, 2018

Shifting Gears in 2018

Filed under: SQLServerPedia Syndication — Michael J. Swart @ 9:00 am

I wanted you to know about some changes coming to this blog. I’m shifting the focus from SQL Server to other technologies. Specifically, I’m going to explore and begin writing more about modern software development including things that have been labeled devops or site reliability engineering.

Shifting Gears

I’ve been looking for a new challenge for a while, and I have an opportunity to find one by following the direction my company set a few years ago. My company is embracing the public cloud for its price, its flexibility and its promise of scalability. Which public cloud? As awesome as Azure is, we’re going all-in on AWS.

For me, this means new lessons to learn and new things to write about.

My Audience

My target audience for the new topics includes:

  • People searching Google who want answers to the same questions I recently worked through.
  • The developer who is super-familiar with the Microsoft Stack (aka yours truly) but who wants to branch out into a new stack.

I hope that still includes you.

Blogging as a Student

I have no problems blogging as a learner. Just look at Kenneth Fisher (@sqlstudent144) and Pinal Dave (@SqlAuthority). They both began their blogs from the point of view of a learner. That word “student” is even there in Kenneth’s handle. And Pinal’s site is about his “journey to authority”, another colorful expression for learning. And they’ve done it. They’ve both successfully gained a reputation as an authority in their field.

My Topics

I’ve often straddled the line between a Developer and a DBA. I know a little bit about what it takes to keep SQL Server happy and healthy. I look forward to expanding my “Site Reliability Engineering” skills into new areas.

So for the next few weeks, I’ll start by blogging about the tools I use and what it takes to get started on a simple project.

It’s About the Arrows
Software architecture is often over-simplified as drawing boxes and arrows describing things (the boxes) and how they’re organized or how they communicate with each other (the arrows).

One thing I’ve noticed is that programs used to be the hard part: the classes, the objects, the algorithms. Now it seems to me that the arrows are the hard part. Deployment, security, automation and all that network stuff can’t be deferred to another team.

The Arrows Are The Hard Part

I may specialize in something in the future, but for now I have no shortage of topics. I’ve been tracking my Google search history. Here’s what that looks like for the past two weeks:

  • youtube getting started terraform aws circleci
  • tf examples getting started
  • terraform tf examples getting started
  • terraform deploy to aws
  • specify descending in primary key
  • codepipeline
  • aws code deploy
  • dynamodb ttl attribute
  • lambda to dynamodb tutorial
  • javascript add 4 months
  • add days to time javascript
  • javascript get guid
  • Handler ‘handler’ missing on module ‘index’
  • TypeError: Date.now is not a constructor
  • Date.now is not a constructor
  • unix timestamp 1 day
  • dynamodb set ttl example js
  • dynamodb DocumentClient
  • specify region in document client
  • aws.config.update region
  • lodash
  • visual studio code
  • visual studio code marketplace tf
  • visual studio code marketplace tf terraform
  • terraform dynamodb attribute type
  • terraform lambda example
  • terraform output arn
  • create role terraform
  • iam_role_policy
  • best way to terraform a role
  • script out role for terraform
  • terraform dynamodb example
  • invoke terraform in aws
  • how to test terraform
  • terraform download
  • aws command line
  • how to create a role using terraform
  • terraform grant a role access
  • deploy a role with terraform
  • create role assume role
  • terraform role trusted entities
  • push a new repository to github
  • provider config ‘aws’: unknown variable referenced ‘aws_secret_key
  • terraform aws credentials
  • aws_profile environment variable
  • set AWS_PROFILE
  • specify aws_access_key terraform
  • executable bash script
  • executable bash script windows
  • provider.aws: no suitable version installed
  • no suitable version installed
  • run terraform in circleci
  • run syntax circleci
  • run step syntax circleci
  • specify circleci environement variables
  • set password environment variable circleci
  • terraform “ResourceInUseException: Table already exists: broken_links”
  • terraform “ResourceInUseException: Table already exists:”
  • image hashicorp terraform
  • terraform EntityAlreadyExists
  • terraform backend dynamodb
  • canonical userid s3
  • deploy a lambda function terraform
  • terraform lambda runtime
  • resource “aws_lambda_function”
  • terraform archive_file
  • resource depends on
  • resource depends_on terraform
  • DiffTransformer
  • DiffTransformer trace
  • terraform archive_file example
  • depends_on terraform module
  • path.module terraform
  • windows path vs linux path terraform path.module
  • circleci zip directory
  • zip a file in shell
  • circleci zip
  • zip a file in circleci
  • working_directory circleci
  • zip directory for lambda
  • how to zip a file circleci
  • circleci apt-get zip
  • terraform export environment variables
  • run a shell srcript in terraform
  • steps in circleci
  • circleci artifact directory
  • build-artifacts circleci
  • store_artifacts
  • store variable in circleci
  • create file in terraform
  • output_base64sha256
  • concatenate in terraform
  • Unexpected value for InstanceType
  • Unexpected value for InstanceType terraform
  • terraform apply force
  • use artifacts terraform
  • get artifacts terraform
  • get artifacts circleci
  • use circleci artifacts
  • terraform file contents
  • terraform environment variables
  • use environment variables in terraform
  • var.Circle_artifacts
  • using environment variables in terraform
  • TF_VAR_CIRCLE_ARTIFACTS
  • set variables when calling terraform
  • use environment variables in circleci
  • multiline circleci
  • wrap line circleci
  • terraform pass variable to module
  • echo in circleci
  • persist to workspace circleci
  • attach_workspace persist_to_workspace
  • persist_to_workspace
  • debugging circleci
  • git merge all changes into one commit
  • dynamodb materialized views
  • query dynamodb from js
  • query dynamodb from
  • aws_lambda_function filename
  • AWS Lambda Developer Guide
  • bash zip command not found
  • linux create zip file
  • upsert dynamodb
  • updateexpression example js
  • dynamodb docclient javascript update expression
  • use UpdateExpression to increment
  • The provided key element does not match the schema
  • dynamodb multiple key
  • javascript multiline string
  • javascript md5 hash
  • hash a string javascript
  • md5
  • simple hash string javascript
  • hash a string javascript
  • md5 bit length
Every entry in that list that doesn’t have an obvious answer is a blog post idea.

Giving up SQL Server?

No, not at all. I suspect that most of my day job will still be focused on SQL Server technologies. When I come across something super-interesting, no matter what, I’ll write about it.

Networking

I’m excited. If you find yourself at AWS re:Invent this fall, then let me know. Maybe we can meet for coffee.

June 15, 2018

ORDER BY newid() is an Unbiased Way To Randomize

Filed under: Miscelleaneous SQL,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 9:47 am

Mike Bostock is a data-visualization specialist. And it really shows in his blog. Every article is really well designed (which makes sense… many of the articles are about design).

One of his articles, Visualizing Algorithms, has some thoughts on shuffling at https://bost.ocks.org/mike/algorithms/#shuffling.

He says that sorting using a random comparator is a rotten way to shuffle things. Not only is it inefficient, but the resulting shuffle is really, really biased. He goes on to visualize that bias (again, I really encourage you to go see his stuff).

Ordering by random reminded me of the common technique in SQL Server of ORDER BY newid(). So I wondered whether an obvious bias was present there. I shuffled 100 items thousands of times and recreated the visualization of bias in a heat map (just like Mike did).
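
My test harness isn’t in this post, but here’s a minimal sketch of the idea: shuffle the numbers 0 through 99 once and record where each one landed. Repeat that thousands of times and count how often each (before, after) pair shows up, and you have the data for the heat map.

-- a sketch: one shuffle of 100 items, recording before and after positions
with items as
(
    select top (100) row_number() over (order by (select null)) - 1 as position_before
    from sys.all_objects
)
select position_before,
       row_number() over (order by newid()) - 1 as position_after
from items;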

Here is the heatmap. If you can, try to identify any patterns.

Order By NewID Bias

Where:

  • columns are the position before the shuffle,
  • rows are the position after the shuffle,
  • green is a positive bias and
  • red is a negative bias.

I don’t think there is any bias here. The problem that introduces bias in Mike Bostock’s example is that the “random comparator” he defined does not obey transitivity. In his words: “A comparator must obey transitivity: if a > b and b > c, then a > c.”
But in SQL Server, because each row is assigned a newid(), ORDER BY newid() doesn’t have that flaw and so it doesn’t have that bias.

But Be Careful

Although the method is unbiased, ORDER BY newid() is still inefficient. It uses a sort, which is an inefficient way of shuffling. There are alternative shuffle algorithms that are more efficient.
ORDER BY newid() is good for quick and dirty purposes. But if you value performance, shuffle in the app.

April 6, 2018

Are There Any System Generated Constraint Names Lurking In Your Database?

Names for constraints are optional, meaning that if you don’t provide a name when a constraint is created, or cannot afford one, one will be appointed to you by the system.
These system-provided names are messy things and I don’t think I have to discourage you from using them. Kenneth Fisher has already done that in Constraint names, Say NO to the default.

But how do you know whether you have any?

Here’s How You Check

SELECT SCHEMA_NAME(schema_id) AS [schema name],
       OBJECT_NAME(object_id) AS [system generated object name],
       OBJECT_NAME(parent_object_id) AS [parent object name],
       type_desc AS [object type]
  FROM sys.objects
 WHERE OBJECT_NAME(object_id) LIKE
         type + '\_\_' + LEFT(OBJECT_NAME(parent_object_id),8) + '\_\_%' ESCAPE '\'
       OR
       OBJECT_NAME(object_id) LIKE
          REPLACE(sys.fn_varbintohexstr(CAST(object_id AS VARBINARY(MAX))), '0x', '%\_\_') ESCAPE '\'

This will find all your messy system-named constraints.
For example, a table defined like this:

create table MY_TABLE
(
  id INT IDENTITY PRIMARY KEY,
  id2 INT REFERENCES MY_TABLE(id) DEFAULT 0,
  UNIQUE(id),
  CHECK (id >= 0)
)

Will give results like this:

Happy hunting.

Update: April 9, 2018
We can get this info from the system views a little more easily, as Rob Volk pointed out. I’ve also included the parent object’s type.

SELECT OBJECT_SCHEMA_NAME(id) AS [schema name],
       OBJECT_NAME(constid) AS [system generated constraint name],
       (select type_desc from sys.objects where object_id = constid) as [constraint type],
       OBJECT_NAME(id) AS [parent object name],
       (select type_desc from sys.objects where object_id = id) as [parent object type]
  FROM sys.sysconstraints
 WHERE status & 0x20000 > 0
   AND OBJECT_NAME(id) NOT IN (N'__RefactorLog', N'sysdiagrams')
 ORDER BY [parent object type], [parent object name], [system generated constraint name];