Slowly Changing Dimension Implementations: Synchronous Triggers

In the previous post I went through what a Slowly Changing Dimension is and why it can be such a problem for people when designing database solutions. Here I'll start to work through a few implementations. Note there is no real "this is the way to go" solution because each has its advantages and pitfalls and needs to be considered carefully by each architect for the design at hand. Hopefully, though, knowing a few solutions means you can try a few and find the one you're most comfortable with.

Synchronous Triggers

This is one that causes a lot of arguments among developers and architects alike because there is a long-standing train of thought that "Triggers are bad and therefore shouldn't be used". Personally I don't think there's anything wrong with Triggers, but that does come with the caveat that they're used properly, implemented well, don't hinder performance, and serve a specific purpose not best suited to any other object.

In the case of Slowly Changing Dimensions there are few options other than a trigger for tracking your changes. For this we'll use the DimCompany table from my previous posts as our example, and I'm going to use the "dbo and history schema" version of the Dimension… you can have fun working out triggers for the other types of implementation on your own:
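Since the original table definitions came from the previous posts, here's a minimal sketch of what a dbo/history pair like this might look like. The column names and types are my assumptions for illustration, not the original definition:

```sql
-- Hypothetical sketch of the dbo/history table pair; column names
-- and types are assumptions made for this example.
CREATE TABLE dbo.DimCompany
(
    CompanyKey   int IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    CompanyName  varchar(100) NOT NULL,
    Region       varchar(50)  NOT NULL,
    ValidFrom    datetime2(0) NOT NULL,
    ValidTo      datetime2(0) NOT NULL
);

CREATE TABLE history.DimCompany
(
    CompanyKey   int NOT NULL,  -- no IDENTITY; we copy the key across
    CompanyName  varchar(100) NOT NULL,
    Region       varchar(50)  NOT NULL,
    ValidFrom    datetime2(0) NOT NULL,
    ValidTo      datetime2(0) NOT NULL
);
```

The key design point is that the history table mirrors the dbo table column-for-column but without the IDENTITY property, so old versions of a row can be copied across unchanged.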


So with the above, we would look to implement a trigger in the following way:
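The trigger itself could be sketched along these lines (an AFTER UPDATE trigger; the trigger and column names are my assumptions, matching the hypothetical tables above):

```sql
-- Sketch of an AFTER UPDATE trigger that copies the pre-update version
-- of each row into the history schema. Names are illustrative.
CREATE TRIGGER dbo.trDimCompany_Update
ON dbo.DimCompany
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- The "deleted" pseudo-table holds each row as it looked BEFORE the update,
    -- so a set-based insert from it handles multi-row updates correctly.
    INSERT INTO history.DimCompany (CompanyKey, CompanyName, Region, ValidFrom, ValidTo)
    SELECT d.CompanyKey,
           d.CompanyName,
           d.Region,
           d.ValidFrom,
           SYSUTCDATETIME()   -- close the old version off at the time of change
    FROM   deleted AS d;
END;
```

Note that the insert is set-based rather than row-by-row: a trigger fires once per statement in SQL Server, not once per row, so reading from `deleted` in a single INSERT…SELECT keeps it correct and cheap for multi-row updates.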


We can now run a simple insert and see what happens (record in the dbo table, nothing in history):
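Something like the following demonstrates it (values are made up for the example). Because the trigger only fires on UPDATE, the new row appears in dbo and the history table stays empty:

```sql
-- A plain insert: the row lands in dbo.DimCompany and the
-- update-only trigger never fires, so history stays empty.
INSERT INTO dbo.DimCompany (CompanyName, Region, ValidFrom, ValidTo)
VALUES ('Adatum', 'North', SYSUTCDATETIME(), '9999-12-31');

SELECT * FROM dbo.DimCompany;      -- one row
SELECT * FROM history.DimCompany;  -- no rows
```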

Now let’s update our record and see what happens (note that normally we would let the system deal with all dates in an audit but I’m hard coding them just to make them match what we saw in my previous demo scripts between dbo and history):
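An update along these lines shows the trigger doing its job (again, the values and the hard-coded date are illustrative stand-ins for the ones in the original demo):

```sql
-- Update the current record; the ValidFrom date is hard-coded here purely
-- so the rows line up with the earlier dbo/history demo scripts.
UPDATE dbo.DimCompany
SET    Region    = 'South',
       ValidFrom = '2021-01-01'
WHERE  CompanyName = 'Adatum';

SELECT * FROM dbo.DimCompany;      -- the new, current version of the row
SELECT * FROM history.DimCompany;  -- the old version, copied in by the trigger
```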

That’s exactly what we wanted.

This is the way in which I see nearly all slowly changing dimensions being coded, and I therefore think it's fine to use Triggers… just make sure that you're not doing anything silly such as bulk inserting 100,000 rows in individual transactions, otherwise you'll definitely notice a performance problem quite quickly. Also, if you have a lot of processes updating records simultaneously, then consider Read Committed Snapshot Isolation so that your History tables don't become a bottleneck.
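For reference, Read Committed Snapshot Isolation is switched on at the database level, something like this (the ROLLBACK IMMEDIATE option is one way to avoid the command waiting on open transactions, but it kills them, so use it with care on a live system):

```sql
-- Enable Read Committed Snapshot Isolation for the current database
-- so readers see row versions instead of blocking behind writers.
ALTER DATABASE CURRENT
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;  -- rolls back in-flight transactions; use with care
```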
