This is just a quick post about something I came across the other day that I thought warranted a note and a word of caution.
I was alerted that replication had ground to a halt and was suffering badly after a developer had performed an update. Their logic was: "it was only 100,000 rows, it should be able to cope with that".
Ordinarily I would agree with that completely. Replication should easily be able to handle this, but what the developer hadn't realized were the consequences of the type of update they had run.
Sadly the developer had forgotten that SQL Server stores data in fixed 8 KB pages, and that's what caused the problems.
Each page in the database was full and contained approx. 4 records. Therefore the 100,000 records were taking up around 25,000 pages on disk.
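A quick back-of-the-envelope check of those numbers (a Python sketch; the 8 KB page size is SQL Server's, and the four-rows-per-page figure comes from the scenario above):

```python
# Page math for the scenario above: ~4 rows fit per 8 KB page.
PAGE_SIZE_BYTES = 8 * 1024    # SQL Server's fixed data page size
rows = 100_000
rows_per_page = 4

pages = rows // rows_per_page
print(pages)                                     # 25000 pages on disk
print(pages * PAGE_SIZE_BYTES / (1024 * 1024))   # ~195 MB of data
```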
The issue was that the developer had updated every single record and changed the width of each record in the process. A varchar(50) column had only ever held 40-character values, but the update widened these to 45 characters.
The result of this was that every single page of the table was forced to split. This, as we know from my older post (found here), can have a devastating impact on reads, but it also has a heavy impact on the transaction log.
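To see why every page had to split, here is a toy model of fixed-size pages (a Python sketch; the row sizes and the 50/50 split are illustrative assumptions, not SQL Server's exact row format or split algorithm). Pages are packed completely full with ~2 KB rows, then every row is widened by 5 bytes:

```python
PAGE_CAPACITY = 8096  # usable bytes per 8 KB page (approximate)

def widen_rows(pages, growth):
    """Widen every row in place; split any page that overflows.
    A split moves the second half of a page's rows to a newly
    allocated page (a simplified 50/50 page split)."""
    result, splits = [], 0
    for page in pages:
        page = [size + growth for size in page]
        if sum(page) > PAGE_CAPACITY:
            half = len(page) // 2
            result.extend([page[:half], page[half:]])
            splits += 1
        else:
            result.append(page)
    return result, splits

# 25,000 pages, each packed exactly full with four ~2 KB rows.
pages = [[2024] * 4 for _ in range(25_000)]
pages, splits = widen_rows(pages, growth=5)
print(splits, len(pages))  # 25000 50000 -- every single page split
```

Because every page was already full, even a 5-byte growth per row overflows every one of them, so the table ends up with twice as many half-full pages, and each of those splits generates its own extra transaction log records on top of the update itself.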
Replication relies on the transaction log: the log reader has to trawl it looking for records to pass to the distribution database.
Therefore filling the transaction log with huge swathes of data due to page splits was simply causing the log reader to have a VERY hard time.
The knock-on effect of this was that all replication from that database (which shares a single log reader per published database) started to become sluggish, and complaints arrived.
A quick education session for the devs and they won't be doing that again. But I thought I would mention it here so that you don't hit the same problem too… go find your devs and stop it before it happens.