This is really an addendum to my last post, where I talked about using SQL Server compression and provided a script to compress a set of tables.
The problem, though, is knowing which tables you should compress.
Which tables are good candidates for compression, and which are bad?
Well, there are some built-in procedures you can run in order to gauge this…
Note that there's a catch here: compression is an Enterprise Edition feature, and the procedures Microsoft provides to estimate compression gains are themselves only available in Enterprise Edition. So if you're on Standard and wondering whether you should upgrade to get the benefits of compression, you can't find out until you're already on Enterprise. Crazy, I know, but it is what it is.
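If you're not sure which edition you're on, you can check before going any further (SERVERPROPERTY works in all editions):

```sql
-- Check which edition this instance is running.
-- EngineEdition returns 3 for Enterprise (and for Developer/Evaluation,
-- which have the same feature set).
SELECT SERVERPROPERTY('Edition')       AS Edition,
       SERVERPROPERTY('EngineEdition') AS EngineEdition;
```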
So what you’ll need to do is download a trial of Enterprise Edition, install it somewhere, restore a backup of your database onto it, and THEN run the proc to see how much you’d benefit from compression.
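The restore itself is the standard drill; here's a minimal sketch, where the database name, logical file names, and paths are all placeholders you'd swap for your own (check the real logical names with RESTORE FILELISTONLY first):

```sql
-- Inspect the logical file names inside the backup.
RESTORE FILELISTONLY FROM DISK = N'C:\Backups\MyDatabase.bak';

-- Restore the backup onto the Enterprise trial instance.
-- The names and paths below are assumptions; adjust to suit.
RESTORE DATABASE MyDatabase
FROM DISK = N'C:\Backups\MyDatabase.bak'
WITH MOVE N'MyDatabase'     TO N'C:\Data\MyDatabase.mdf',
     MOVE N'MyDatabase_log' TO N'C:\Data\MyDatabase_log.ldf',
     RECOVERY;
```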
Anyway, let’s presume you’ve done that… so how do you check the compression?
Well it’s this simple:
EXEC sp_estimated_rowsize_reduction_for_vardecimal 'Sales.SalesOrderDetail'
EXEC sp_estimate_data_compression_savings 'Sales', 'SalesOrderDetail', NULL, NULL, 'page'
The rowsize estimator requires a little maths to work out your savings: it gives you the current average row size and the estimated average row size with vardecimal storage (the precursor to ROW compression), and you do the arithmetic yourself. The second proc is better: it shows you what would happen with PAGE compression for your table and all of its indexes, and it also accepts ROW as a parameter. I'd always use the second of the two procedures because it's much more informative.
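For example, to estimate ROW compression for the same table using named parameters (passing NULL for the index and partition means "all of them"; the result set reports current and requested sizes in KB via the size_with_current_compression_setting and size_with_requested_compression_setting columns):

```sql
-- Estimate ROW compression savings for the table and all its indexes.
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'Sales',
     @object_name      = 'SalesOrderDetail',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'ROW';
```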
Note that reading compressed data uses more CPU than reading uncompressed data from disk, but the overhead is generally small. I’ll do some more analysis in a later post, but for now I would highly recommend using compression if you’re on Enterprise Edition, are not CPU bound as a system, and want your databases and tables to take up less of your buffer pool and disk.
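And for completeness, once the estimates look worthwhile, enabling compression is just a rebuild. A quick sketch (table name is the same AdventureWorks example as above; this is the kind of thing the script from my last post does in bulk):

```sql
-- Enable PAGE compression on the table (heap or clustered index)...
ALTER TABLE Sales.SalesOrderDetail
REBUILD WITH (DATA_COMPRESSION = PAGE);

-- ...and on all of its nonclustered indexes.
ALTER INDEX ALL ON Sales.SalesOrderDetail
REBUILD WITH (DATA_COMPRESSION = PAGE);
```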