This is vital to the health and safety of your data and databases. Running regular consistency checks will allow you to spot corruption within your database at every level, from individual pages up to allocation structures and the system catalogs. Most databases will suffer some element of corruption at some point or other, so you need to be prepared.
The way SQL Server deals with this is to provide a DBCC command to run against the database which will highlight any corruption it finds. That command is DBCC CHECKDB.
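In its simplest form it takes just the database name (the database name below is a placeholder); NO_INFOMSGS is a commonly used option that suppresses the informational messages so only actual errors are reported:

```sql
-- Basic consistency check; "YourDatabase" is a placeholder name.
-- WITH NO_INFOMSGS suppresses informational output so only errors show.
DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS;
```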
If corruption does get into your system then there are some repair options available within CHECKDB itself (refer to Books Online for these), but depending on the error you could find that the only repair option offered allows data loss… and you don’t want that. So what is the best course of action? Well, it’s nearly always to refer to a recent backup. With that you may be able to restore just the damaged page, or perform a point in time restore right up to the moment at which the corruption occurred.
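As a rough sketch of the page restore route (the database name, backup paths, and the file:page ID are all placeholders - in practice you take the page ID from the CHECKDB error output, and you’d also take a tail-log backup first):

```sql
-- Restore only the corrupt page from the last good full backup
RESTORE DATABASE YourDatabase
    PAGE = '1:153'   -- file_id:page_id reported by DBCC CHECKDB (placeholder)
    FROM DISK = 'D:\Backups\YourDatabase_Full.bak'
    WITH NORECOVERY;

-- Then roll forward the log backups and recover
RESTORE LOG YourDatabase
    FROM DISK = 'D:\Backups\YourDatabase_Log.trn'
    WITH RECOVERY;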
However, problems quickly arise if your database has been corrupt for some time and therefore all your backups contain the same corruption. What do you do then? Well there aren’t many choices at all.
To avoid this situation DBCC CheckDB should be run against every database at least once a week, preferably more for busy systems.
Now, if you’ve been using this command for as long as I have then I know what you’ll be saying… “It’s too heavy on the system, I can’t run it that frequently”.
To be honest, you’re right. DBCC CHECKDB is very heavy on reads (it checks every page in your database) and on memory (those pages need to be put somewhere)… but there are still ways to accommodate it, and they should definitely be considered before disregarding consistency checks altogether.
One option would be to run select parts of the DBCC CHECKDB command rather than the whole thing. That way you can effectively run the entirety of DBCC CHECKDB over the course of a week, but in smaller, less impactful stages.
Internally DBCC CheckDB calls the following processes:
- DBCC CHECKALLOC
- DBCC CHECKTABLE
- DBCC CHECKCATALOG
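So one possible split, purely as an illustration (all database and table names here are placeholders), is to do the allocation and catalog checks on one night and work through your tables on the others:

```sql
-- Sunday: allocation and catalog checks
DBCC CHECKALLOC ('YourDatabase');
DBCC CHECKCATALOG ('YourDatabase');

-- Monday to Saturday: a different subset of tables each night
DBCC CHECKTABLE ('dbo.Orders');
DBCC CHECKTABLE ('dbo.Customers');
```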
Another option would be to run it with the NOINDEX option, which will validate the consistency of your database and data pages but will not intensively check your nonclustered indexes. (Obviously it still checks clustered indexes, as they are your data - but you already knew that.) This option can therefore reduce the overall execution time of the run.
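The syntax is straightforward (database name is again a placeholder):

```sql
-- Skips the deep checks on nonclustered indexes to reduce run time;
-- data pages and clustered indexes are still checked.
DBCC CHECKDB ('YourDatabase', NOINDEX);
```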
However, by far the best option if you want zero impact on your live system is to take your full backup, restore it to a test machine, and run DBCC CHECKDB against that. This way it doesn’t matter if it takes all day, it still won’t impact live and yet you’ll know if your database and backups are consistent.
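On the test box that might look something like this (all names and paths are placeholders; the MOVE clauses just relocate the files for the test server’s drive layout):

```sql
-- On the test server: restore the latest full backup under a new name
RESTORE DATABASE YourDatabase_Check
    FROM DISK = 'D:\Backups\YourDatabase_Full.bak'
    WITH MOVE 'YourDatabase'     TO 'E:\Data\YourDatabase_Check.mdf',
         MOVE 'YourDatabase_log' TO 'E:\Logs\YourDatabase_Check.ldf',
         RECOVERY;

-- Then take as long as you like
DBCC CHECKDB ('YourDatabase_Check') WITH NO_INFOMSGS;
```

A nice side effect of this approach is that it also proves your backups actually restore, which is a test worth doing in its own right.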
Whichever method you choose though, just make sure it’s at the very least once a week… else you could be in for a shock when something goes badly wrong.