I remember when SQL Server 7 was released. It was such a big deal that you could easily grow and shrink database files, and indeed, you could have databases auto-grow and auto-shrink. That seemed pretty appealing if you wanted a low-maintenance solution.
If only life was so simple.
One of the real challenges of auto-grow and auto-shrink is that you could have a database constantly changing size up and down, and that's quite nasty, because each time you grow, you're often using a different part of your file system.
We used to joke that auto-shrink should be renamed to auto-fragment my file system.
My take on the "auto" options is that auto-grow should be enabled, but you should proactively manage the space so that it never actually happens. And auto-shrink should never be enabled.
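As a sketch of what that proactive management can look like (the database and file names here are placeholders, not from the post): check how much free space each file has, and grow files yourself during a quiet period so auto-grow never needs to fire.

```sql
-- Check size and used space per file in the current database
SELECT name,
       size * 8.0 / 1024 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8.0 / 1024 AS used_mb
FROM sys.database_files;

-- Pre-grow a data file during a quiet period (placeholder names and size)
ALTER DATABASE SalesDB
    MODIFY FILE (NAME = SalesData1, SIZE = 200GB);
```

Doing the growth on your own schedule also means you control the increment, rather than relying on whatever auto-grow setting happens to be configured.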
But what if you really need to shrink?
In general, free space in a database is your friend.
Sometimes though, there is a real desire to shrink. I might need to reduce the size of files before backing them up, because even though the backup won't be any different in size, the target location where I'm going to restore the backup mightn't have enough space.
I might also have just removed a large amount of data that's never coming back.
This is when people turn to using the options to shrink. Either they ask to shrink the whole database, or perhaps just some files. And sometimes, they'll be lucky and it will work as expected.
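For context, the two usual forms look something like this (the database and file names are placeholders):

```sql
-- Shrink every file in the database, aiming to leave 10 percent free space
DBCC SHRINKDATABASE (SalesDB, 10);

-- Shrink a single file to a target size, specified in MB
USE SalesDB;
DBCC SHRINKFILE (SalesData1, 1536000);
```

SHRINKFILE gives you more control because you can target one file at a time, but both commands move pages around inside the file, which is where the trouble described below comes from.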
Does shrink work as expected?
However, what I see time and time again is people struggling to get shrink to work for them. I was at a site a year or so back where they had a 2 TB file with 1.3 TB used, and they wanted to get back to about 1.5 TB. They started a shrink operation, and after about 2 hours, it was 63% complete.
They were happy.
After another week though, it was 61.5% complete.
They were distraught.
Shrink can be glacial performance-wise. Worse, I've seen applications that don't normally suffer from deadlocks start hitting them during the shrink operation.
And even if a shrink does succeed, it can seriously mess up the fragmentation within your database, so you really need to consider rebuilding all the objects within it anyway.
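If you do end up rebuilding after a shrink, a sketch of checking and fixing fragmentation might look like this (the table name is a placeholder; note the catch that rebuilds allocate new pages, which can grow the file you just shrank):

```sql
-- Find badly fragmented indexes in the current database
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30;

-- Rebuild the indexes on a badly fragmented table
ALTER INDEX ALL ON dbo.SomeLargeTable REBUILD;
```

That shrink-then-rebuild cycle is one reason shrinking rarely leaves you where you hoped: the rebuild needs contiguous free space, and can undo part of the shrink.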
So if shrink might not do what's needed, what on earth do you do?
I know that if you're reading this because you have a shrinking problem right now, you probably don't want to hear this, but the best option by far is to create a new database and copy all the data in. For the customer I mentioned above, the whole process took a couple of hours and it was over. Yes, there was an interruption, but it was relatively short.
Another option, if you're making appropriate use of filegroups and files, is to create a new filegroup and rebuild your objects across into it. When the original filegroup is empty, remove it. (That assumes you don't have everything sitting in a single PRIMARY filegroup. That would make it much messier, so see the first option and copy the data.)
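A sketch of that filegroup approach, with placeholder database, file, and index names:

```sql
-- Add a new filegroup with a file sized for the data you're keeping
ALTER DATABASE SalesDB ADD FILEGROUP NewFG;
ALTER DATABASE SalesDB
    ADD FILE (NAME = SalesData_New,
              FILENAME = N'D:\Data\SalesData_New.ndf',
              SIZE = 500GB)
    TO FILEGROUP NewFG;

-- Rebuild a clustered index onto the new filegroup, which moves the table's data
CREATE UNIQUE CLUSTERED INDEX PK_SomeLargeTable
    ON dbo.SomeLargeTable (SomeLargeTableID)
    WITH (DROP_EXISTING = ON)
    ON NewFG;

-- Repeat for each table and index, then remove the emptied file and filegroup
ALTER DATABASE SalesDB REMOVE FILE SalesData_Old;
ALTER DATABASE SalesDB REMOVE FILEGROUP OldFG;
```

Heaps and LOB data need extra care when moving between filegroups, but for ordinary clustered tables the DROP_EXISTING rebuild does the relocation in one step.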
2 thoughts on “SQL: Shrinking SQL Server Databases usually isn't a good plan”
I've also struggled over the years with getting the shrinking of data files to ever complete on massive databases. Recently, I came across a solution that's working for me: iteratively shrinking the database a little bit, to say 99.9% of its current size. This means that you get the regular, progressive benefits as the data files become a little smaller, as well as having locks held for a shorter time.
Here's the script that I use (hard-coded to use fileid = 1, but could easily be extended to iterate through all data files):
DECLARE @LogicalFileName SYSNAME,
        @TargetSizeMB INT,
        @Factor FLOAT = 0.999;
SELECT @LogicalFileName = name FROM sys.database_files WHERE file_id = 1;
SELECT @TargetSizeMB = 1 + size * 8.0 / 1024 FROM sys.database_files WHERE name = @LogicalFileName;
WHILE @TargetSizeMB > 0
BEGIN
    SET @TargetSizeMB = @TargetSizeMB * @Factor;
    DBCC SHRINKFILE(@LogicalFileName, @TargetSizeMB);
    DECLARE @msg VARCHAR(200) = CONCAT('Shrink file completed. Target Size: ', @TargetSizeMB, ' MB. Timestamp: ', CURRENT_TIMESTAMP);
    RAISERROR(@msg, 1, 1) WITH NOWAIT;
    WAITFOR DELAY '00:00:01';
END;
What do you think of such an approach?
It can help, but again it's using DBCC SHRINKFILE, which can go very, very wrong, and it also has the same issues with internal fragmentation of the file afterwards.