LitKnd / littlekendracomments


Post: Shrinking SQL Server Data Files - Best Practices, and Why It Sucks #5

Open · LitKnd opened 3 years ago

LitKnd commented 3 years ago

Imported comments from wordpress

Rob November 8, 2016 12:12 pm All great points Kendra. Thank you for this post.

Priyanka November 8, 2016 10:08 pm Interesting post, Kendra. Thank you so much for your effort.

Nizami November 9, 2016 10:51 pm Thank you for the post, Kendra

Martin Guth November 17, 2016 12:27 am Hi Kendra,

Nice blog post! Could you elaborate a bit more on shrinking log files specifically? Is this less bad than shrinking data files?

Thanks

Martin

Kendra Little November 17, 2016 9:07 am Hi Martin,

Great question! The log is completely different from data files. There is an excellent article on managing transaction log size on SQL Server Central in their “stairway” on Transaction Log Management: http://www.sqlservercentral.com/articles/Stairway+Series/94579/

It goes into a lot of detail to explain how the log works, why it might grow excessively, and how to prevent it from growing. The main issue that I see with folks shrinking log files is that they get into a pattern of regularly re-shrinking and don’t understand why the log grew. (Which ends up being wasted effort – no point in shrinking it if it’s just going to re-grow.)

The rest of the series is terrific as well. Kendra
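As an illustrative sketch of the diagnostic step that article recommends (not a command from the original comment): before shrinking a log, ask SQL Server why it cannot truncate and reuse the existing log space.

```sql
-- Values like LOG_BACKUP or ACTIVE_TRANSACTION in log_reuse_wait_desc
-- explain excessive log growth far better than repeated shrinks do.
SELECT name,
       recovery_model_desc,
       log_reuse_wait_desc
FROM sys.databases;
```

If the wait is LOG_BACKUP on a FULL recovery database, for example, taking regular log backups usually stops the growth without any shrinking at all.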

Marios Philippopoulos November 6, 2017 3:52 am Hi Kendra, thanks for this post.

I am trying to shrink a large (3.6 TB) data file with only 0.5 TB used space, and running this command in chunks of 10 GB (to be able to have the space released as soon as it becomes available):

```sql
DBCC SHRINKFILE (N'filename', 3674854);
GO
DBCC SHRINKFILE (N'filename', 3664854);
GO
...
```

The odd thing is that each iteration completes with no errors, but the file size remains the same. I have checked for deadlocks in the system_health session, but there aren’t any. This is a new dev environment that I have exclusive use of at the moment, so there is no contention from competing processes.

Can you think of any reason why this might be happening?

This is a SQL Server 2016 instance; I upgraded the database from SQL Server 2012 by changing the compatibility level and running DBCC UPDATEUSAGE.

Any advice would be welcome.

Thank you, Marios

Kendra Little November 6, 2017 1:02 pm Could it be that you have a lot of ghost records still? This may be of help: https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-clean-db-file-free-space-transact-sql
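For illustration, a sketch of both steps from that suggestion (the database and table names below are placeholders, not objects from this thread):

```sql
-- Check whether ghost records are still holding space in the file.
-- DETAILED mode is required to populate ghost_record_count.
SELECT index_id,
       ghost_record_count,
       version_ghost_record_count
FROM sys.dm_db_index_physical_stats
     (DB_ID(), OBJECT_ID(N'dbo.MyTable'), NULL, NULL, 'DETAILED');

-- Clean space left behind by deleted variable-length and text data
-- before retrying the shrink.
EXEC sp_clean_db_file_free_space @dbname = N'MyDatabase';
```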

Marios Philippopoulos November 6, 2017 6:27 pm Thanks, I am trying that and will let you know how it goes. A bit of background: the huge amount of free space within the file resulted from the dropping of 2 xml columns from a single table. I wonder if that would have produced ghost records…

Kendra Little November 9, 2017 11:09 am Aha! OK, I think you may need CLEANTABLE in that case: https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-cleantable-transact-sql https://dba.stackexchange.com/questions/117391/claiming-disk-space-after-removing-table-field
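A minimal sketch of that command, using the documented optional batch size argument (the database and table names are placeholders):

```sql
-- Reclaim space left behind by dropped variable-length columns,
-- processing 5000 rows per internal transaction so the operation
-- doesn't hold one giant lock.
DBCC CLEANTABLE ('MyDatabase', 'dbo.MyTable', 5000);
```

Because DBCC CLEANTABLE is fully logged, a modest batch size like this also keeps transaction log growth in check on large tables.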

Marios Philippopoulos November 10, 2017 3:25 am Thanks Kendra!

The deletion of the 2 xml columns was done quite a while ago (over 2 months now) in the prod env. The database was restored recently to the dev env on which I am attempting the shrink. So I would assume that the background process that cleans up the ghost records would have already taken place in prod?

Having said that, I will try this to see if it helps.

BTW, I have opened an MS case on this, and they have suggested that we rebuild all indexes on that file prior to the shrink. I have never seen this suggested before – does it make sense? 🙂

Kendra Little November 10, 2017 9:20 am Hi Marios,

We hit the threading limit on comments below, so I’m answering the question about rebuilding indexes here. If you check out the docs on DBCC CLEANTABLE (https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-cleantable-transact-sql), they mention that an alternative is to rebuild the indexes.

This isn’t related to ghost records (my first guess). This is related to dropping a variable length column (second guess).

So yes, their suggestion does make sense to me. Essentially, DBCC CLEANTABLE gives you the option of running the cleanup in batches, while an index rebuild does it in one large operation.
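For completeness, a sketch of the rebuild alternative mentioned above (again with a placeholder table name):

```sql
-- Rebuilding all indexes on the table also reclaims the space from
-- the dropped columns, but as one large logged operation rather
-- than in batches like DBCC CLEANTABLE.
ALTER INDEX ALL ON dbo.MyTable REBUILD;
```

On a table this size, the rebuild needs enough free space and log capacity to rewrite the indexes, which is worth checking before starting.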
