mylesagray / blog-comments

Comments for Blah, Cloud. Hugo blog
0 stars 0 forks source link

Zero free space using SDelete to shrink Thin Provisioned VMDK | Blah, Cloud #7

Open mylesagray opened 2 years ago

mylesagray commented 2 years ago

Written on 09/05/2013 08:38:21

URL: https://blah.cloud/infrastructure/zero-free-space-using-sdelete-shrink-thin-provisioned-vmdk/

mylesagray commented 2 years ago

Comment written by jmp on 03/22/2014 16:18:13

CCleaner allows you to wipe free space with 0s.

mylesagray commented 2 years ago

Comment written by Myles Gray on 03/25/2014 12:37:41

Never thought of that, handy to use I suppose, thanks!

mylesagray commented 2 years ago

Comment written by Magnus Gillevi on 05/15/2014 09:59:46

Will CCleaner still expand the thin disk to its maximum size? Well, I will soon find out...

mylesagray commented 2 years ago

Comment written by Myles Gray on 05/15/2014 10:13:55

Yes it will - anything that writes blocks intra-VM will expand the thin disk.

mylesagray commented 2 years ago

Comment written by Rob on 05/21/2014 12:57:20

I tested the procedure on a test VM and I'm OK with it, so thank you! On the production server, though, I'd like to verify whether my datastore has the necessary free space to expand the VMDK to full size BEFORE actually running SDelete.

Now, since I deleted a lot of data recently (spring cleanup!), I have these numbers to work with:

Provisioned disk: 1.5 TB
Used space (from the Windows OS): 562 GB

The VMDK's actual size is 1.128 TB, so if I run SDelete I'm going to need about 400 GB of free space on the datastore, right?

mylesagray commented 2 years ago

Comment written by Myles Gray on 05/21/2014 17:19:11

So, your datastore size is what? 1.5 TB?

As long as your datastore size is larger than current usage (everything NOT on the disk to be expanded) plus your thin disk's maximum size, you are fine.
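A sketch of that rule as shell arithmetic - the figures below are rounded approximations of the numbers discussed later in this thread, and the variable names are my own:

```shell
#!/bin/sh
# Quick safety check before inflating: the datastore must be able to hold
# everything else on it PLUS the thin disk's maximum provisioned size.
# Figures are in GB and are illustrative, not exact values from the thread.
datastore_total=2795     # ~2.73 TB datastore
other_usage=900          # everything on the datastore except this thin disk
thin_disk_max=1536       # 1.5 TB provisioned maximum

if [ $((other_usage + thin_disk_max)) -le "$datastore_total" ]; then
  verdict="safe to inflate"
else
  verdict="not enough free space"
fi
echo "$verdict"
```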

mylesagray commented 2 years ago

Comment written by Rob on 05/22/2014 07:06:43

The datastore total is 2.73 TB, but only about 700 GB are free because other VMs occupy the rest. I noticed during a first pass of sdelete that the free space INSIDE the OS was constantly dropping while sdelete was running - can't figure out why! After a Ctrl-Break it instantly went back to the previous value.

mylesagray commented 2 years ago

Comment written by Myles Gray on 05/22/2014 07:36:34

Just so it's clear in my head:

Datastore: 2.73 TB (700 GB free)
Provisioned thin-disk max size: 1.5 TB (already 1.128 TB in size)

So if you expand the thin disk to its max size it will use another 0.372 TB, meaning your datastore free space AFTER expansion will be 0.328 TB.

So you're fine to run sdelete.
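That arithmetic can be checked directly, treating 1 T as 1000 G as the figures above do:

```shell
#!/bin/sh
# Worked figures from the exchange above, with 1 T = 1000 G for simplicity.
free_now=700                       # datastore free space today (G)
disk_max=1500                      # provisioned thin-disk maximum (G)
disk_now=1128                      # current thin-disk size (G)
growth=$((disk_max - disk_now))    # extra space the inflate step consumes
free_after=$((free_now - growth))  # datastore free space at full inflation
echo "growth=${growth}G free_after=${free_after}G"
```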

mylesagray commented 2 years ago

Comment written by Amit on 08/21/2014 16:55:22

Nice, succinct explanation, Myles. Am I correct in assuming the following:

1. In VMware, the Provisioned Space represents how much space I set the size of the TP VMDK disk to be, and Used Space is how much of that VMDK is no longer considered free space by VMware?

2. After running the steps outlined in your post, the free space in the VMDK will be aligned with the actual free space in the OS? e.g. 100GB provisioned, 100 GB used. After your steps, 60 GB used since Windows has 60GB used.

3. This is the critical part for me. After running your steps, I want to "fix" the free space listed in the parent datastore for this VM (or VMs). Does this happen automatically with the steps listed above?

or

To do so, since I have 5.5, I can run the 'esxcli storage vmfs unmap -l MyDatastore' command. Or is that just to wipe unused space at the hardware level for thin provisioned hardware LUNs and has nothing to do with what a datastore in ESXi lists as free space?

relevant link:

http://kb.vmware.com/selfse...

Thank you for your help! I hope VMware comes up with a solution for this that doesn't involve downtime for servers.

mylesagray commented 2 years ago

Comment written by Myles Gray on 08/21/2014 17:40:35

Hey Amit,

1) Correct, Provisioned Space is how much space you set the VMDK to be when creating the VM (so if the disk is 300G, it's 300G) - Used Space is the current size the thin-disk has expanded to out of those 300G.

2) Also correct. If you have a 100G thin-provisioned disk that has grown to, say, 80G through updates etc., but the Windows OS is only showing 60G used, then after you run the steps above it will first grow to 100G; you then punch zeroes and it will shrink to 60G.

3) Yes, this is an automatic operation - the DS shows free space based on the sum, across all VMs on that DS, of the difference between provisioned space and used space.
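The 100G example in point 2 can be written out as arithmetic (all figures in GB):

```shell
#!/bin/sh
# The 100G thin-disk example from point 2, in GB.
disk_max=100        # provisioned thin-disk size
disk_now=80         # current size on the datastore
guest_used=60       # space the guest OS actually uses
inflate_growth=$((disk_max - disk_now))  # zero-fill first grows the disk by 20
final_size=$guest_used                   # punching zeroes shrinks it to 60
reclaimed=$((disk_now - final_size))     # net datastore space reclaimed: 20
echo "inflate_growth=${inflate_growth}G final_size=${final_size}G reclaimed=${reclaimed}G"
```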

On your 5.5 point: this is for thin-provisioned LUNs (like EMC's VNX thin-LUN capability - page 17: https://www.emc.com/collate....

UNMAP is a SCSI-3 command that is part of VAAI - so your storage array MUST support VAAI for this to work - and it operates at a lower level than the method for thin-provisioned VMDKs.

I doubt there can be a solution for this without downtime, as when you're punching zeroes you're essentially re-writing the entire VMDK.

Thin provisioning is bad practice IMO and only acts as an "I.O.U." method for allocating storage, as you're essentially promising you'll have enough storage for all these VMDKs at some later date.

The reason I originally wrote this article was that we were doing this to shrink a disk and move it onto smaller thick disks to stop storage overcommit.

mylesagray commented 2 years ago

Comment written by Amit on 08/21/2014 18:43:08

Thanks, Myles. Unfortunately, at the time of setting up our environment, I went all Thin P disks across the board and gave plenty of extra space because I thought it wasn't a big deal. Going forward I will not allow so much extra space.

However, I somewhat disagree with your statement that thin provisioning is a bad practice. In most cases, the wasted space is not all that significant over time. If it were a serious issue, I would have run out of space on my arrays ages ago.

I think thin is useful for servers where there are not a lot of deletions going on. Our Exchange servers are 3 TB in size but are wasting an additional 3/4 TB because of log file creation/deletion. As I said, instead of going thick, we are just going to be less generous with extra drive space. At the end of the day, if done correctly, it should save us significant space on our arrays without us having to diskpart C:/ drives all the time in a thick provisioned and tighter drive space scenario. YMMV, either way.

Anyway, thanks for the article; it was the easiest one to decipher and got straight to the point. A couple of things (correct me if I am wrong): the du -h [DISKNAME].vmdk should be du -h [DISKNAME]-flat.vmdk if you are within the VM's directory. If you are at the datastore's directory, then du -h [VMNAME] gives you the size of all VMDKs within.

Also, you can just use vmkfstools -K [DISKNAME].vmdk to do the punch (don't use 'flat' here).

mylesagray commented 2 years ago

Comment written by Myles Gray on 08/24/2014 10:59:12

I had an inherited system with thin disks on its Exchange box - there was a 2TB VMDK size limit at that stage (5.5 raises it to 62TB) - and it filled up, taking down not only Exchange but their production SQL box too. So I try to avoid them when I can - but, like you say, YMMV - it suits some situations (like yours).

You're correct on the du -h - the -flat.vmdk only shows up in the VM dir, but not in the DS dir - so yes use those in those instances :)

-K does the same thing, yes - I just used the verbose flag to make it easier for readers to understand :)

Let me know how you get on!

mylesagray commented 2 years ago

Comment written by goslackware on 04/22/2015 13:52:19

I use "thin" everywhere. The only place I'm actually "over-provisioning" is at the SAN level. My datastores, LUNs, and volumes are all under-subscribed, and most have 50% free space. Each of my SANs has at least 50% total free space, so I'm not too worried about "over-provisioning" the SAN space for a long while. The only "outage" that could happen is if one of my SANs becomes full - then everything on it would crash - which is why I check SAN free space periodically.

mylesagray commented 2 years ago

Comment written by Myles Gray on 04/22/2015 14:23:55

The point I'm trying to get at is that it is much easier to monitor and hand off a thick-disk-based environment.

IMO, if you can limit the outage possibility up front by excluding over-provisioning at any level of your environment, then that's the best way to run it.

However, each to their own. I just don't see the point in unnecessary risk: all you are doing by thin provisioning is taking a "loan" of space you don't yet have; at some point you need to pay that back, so it's an economic illusion.

mylesagray commented 2 years ago

Comment written by yanir on 06/03/2015 14:32:10

For Linux it should be:
dd if=/dev/zero of=/[PATH]/zeroes bs=4096 || rm -f /[PATH]/zeroes

because dd always fails due to lack of disk space :)
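A sketch of why the `||` matters: dd intentionally runs until the filesystem is full, so it always exits non-zero, and `&&` would therefore skip the cleanup. The wrapper below is my own illustration - the mount point is caller-supplied, not part of the original comment:

```shell
#!/bin/sh
# Zero out the free space under a given mount point, then delete the
# zero file. dd exits non-zero once the filesystem fills, so the rm is
# chained with '||' - with '&&' the zero file would never be removed.
zero_free_space() {
  zf="$1/zeroes"
  dd if=/dev/zero of="$zf" bs=4096 2>/dev/null || rm -f "$zf"
}
```

After running, say, `zero_free_space /home` inside the guest, the freed blocks are all zero and `vmkfstools -K` on the VMDK can reclaim them.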

mylesagray commented 2 years ago

Comment written by Rocket on 09/27/2017 13:56:45

Hi,

hope somebody can help.
Is there any way to check whether a virtual disk (VMware) was treated with "sdelete.exe -z"?

I cannot remember on which VMs I executed "sdelete.exe -z".

I would like to shrink the virtual disks because I need to save as much space as possible on the storage where the VMs are running.

Can somebody help?

mylesagray commented 2 years ago

Comment written by swadhyayaa on 10/09/2017 16:15:48

Good Article!
I have the following:
3PAR block storage = 9200 GB (6618 GB used and 2582 GB free)
VMware VMDK = 9410 GB (8980 GB used and 1160 GB free)
Would I have enough to inflate and then punchzero?
What is the formula here?

mylesagray commented 2 years ago

Comment written by Joseph on 12/05/2017 08:17:31

Man, you rock. I was facing the space issue on my datastore and asked the VMware support team, and they said "kindly validate with the Linux team on the command similar to SDELETE as we are unsure of the command for this operation on a Linux guest"!

I followed the method you mentioned and reclaimed the free space on the datastore (I was using CentOS 6).

Once again, thank you. I hope this will help others too.

Below is what I ran on the Linux server:

[root@Myserver ~]# dd if=/dev/zero of=/home/zeroes bs=4096 && rm -f /home/zeroes

After that I went to the datastore and ran hole punching as below:

vmkfstools -K myserver.vmdk
vmfsDisk: 1, rdmDisk: 0, blockSize: 1048576
Hole Punching: 100% done.

mylesagray commented 2 years ago

Comment written by Joseph on 12/05/2017 08:21:59

Small update: after that I went to the datastore and ran hole punching as below:

# vmkfstools -K CentosFBTWT-Two.vmdk

The result of the above command is below:

vmfsDisk: 1, rdmDisk: 0, blockSize: 1048576
Hole Punching: 100% done.