jpmat296 opened 7 years ago
choco install sdelete -version 1.61.0.20160210
I copied it to the public folder of my Dropbox account and changed the link in compact.bat.
The previous version of sdelete can be downloaded from the following Web Archive URL:
http://web.archive.org/web/20140902022253/http://download.sysinternals.com/files/SDelete.zip
Thanks for the link.
The MD5 hash is e189b5ce11618bb7880e9b09d53a588f, which verifies it as the genuine version.
For others who land here because SDelete 2.00 is slow and are wondering where a checksum for the 1.61 version was recorded, so they can manually verify their download (e.g. from archive.org): see https://github.com/dtgm/chocolatey-packages/blob/85da0675db2d4f14167d29a003b3529e572cd3c5/automatic/_output/sdelete/1.61/tools/chocolateyInstall.ps1 . The SHA1 checksum recorded there for SDelete.zip (version 1.61) is a7c5b5b25cfcc6d9609d7aec66e061a0938d4f9a . When I MD5-summed a downloaded zip that matched that SHA1 sum, I got a hash of 239cc777df708437f5a29959d4b17d53 .
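If it helps, here is one way to check those sums locally with the built-in Get-FileHash cmdlet. The download path below is just a placeholder; point it at wherever you saved the zip:

```powershell
# Compare the downloaded archive against the checksums quoted above.
# $Zip is a placeholder path - adjust it to your actual download location.
$Zip = "$env:USERPROFILE\Downloads\SDelete.zip"

$Sha1 = (Get-FileHash -Path $Zip -Algorithm SHA1).Hash
$Md5  = (Get-FileHash -Path $Zip -Algorithm MD5).Hash

# Expected values for version 1.61, taken from this thread.
if ($Sha1 -eq "A7C5B5B25CFCC6D9609D7AEC66E061A0938D4F9A") {
    Write-Host "SHA1 OK"
} else {
    Write-Host "SHA1 MISMATCH: $Sha1"
}
if ($Md5 -eq "239CC777DF708437F5A29959D4B17D53") {
    Write-Host "MD5 OK"
} else {
    Write-Host "MD5 MISMATCH: $Md5"
}
```

Get-FileHash returns the hex digest in uppercase, so compare against uppercase expected values as above.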
I also discovered https://connect.nimblestorage.com/thread/1513 which suggests that the following script is a much faster (and zero-dependency) alternative:
##########################################################################
# Written By: David Tan
#
# V1.0 29/01/2014 Davidt Fast Space reclaimer.
#
# Note: Concept and code parts taken from http://blog.whatsupduck.net/2012/03/powershell-alternative-to-sdelete.html
#
# Uses a PowerShell method to generate a large (1GB) file of zeros. Re-copies this file until <1GB free.
##########################################################################
param (
    [string] $FilePath,
    [string] $LogFile,
    [int] $CycleWait
)
Function DispMessage ([string] $Message) {
    [string] $DateStamp = Get-Date -Format "yyyy-MM-dd HH:mm.ss"
    Write-Host "[$DateStamp] $Message"
    Add-Content $LogFile "[$DateStamp] $Message"
}
Function SleepWait ([int] $Sleeptime) {
    # Log before sleeping, not after, so the message reflects what is happening
    DispMessage " --> Sleeping $Sleeptime sec"
    Start-Sleep -Seconds $Sleeptime
}
If ($LogFile -eq "") { $LogFile = "C:\temp\NimbleFastReclaim.log" }
$FilePrefix = "NimbleFastReclaim"
$FileExt = ".tmp"
If ($FilePath -eq "") {
    Write-Host "- Filepath <driveletter or mountpoint>"
    Write-Host "- LogFile (DEFAULT=$LogFile)"
    Write-Host "- CycleWait(s) (DEFAULT=0)"
    Exit 1
}
If ($FilePath.Substring($FilePath.Length - 1, 1) -ne "\") {
    $FilePath = $FilePath + "\"
}
$ArraySize = 1048576kb
DispMessage "--> Starting Reclaim on $Filepath ... "
DispMessage "--> Cycle Sleep = $CycleWait sec"
DispMessage "--> File Size = $($ArraySize/1024/1024) MB"
$SourceFile = "$($FilePath)$($FilePrefix)0$($FileExt)"
Try {
    DispMessage " --> Writing $SourceFile"
    $ZeroArray = New-Object byte[]($ArraySize)
    $Stream = [io.File]::OpenWrite($SourceFile)
    $Stream.Write($ZeroArray, 0, $ZeroArray.Length)
    $Stream.Close()
    $copyidx = 1
    while ((gwmi win32_volume | where {$_.name -eq "$FilePath"}).Freespace -gt 1024*1024*1024) {
        $TargetFile = "$($FilePath)$($FilePrefix)$($copyidx)$($FileExt)"
        DispMessage " --> Writing $TargetFile"
        cmd /c copy $SourceFile $TargetFile
        $copyidx = $copyidx + 1
        If ($CycleWait -gt 0) {
            SleepWait $CycleWait
        }
    }
    DispMessage "--> Reclaim Complete. Cleaning up..."
    Remove-Item "$($FilePath)$($FilePrefix)*$($FileExt)"
    # Divide by 1024^3 so the reported figure really is GB, not MB
    DispMessage "--> DONE! Zeroed out $($copyidx*$ArraySize/1024/1024/1024) GB"
}
Catch {
    DispMessage "##> Reclaim Failed. Cleaning up..."
    Remove-Item "$($FilePath)$($FilePrefix)*$($FileExt)"
    Exit 1
}
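For reference, if the script above is saved as (say) NimbleFastReclaim.ps1, invoking it looks roughly like this; the file name, drive letter, and mount point are just examples:

```powershell
# Zero free space on D:, pausing 5 seconds between each 1 GB copy
# to limit I/O pressure on the array.
.\NimbleFastReclaim.ps1 -FilePath "D:\" -CycleWait 5

# Or against a mount point, with a custom log location:
.\NimbleFastReclaim.ps1 -FilePath "C:\Mounts\Data\" -LogFile "C:\temp\reclaim.log"
```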
The author suggests that using robocopy (which is multi-threaded) instead of copy may push throughput beyond the roughly 1 TB/hr he saw, but what he got was fast enough for his use.
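Untested, but a drop-in for the cmd /c copy line in the script might look like the sketch below. One wrinkle: robocopy copies into a directory and keeps the source file name, so the copy has to go through a scratch folder and then be renamed into place (the scratch folder name here is made up):

```powershell
# Hypothetical replacement for: cmd /c copy $SourceFile $TargetFile
# /J requests unbuffered I/O, which tends to help with very large files;
# /NJH /NJS suppress robocopy's job header and summary noise.
$ScratchDir = Join-Path $FilePath "reclaim-scratch"
New-Item -ItemType Directory -Force -Path $ScratchDir | Out-Null

robocopy $FilePath $ScratchDir "$($FilePrefix)0$($FileExt)" /J /NJH /NJS | Out-Null
Move-Item -Force (Join-Path $ScratchDir "$($FilePrefix)0$($FileExt)") $TargetFile
```

Note that robocopy's /MT multi-threading parallelizes across multiple files, so for a single 1 GB source file the unbuffered /J switch is probably the more relevant option.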
WARNING: This will completely expand a virtual machine disk as it fills the volume with file(s).
I would prefer switching to
Install-Module WindowsBox.Compact -Force
Optimize-DiskUsage
as used in
https://github.com/windowsbox/packerwindows/blob/master/provision.ps1
Note: If your disk is backed by supported "thin" storage (e.g. a Hyper-V dynamically sized VHDX) then Optimize-Volume can do a Re-Trim, which will be far more efficient at quickly freeing large amounts of empty space than trying to scribble zeros everywhere that is unused.
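For the thin-backed case, the retrim is a one-liner (the drive letter is an example):

```powershell
# Send TRIM/UNMAP for all free space on D: so thin-backed storage
# can release it without zeros ever being written.
Optimize-Volume -DriveLetter D -ReTrim -Verbose
```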
Also note that any attempt to use only a single file to zero out the unused space will potentially leave areas uncleaned - see How SDelete Works for the steps it takes to address that issue. In a perfect world you would have something that would make a pre-sysprep boot filesystem check do all this work while the filesystem was unmounted...
Module WindowsBox.Compact has the same issue as @petemounce's script: it will completely expand a virtual machine disk as it fills the volume with file(s).
Well, does anybody know if Optimize-Volume helps for all hypervisors? We have Parallels, VirtualBox, and VMware in this repo, and probably Hyper-V in the near future.
@StefanScherer sadly I doubt that you can make Optimize-Volume trim everywhere - even on Hyper-V you only get it on dynamic disks and not fixed-size ones. You need the hypervisor to "want" to provide a thin disk AND to implement and advertise the commands that allow Windows to say, in a deterministic fashion, which bits of the disk are unused. Even then the emptied bits may not disappear from the backing file until it is compacted...
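As a quick sanity check from inside the guest, Windows can at least report whether TRIM notifications are enabled at the filesystem level:

```powershell
# Query whether delete/TRIM notifications are enabled.
# "DisableDeleteNotify = 0" means TRIM notifications are on; 1 means off.
fsutil behavior query DisableDeleteNotify
```

That only shows Windows' side of the handshake, though - the hypervisor still has to advertise thin storage and act on the unmap commands.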
This is a case where the ideal tool doesn't appear to exist yet. For example, zerofree for Linux's ext filesystems works on unmounted file systems and skips over any free area that is already zero. By combining it with a prior trim you can get a better solution for that Linux file system (free space is quickly zeroed if you can trim, and even if you can't trim you don't grow the disk while you scrub the empty areas with zerofree) - see https://unix.stackexchange.com/a/251804 . However, that's all an aside; the real question is: does anyone know if it's possible to do better than the PowerShell script in general on Windows - perhaps not?
My five cents on Nimble's zeroing PS script (published above):
Despite the above, it is a great example of PowerShell use for every admin. Anyone can make changes of his/her choice within 5 minutes, making this tool awesome.
PS: none of the changes to the code (requested above) prevent the volume from growing to its limits while the script runs. However, if you use a thin-provisioned disk array (e.g. Nimble, 3PAR), it should make no difference to the results in the big picture. It is wise anyway to contact your HW vendor, as technologies can vary. A side effect of using the script is getting better results from hardware thin provisioning and compression.
Thanks a lot, Nimble team.
For me, the execution of sdelete never finished, even after 48 hours. That is because the new version 2.0 has a performance issue, as explained here: http://forum.sysinternals.com/sdelete-hangs-at-100_topic32267.html
I didn't find a URL for the old version 1.61 to work around this.