Closed Legogizmo closed 5 years ago
@Legogizmo Thanks, I'll check out your SyncChanges branch, review the changes and run tests on some bad discs, as long as I can find some discs that haven't already been scrapped! I'll also test against a few good drives. Nice work, nwipe needed some work done on the code when dealing with faulty hardware and reporting errors in an appropriate and timely fashion.
If you have trouble finding bad drives, I found that simply removing a drive mid-wipe will do the trick.
Thanks @Legogizmo. Just a few thoughts: I've not had a chance to try it with any drives yet, but having had a look at the code there is one thing I'm concerned about in how it deals with a single or small number of block errors, i.e. with fsync enabled it would abort the wipe on a single block failure rather than log the error and continue wiping.
Looking at the fsync error recovery code, an fsync error is detected and the function in pass.c is exited. That's fine for a drive that goes offline or dies mid-wipe; however, if a drive has a single block error or a limited number of block errors, the same thing happens, i.e. the wipe function is aborted if you are in fsync mode. It would be nice if it initially logged the failure and attempted to continue for x number of writes.
Currently nwipe's existing code would seem to plough on regardless (I need to verify that statement is actually true), i.e. it would wipe as much of the disc as possible and it wouldn't matter that there were a few blocks that couldn't be wiped. Personally I would expect nwipe to wipe all readable/writable blocks that it could and log the failed blocks.
For a disc that's had a massive head crash or electronics failure and most if not all blocks are unwritable then the option should be to either attempt to write all blocks or abort after x number of blocks have failed.
Having the fsync option default to 0 retains nwipe's existing behavior; however, it would be nice if fsync could be enabled but not abort on a small number of block failures.
There's one assumption I'm making here, and that is that nwipe already logs a bad block and continues wiping; but as you've already pointed out, the write command might not return an error until you fsync.
I can certainly add an error counter or something so that the wipe continues until x number of errors occur.
My implementation allows you to set how often an fsync is done. If you set it to 1, it will sync after every write and you could easily determine how many blocks are bad and which ones. But if you set it to 1000000 like I did, then a failed fdatasync could mean there is only one bad block or several. Also, I am not 100% certain how fdatasync works: if it comes across a bad block, does it ensure the following blocks are synced or does it stop?
(Also, part of my change outputs the following warning: nwipe: warning: Wrote 90112000000 bytes on '/dev/sdg'. This is slightly misleading, since that is how many bytes were set to write before the fdatasync; a more accurate statement would be 86016000000 bytes written + 4096000000 bytes that we tried to sync but failed.)
Currently nwipe does not track any bad blocks; it only tracks whether there was a partial write, and it quits if a write fails completely.
I had a similar experience. A little background info: I use a SATA to USB adapter (Ugreen) to connect my HDD to a laptop; it shows up as a SCSI disk. First I tried normal DBAN from "Ultimate Boot CD" and connected the HDD to an older laptop (the newer laptop was in use), but the speed was so slow from the older USB ports that a wipe would have taken at least 2 weeks to complete. Then I tried the newer laptop with USB3 ports, but DBAN didn't work; it crashed every time ("Can't open '/proc/cmdline'") and none of the workarounds suggested by other users worked. Then I tried the NHellFire fork of DBAN (nightly) and it seemed to work, but after 3 hours it crashed with "Error/dev/sda (Process Crash)".
Then I found nwipe and ran it through SystemRescueCD, and everything seemed to be fine after starting the wipe and about an hour later. But then, after a good night's sleep, I found nwipe wiping the drive at "1 B/s", and it didn't go anywhere. I stopped nwipe with Ctrl+C and this was presented in the terminal:
It seems that nwipe incorrectly says that the whole drive was wiped. And it didn't notice that the drive is no longer accessible; it really should have some kind of check(s).
I'm not sure why the HDD seems to "sleep" after some time; maybe the adapter does something... I'm not an expert on this issue. Is it possible that nwipe could do something that would prevent the adapter/HDD from "sleeping" (or whatever it does)?
Every time after the HDD went to "sleep", NHellFire or nwipe (both running on some Linux system) didn't detect the HDD anymore. Unplugging and re-inserting the USB cord didn't work; I had to take power off the adapter/HDD completely AND reboot the Linux system (and weirdly, even then the HDD wasn't detected every time).
The HDD seems to be just fine. I put it back in my Synology NAS and created a new volume, and Synology automatically ran a full disk check last night; then I ran a quick SMART test and the Seagate IronWolf Health check, nothing wrong. SMART details show no problems. Now I'm in the process of copying the disk full of data in case that reveals some problem.
P.S. Is it possible to run nwipe from Synology? From the terminal (SSH)? It would be awesome if nwipe could be installed from the Synology Package Manager and run from the Synology DiskStation Manager (GUI)!! Synology offers no features or downloadable packages to securely wipe HDDs. I guess the only "wipe" they offer is "Secure Erase", but I guess that only works for SSDs. At least the option is greyed out for this HDD (and for my other HDD in the NAS).
Is it possible that nwipe could do something that would prevent the adapter/HDD from "sleeping" (or whatever it does)?
Could the "sync mode" (same as "Fdatasync"?) help with my problem? For example, if I could set nwipe to do a "sync" every 1 hour?
@Perkolator It sounds like the problem is with the drive or the connector; drives shouldn't be going to "sleep" regardless of how long it has been idle. That said, my change is designed to stop trying to wipe a drive when it disconnects, rather than hanging at 1 B/s forever. You can set a sync to happen after x number of writes, rather than after x amount of time. I found that --sync=1000000
ends up doing a sync every couple of seconds (obviously depends on write speeds). I don't think it will stop the drive from failing, but it will let you know the drive disconnected a lot sooner.
obviously depends on write speeds
That's why I thought that a time setting would be more useful.
Still writing data to the disk and nothing seems to be wrong with it, so it might be the adapter, or something in the linux system that boots up before running nwipe (or other wiping software).
I'd like to try the sync option, but as I'm a Linux noob, I don't know how to do that unless I get the updated version of nwipe from some OS (Linux Mint XFCE 19.2 on my old laptop, old version of nwipe (0.24 IIRC)), or some other tool (SystemRescueCD, old version (0.25-1)). :(
Linux Mint XFCE 19.2 on my old laptop, old version of nwipe (0.24 IIRC)
Forgot to make a P.S. of this; why isn't the latest version available in the Mint/Ubuntu(?) PPA? Whose "responsibility" is it to update it?
@Perkolator It sounds like the problem is with the drive or the connector; drives shouldn't be going to "sleep" regardless of how long it has been idle.
Yep, I'd agree with that. I've lost count of the number of drives that start to intermittently fail, where by drastically lowering their temperature you can retrieve the contents before they finally die. Also, the harder you work the drive, as in the case of wiping it, the warmer it gets, and again that can result in intermittent failure or even total failure of a drive that's already on its way out.
@Legogizmo I've not forgotten your pull request, I've been away from the 'office' so no chance to test it yet. It's 2nd on my list. 1st on my list is to update ShredOS to the latest version of nwipe (0.26) with a P.R.
@Perkolator Better error handling for the actual write process in nwipe is on the radar and @Legogizmo P.R. will go some way to improving things.
Yep, I'd agree with that. I've lost count of the number of drives that start to intermittently fail, where by drastically lowering their temperature you can retrieve the contents before they finally die. Also, the harder you work the drive, as in the case of wiping it, the warmer it gets, and again that can result in intermittent failure or even total failure of a drive that's already on its way out.
I don't dispute what you say, but I think that the fault in this case is the adapter (possibly in combination with the Linux system). For example, I tried to access the HDD through the adapter in Linux Mint 19.2 and was not able to initially; the HDD/adapter only showed up with lsusb
but I couldn't see it e.g. in gparted. I had to google and blacklist the adapter ID from using UAS, and then I got proper access to the HDD in Mint. I have read that these adapters might have some issues, though this is my first such experience. Maybe the longer and more demanding wiping is to blame, I don't know, but I have used the adapter successfully for smaller jobs on a few drives (HDD & SSD). The HDD is back in my NAS and I've written data to it non-stop for hours. No sign of problems yet.
Better error handling for the actual write process in nwipe is on the radar and @Legogizmo P.R. will go some way to improving things.
Good to hear that. Although, if I read the PR changes right, the sync defaults to "0", aka "fdatasync after the disk is completely written", which means people need to be aware of the option and know how to start nwipe with a certain argument from the terminal. Not ideal, IMO.
Forgot to make a P.S. of this; why isn't the latest version available in the Mint/Ubuntu(?) PPA? Whose "responsibility" is it to update it?
Sorry this is OT, but it would be awesome if somebody could educate me about this. Thanks and sorry.
@Perkolator
Linux Mint XFCE 19.2 on my old laptop, old version of nwipe (0.24 IIRC) Forgot to make a P.S. of this; why isn't the latest version available in the Mint/Ubuntu(?) PPA? Whose "responsibility" is it to update it?
This is quite a common question from those new to Linux; however, once you understand the entire process you can at least understand why it can take so long for a release to get out to a distro.
Most software on Github has multiple released versions; nwipe's latest release is 0.26, but we also have a branch called master, which you could think of as a beta or alpha version, i.e. not for release. It's where programmers submit their pull requests. At some point master is released by the maintainer and would become 0.27 in nwipe's case. When he makes that release is entirely up to him; he may do it when there are no major bugs and he feels it's stable, or he may not release it because he doesn't have time. Very few people on here are getting paid to do this, so you have to be patient and wait for a release. If you are not patient then there is nothing stopping you learning how to compile from source (the instructions are in the readme). So from master to release could be anything from a week to a year.
However, that's just the release from Github. In nwipe's case Martijn is a Debian maintainer and can upload to Debian sid, Debian's unstable release; from there, at some point, unstable will become testing and eventually part of a Debian release. At some point Ubuntu will pick up that release and incorporate it into their own release cycle, then Mint will at some point incorporate the Ubuntu release into their release cycle. It's probably not surprising that a year (and often a lot longer) can go by before the 'latest' release ends up in a distro.
However, the distro you choose has a lot to do with how quickly it follows the upstream releases. For instance, my day-to-day distro is KDE Neon. Why do I choose that distro? The main reason is that it updates to the latest KDE software within weeks of release. However, it's based on Ubuntu 18.04 LTS, which contains versions of software that may be a couple of years behind. KDE software being up to date is more important to me than the other software, and if I really need the latest and greatest I'll compile from source.
Having said that, I'm just about to update ShredOS with 0.26. Currently you have to compile ShredOS from source; however, I am thinking about releasing the image file shredos.img. With a single dd command you can burn this to a USB stick and have the latest 'released' nwipe booting in less than 6 seconds. The latest nwipe could then be used by people with no interest in compiling from source or even running Linux.
@Perkolator I don't think you have mentioned what make/model of hard disk you have, is it a 3.5" or 2.5"? If the drive is plugged into the USB port, does it have a separate power supply or is it getting its power from the USB port?
Depending on the drive and your USB ports, they may or may not be up to providing sufficient power to the drive, which could result in all sorts of issues. Does your laptop have an eSATA port? If it does I would definitely connect to that, not the USB, as the wipe would be much faster, but you would need to provide a separate source of power for your drive.
As regards the power capability of USB ports:
The USB 1.x and 2.0 specifications provide a 5 V supply on a single wire to power connected USB devices. A unit load is defined as 100 mA in USB 2.0, and 150 mA in USB 3.0. A device may draw a maximum of 5 unit loads (500 mA) from a port in USB 2.0; 6 (900 mA) in USB 3.0.
The operating current pulled by a Seagate 2TB or 3TB drive is typically 0.510 amps, which exceeds what a USB2 port can supply.
May not be the issue but certainly something to be aware of. You need to take a look at the specs of the drive.
However, that's just the release from Github. In nwipe's case Martijn is a Debian maintainer and can upload to Debian sid, Debian's unstable release; from there, at some point, unstable will become testing and eventually part of a Debian release. At some point Ubuntu will pick up that release and incorporate it into their own release cycle, then Mint will at some point incorporate the Ubuntu release into their release cycle. It's probably not surprising that a year (and often a lot longer) can go by before the 'latest' release ends up in a distro.
I think I have misunderstood what PPA means, I mean the Mint/Ubuntu repo. But still, I'm confused, are you trying to say that updates to software that are in the OS repo are only done when a new release of the OS is built? That doesn't sound right to me at all; in fact it sounds like a very idiotic system, especially when thinking about possible critical updates/fixes to software. What about the constant updates to installed software that are served in e.g. Linux Mint (I'm not talking about point release updates, e.g. 19.1, 19.2, etc.)? Just trying to move to Linux, and I thought that software updates would be much easier to handle than in Windows, where you need to keep track of updates for every piece of software and update almost everything manually. And now I'm learning about software left outdated, adding PPAs for different developers/software, and the Flatpak/AppImage/Snap war that seems to be going on (Snaps don't sound that great, e.g. do people really want automatic updates!?). I might end up using Windows after all... or I'll throw every electronic device in the dumpster and start to live a care-free life. :)
Having said that, I'm just about to update ShredOS with 0.26. Currently you have to compile ShredOS from source; however, I am thinking about releasing the image file shredos.img. With a single dd command you can burn this to a USB stick and have the latest 'released' nwipe booting in less than 6 seconds. The latest nwipe could then be used by people with no interest in compiling from source or even running Linux.
That sounds great. And if you ever release it as an image file, would it be found in your repo or in the original repo?
I don't think you have mentioned what make/model of hard disk you have, is it a 3.5" or 2.5"? If the drive is plugged into the USB port, does it have a separate power supply or is it getting its power from the USB port?
3.5 inch Seagate IronWolf NAS 6TB. The adapter came with a power adapter for 3.5" disks (2.5" disks don't need it though).
Does your laptop have an eSATA port? If it does I would definitely connect to that, not the USB, as the wipe would be much faster, but you would need to provide a separate source of power for your drive.
Nope, no eSATA. I'm not aware of how much faster that would be; I got over 240-250 MB/s wiping speed with this adapter (obviously it's going to slow down the closer it gets to the inner tracks of the platters). I was amazed at the speed, so much in fact that I had to google whether it's even possible to get such speeds from this disk... but it really seems that way. BTW, now when I'm copying data to the drive (in the NAS, copying from another HDD in the NAS, not over the network) I'm getting only a little under 80 MB/s, which is a bit odd; I get more if I copy data from my laptop over the gigabit network, ~110 MB/s.
That sounds great. And if you ever release it as an image file, would it be found in your repo or in the original repo?
To be decided, but I may set up a website just so it can be downloaded from there. The idea being to make it as uncomplicated as possible.
I think I have misunderstood what PPA means, I mean the Mint/Ubuntu repo
Ah, PPAs, yes we could provide an untrusted PPA so that earlier versions of Ubuntu could benefit from the latest version. And there is absolutely nothing stopping anybody setting up a PPA. Just not me, as I've got enough to do fixing bugs.
I'm confused, are you trying to say that updates to software that are in the OS repo are only done when a new release of the OS is built?
No, software does get updated in older releases but it's usually security updates not feature updates and again it depends on the distro. In KDE Neon I get the latest KDE applications every few weeks.
3.5 inch Seagate IronWolf NAS 6TB. The adapter came with power adapter for 3.5" disks (2.5" disks don't need it though).
So it has its own power adapter which you're using, which is good, as its 12V startup current is 2 amps!
I got over 240-250 MB/s wiping speed
You must have it plugged into a USB3 port, as you would never get that on USB2. Nice to know somebody is using USB to wipe drives. I might have to push the USB serial number feature up the list. Normally, with a drive plugged into a SATA, eSATA or IDE port, nwipe displays the serial number of the drive along with the model number, but plugged into a USB port the serial number is missing. This is because I (or somebody) needs to write the code to access that info from the drive; the method is different depending on whether it's SATA etc. or USB.
No, software does get updated in older releases but it's usually security updates not feature updates and again it depends on the distro.
Still trying to understand what I was initially after: who makes that push to the repo? Is it the repo maintainer's duty to check all possible fixes/security updates to the software in the repo, or do the software developers themselves have to "request" that this new fixed version be reviewed(?) and added to the repo as soon as possible?
Looking at the changelog, I gather that there might not be security fixes in the 0.25 release, but there are some really needed fixes. Personally I think that software which wipes people's personal/business data should be considered a bit higher priority than other software when it comes to keeping it updated in the repo and on people's computers.
v0.25
- Correct J=Up K=Down in footer(Thanks PartialVolume)
- Fix segfault initialize nwipe_gui_thread (Thanks PartialVolume)
- Fix memory leaks (Thanks PartialVolume)
- Check right pointer (Thanks PartialVolume)
- Fix casting problem (Thanks PartialVolume)
- Fix serial number
- Fixes uninitialized variable warning (Thanks PartialVolume)
BTW, I think it would be great if changelogs were added to the releases page and not only to the README file.
You must have it plugged into a USB3 port, as you would never get that on USB2.
The newer laptop has great USB ports; the older laptop has maybe even USB 1.1 or something like that, and with it the wiping speed was ~26 MB/s... so obviously I had to try the newer laptop.
Nice to know somebody is using USB to wipe drives.
I'd like to know that somebody. ;) Because I'm not successful. I tried all I can, but something is wrong. This household only has laptops and one NAS device, so since I can't do any wiping with/in Synology AFAIK, I don't have much choice but to try an adapter, which seems not to work at all.
Normally, with a drive plugged into a SATA, eSATA or IDE port, nwipe displays the serial number of the drive along with the model number, but plugged into a USB port the serial number is missing.
I don't know the technical side that well, but it might be that the HDD is connected/detected as SCSI with this adapter (and I guess with all UAS/UASP adapters), if that matters at all for what you're trying to do. This is what lsusb said in Mint 19.2:
Bus 002 Device 003: ID 174c:55aa ASMedia Technology Inc. ASM1051E SATA 6Gb/s bridge, ASM1053E SATA 6Gb/s bridge, ASM1153 SATA 3Gb/s bridge
I have noticed that if a drive dies during a wipe, nwipe won't notice until after it finishes "writing" the disk full. This results in one of two failure behaviors. The first is the super slow wipe: the wipe throughput drops drastically, even going as low as "1 B/s" (though it is actually "0 B/s"). Check the images below; sdl displays this behavior. The second is the "oblivious" wipe: the wipe continues to "write" to the disk and shows a reasonable throughput even though the drive has been disconnected from the system; nwipe won't notice anything is wrong until the end.
This is due to how the pass functions are structured: they enter a while loop where they continuously write to the disk and won't stop unless there is an error. The problem is that
r = write( c->device_fd, b, blocksize );
may not give an error even if the disk is dead/missing/bad. From the write() man page: "A successful return from write() does not make any guarantee that data has been committed to disk. The only way to be sure is to call fsync(2) after you are done writing all your data." The
r = fdatasync( c->device_fd );
performed at the end of the pass process is the only point where we verify that the writes have actually made it to the disk, and even then nwipe simply logs the error and continues on. The solution I have worked on is to simply perform periodic fdatasyncs in the while loop, and return -1 if one fails. For my testing I ran nwipe with
--verify=last -m zero -r 1
and did an fdatasync after 1000000 writes (with the average 4096-byte write, that comes to syncing every 4 GB). While testing this solution I came across some strange performance behavior. When running with the periodic fdatasyncs I noticed that some drives saw a massive increase in write times; in some cases it took over twice as long. This behavior was limited to individual drives: I had 2 drives that were the same model with the same firmware, and only one showed the massive increase in wipe times. However, when I removed the blocksize change (mentioned in issue #96), wipe times returned to their previous overall rates (but some write times decreased while some read times increased); this is believed to be the result of having the write and read sizes the same as the block size. (Previously the write and read size would be the old block size and the new block size would be the sector size.)