PicoMitchell opened 7 months ago
Yes, that behaviour is by design as it effectively counts non-zero blocks.
Yes, an option can be added to change that behaviour so it aborts the verification on the first non-zero block it encounters.
I'll add it to the project list.
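For illustration only, here is a minimal sketch of the two behaviours described above: counting every non-zero block versus aborting at the first one. The function name, block size, and abort flag are hypothetical; this is not nwipe's actual code.

```c
/* Hypothetical sketch, not nwipe source: read a device in fixed-size
 * blocks and either count every non-zero block or abort on the first
 * one, depending on the abort_on_first flag. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define BLOCK_SIZE 65536

static int verify_zero(const char *device, int abort_on_first)
{
    unsigned char buf[BLOCK_SIZE];
    long long nonzero_blocks = 0;
    ssize_t n;

    int fd = open(device, O_RDONLY);
    if (fd < 0) {
        perror("open");
        return -1;
    }

    while ((n = read(fd, buf, sizeof(buf))) > 0) {
        for (ssize_t i = 0; i < n; i++) {
            if (buf[i] != 0) {
                nonzero_blocks++;
                if (abort_on_first) {
                    fprintf(stderr, "non-zero data found, aborting verification\n");
                    goto done;
                }
                break;  /* count this block once, move on to the next */
            }
        }
    }

done:
    close(fd);
    printf("non-zero blocks: %lld\n", nonzero_blocks);
    return nonzero_blocks == 0 ? 0 : 1;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <device> [--abort-on-first]\n", argv[0]);
        return 2;
    }
    return verify_zero(argv[1], argc > 2);
}
```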
Awesome, thank you!
Would it be possible for this option to also apply to the built-in verify phase after an overwrite, rather than only to explicit verify passes?
Also, I was curious about the failure behavior for overwriting as well. If nwipe detects some error during an overwrite phase, does it continue on anyway? If so, would it be possible to have nwipe stop at the first write error as well? Assuming this behavior could be set by the same option for verify failures, maybe the new option could be something like --stop-after-failures [#], where you can specify a number of failures to stop after. For our use-case we would want to stop after 1 failure whenever overwriting or verifying.
We would prefer this behavior since we are erasing donated drives for reuse in a refurb non-profit. If any drive fails health checks (via HD Sentinel), or fails during overwriting or verifying, we will just physically destroy the drive. So, there is no need for us to continue taking time to overwrite or verify the rest of the drive once a single error has been hit.
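To make the shape of that request concrete, here is a rough sketch of how a threshold like the proposed --stop-after-failures [#] could work. The option does not exist in nwipe today and the names below are made up for illustration; both the overwrite and verify code paths would feed the same counter.

```c
/* Hypothetical sketch of the proposed --stop-after-failures [#] option
 * (not an existing nwipe option): keep one running failure count and
 * abort once it reaches the configured limit, regardless of whether
 * the failures came from writing or from verifying. */
#include <stdio.h>

struct wipe_context {
    long errors;      /* write or verification failures seen so far */
    long max_errors;  /* value given to --stop-after-failures        */
};

/* Record a failure; returns 1 when the wipe should be aborted. */
static int record_failure(struct wipe_context *ctx, const char *phase,
                          long long offset)
{
    ctx->errors++;
    fprintf(stderr, "%s failure at offset %lld (%ld of %ld allowed)\n",
            phase, offset, ctx->errors, ctx->max_errors);
    return ctx->errors >= ctx->max_errors;
}

int main(void)
{
    struct wipe_context ctx = { 0, 1 };  /* --stop-after-failures 1 */

    /* A simulated write error: with a limit of 1 the wipe stops here. */
    if (record_failure(&ctx, "write", 4096)) {
        fprintf(stderr, "failure limit reached, aborting wipe\n");
        return 1;
    }
    return 0;
}
```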
As a somewhat related, but possibly much bigger/different request: would it be possible/feasible to verify at the same time as overwriting? So, each bit that is written is immediately verified and the process can stop right then if there is any failure. This way, with the erasure and verification passes combined, the process could fail faster than doing a full overwrite and then failing during the verification. I believe this would make the behavior more similar to how badblocks can operate, but we are hoping to use a tool that is more modern, robust, and maintained than badblocks.
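A minimal sketch of that write-then-read-back idea, under the assumption that each block is verified immediately after it is written; none of this is nwipe's implementation, and in practice the page cache would have to be bypassed (e.g. O_DIRECT) for the read-back to actually hit the disk.

```c
/* Hypothetical sketch, not nwipe code: write a pattern block, read it
 * straight back, and stop at the first I/O error or mismatch, so a
 * failing drive is caught without a full overwrite pass followed by a
 * separate verify pass. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BLOCK_SIZE 65536

/* Returns 0 on success, non-zero on an I/O error or verify mismatch. */
static int wipe_and_verify_block(int fd, off_t offset,
                                 const unsigned char *pattern)
{
    unsigned char readback[BLOCK_SIZE];

    if (pwrite(fd, pattern, BLOCK_SIZE, offset) != BLOCK_SIZE)
        return -1;  /* write error: caller stops immediately */

    /* Note: without O_DIRECT this read may be served from the page
     * cache rather than the physical disk. */
    if (pread(fd, readback, BLOCK_SIZE, offset) != BLOCK_SIZE)
        return -1;  /* read error: caller stops immediately */

    return memcmp(pattern, readback, BLOCK_SIZE) != 0;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <device>\n", argv[0]);
        return 2;
    }

    int fd = open(argv[1], O_RDWR);
    if (fd < 0) {
        perror("open");
        return 2;
    }

    unsigned char pattern[BLOCK_SIZE];
    memset(pattern, 0x00, sizeof(pattern));  /* zero-fill pass */

    off_t size = lseek(fd, 0, SEEK_END);
    for (off_t offset = 0; offset + BLOCK_SIZE <= size; offset += BLOCK_SIZE) {
        if (wipe_and_verify_block(fd, offset, pattern) != 0) {
            fprintf(stderr, "failure at offset %lld, stopping\n",
                    (long long)offset);
            close(fd);
            return 1;
        }
    }

    close(fd);
    return 0;
}
```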
Finally, a bit more unrelated... looking into this verification behavior made me curious about how exactly --method=random passes are verified. It appears as though an entire overwrite of random data is done, and then after that is complete a verification is done. How can this random data be verified if nwipe doesn't have a record of what exact random data was written during the first pass? Is there some fancy way the same random data is being generated twice without having to store a separate complete record of the random data to verify against from the overwrite pass?
Would it be possible for this option to also apply to the built-in verify phase after an overwrite, rather than only to explicit verify passes?
Yes
Also, I was curious about the failure behavior for overwriting as well. If nwipe detects some error during an overwrite phase, does it continue on anyway? If so, would it be possible to have nwipe stop at the first write error as well? Assuming this behavior could be set by the same option for verify failures, maybe the new option could be something like --stop-after-failures [#], where you can specify a number of failures to stop after. For our use-case we would want to stop after 1 failure whenever overwriting or verifying.
Immediately stopping on I/O errors is nwipe's default behaviour already. Others have requested an option to keep writing past errors or reverse wiping.
As a somewhat related, but possibly much bigger/different request: would it be possible/feasible to verify at the same time as overwriting? So, each bit that is written is immediately verified and the process can stop right then if there is any failure. This way, with the erasure and verification passes combined, the process could fail faster than doing a full overwrite and then failing during the verification. I believe this would make the behavior more similar to how badblocks can operate, but we are hoping to use a tool that is more modern, robust, and maintained than badblocks.
That's certainly possible; however, it's the first time anybody has asked for that, so it depends on how popular it might be as to whether it gets implemented or not.
Finally, a bit more unrelated... looking into this verification behavior made me curious about how exactly --method=random passes are verified. It appears as though an entire overwrite of random data is done, and then after that is complete a verification is done. How can this random data be verified if nwipe doesn't have a record of what exact random data was written during the first pass? Is there some fancy way the same random data is being generated twice without having to store a separate complete record of the random data to verify against from the overwrite pass?
That's because it's pseudo-random data. Each of the pseudo-random number generators provided with nwipe is seeded with x number of bits. From these bits a stream of random numbers is generated. This stream of random numbers can be reproduced identically as long as you start with the same seed. Nwipe generates a random seed for each drive being wiped. It records this seed for use later in the verification pass; then, when nwipe exits, all the seeds are forgotten, i.e. not saved anywhere. The next lot of drives wiped then have new random seeds generated to start off the random number generators.
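A tiny illustration of why storing only the seed is enough, using rand_r() purely as a stand-in for the PRNGs nwipe actually ships (the seed value and stream length here are made up):

```c
/* Illustration only: re-seeding a PRNG with the same value reproduces
 * the same stream, so a verify pass can regenerate the "written" data
 * without keeping a copy of it. rand_r() stands in for nwipe's PRNGs. */
#include <stdio.h>
#include <stdlib.h>

#define STREAM_LEN 8

int main(void)
{
    unsigned int seed = 0xC0FFEE;  /* per-drive seed chosen at wipe time */

    /* "Overwrite" pass: generate the stream (and pretend to write it). */
    unsigned int state = seed;
    int written[STREAM_LEN];
    for (int i = 0; i < STREAM_LEN; i++)
        written[i] = rand_r(&state);

    /* "Verify" pass: re-seed with the same value and compare. */
    state = seed;
    for (int i = 0; i < STREAM_LEN; i++) {
        if (rand_r(&state) != written[i]) {
            printf("mismatch at %d\n", i);
            return 1;
        }
    }
    printf("verify pass reproduced the stream from the seed alone\n");
    return 0;
}
```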
I believe this would make the behavior more similar to how badblocks can operate, but we are hoping to use a tool that is more modern, robust, and maintained than badblocks.

Genuine question: can you link to a modern alternative to badblocks?
It appears as though using the verify_zero method does not stop quickly on a drive that I know has data on it, while using something like badblocks with the option to stop after the first non-zero byte fails basically instantaneously.

Is there some reason/use-case for why nwipe continues verifying a drive even after it has encountered a non-zero byte? Is it possible to change this behavior to stop verifying after the first non-zero byte, or to possibly add an option to have nwipe stop verifying after the first non-zero byte it encounters?

(I'm assuming this behavior/change would also apply to verify_one.)