Wouldn't it be more resourceful to throw out the Gutmann method and channel time and coding efforts towards the other, more useful methods? Just throwing this out there to hear what you guys think. Clearly chkboom is reading the papers and actually converts the research into (better) code. Hats off for that. But I hate to see this going into a wiping method that has no real-world meaning these days any more.
I say keep it in, it's actually not much additional code and serves as a good reference. Also helpful for cleaning old gear that uses the RLL/MFM encodings Gutmann had in mind.
Besides, speaking of coding efforts, nwipe allows you to change the pseudo-random number generator.
I can see both points of view, here's my take on it.
Throwing out the Gutmann method: While I agree with @Firminator that nobody in their right mind should be doing a 35-pass wipe on any non-RLL/MFM drive, and that nowadays it's not relevant, I don't agree with throwing it out, for the following reasons. I suppose this is more of a marketing reason than an engineering one, but the more methods we have the better. Even though we all probably know a lot of those methods are simple variations on a theme. Sometimes it looks like a lot of willy waving between various intelligence agencies that they feel the need to invent their own variation, but despite that, the perception amongst the non-technical general public is that the more wipe methods there are, the better a wipe program must be. So for two specific cases I would leave it in: that one person in the world who wants to wipe an RLL/MFM drive could be a museum or some long-lost archive that doesn't want to destroy the drives; and the other reason is that we need to double the number of methods we have in order to exceed a certain other commercial company's (beginning with B) total of 22 wipe methods.
Enhancing the existing Gutmann method: As regards changing the existing Gutmann wipe, if the existing method has followed Gutmann's documented method I would want to leave it as it is, unless the change is a correction to a misinterpretation of the documented method.
Adding a new Gutmann Enhanced: However, I have no issue with adding a new method called 'Gutmann Enhanced' with the changes @chkboom proposed. That would be up to @chkboom whether he wants to do it. Adding a new method based on an existing method isn't too much work.
As regards whether time should be spent on this, I think so much depends on personal opinion. I think that programmers should work on whatever they want when it comes to unpaid work, especially when they are new to a project, as working on any part of the code is a good way of learning how the whole program is organised. I started work on nwipe by fixing its numerous segmentation faults, as I love fixing intermittent random faults, and by doing that I learned a huge amount about the entire program.
My personal feature list for nwipe/ShredOS is three extra features: 1. ATA secure wipes, 2. HPA/DCO detection & correction, and 3. production of a professional-looking A4 ShredOS/nwipe PDF certificate. I would love for a graphics designer that's a genius with GIMP to come up with something that could be used as a background image with an nwipe theme; I can provide the text that overlays the A4 JPG using some small PDF writer functions. But these features are things I personally want. I would be more than happy if somebody came along and added them, as there is only limited time I can spend on this, but I totally appreciate that those features might not be what programmers want to spend their time on.
@chkboom I've not looked at your code yet, but I think you are proposing the Gutmann method becomes 37 wipes instead of 35. If you are happy putting your changes into a new method called 'Gutmann Enhanced', it would be down to you to add that method. Having said that, if you are happy adding any more of the standard wipe methods, such as CSEC ITSG-06, NSA 130-1, German VSITR, US Air Force AFSSI-5020, the NSA's random-random-zero method, or any of the many other methods, then please feel free.
I've had a quick look at Peter Gutmann's paper and I can see that you've followed his suggested enhancement by randomising the deterministic part of the wipe. The random wipes before and after already existed in the code.
This changes my earlier comment, because the change is fairly minor and is documented in his procedure. So to be honest I'd let that be merged as is, but with one change: its name should be 'Gutmann Enhanced', as it includes both his suggestions.
We now have a set of 22 overwrite patterns which should erase everything, regardless of the raw encoding. The basic disk eraser can be improved slightly by adding random passes before and after the erase process, and by performing the deterministic passes in random order to make it more difficult to guess which of the known data passes were made at which point. To deal with all this in the overwrite process, we use the sequence of 35 consecutive writes shown below:
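For reference, these are the 27 deterministic patterns from the table in Gutmann's paper (passes 5-31), sketched as a C array. The struct and names here are purely illustrative and are not nwipe's actual code:

```c
/* The 27 deterministic Gutmann patterns (passes 5-31 in his paper).
 * pattern_t and this layout are invented for illustration; nwipe's
 * real pattern table looks different. */
typedef struct {
    int len;             /* pattern length in bytes */
    unsigned char b[3];  /* bytes repeated across the pass */
} pattern_t;

static const pattern_t deterministic[27] = {
    { 1, { 0x55 } }, { 1, { 0xAA } },             /* passes 5-6 */
    { 3, { 0x92, 0x49, 0x24 } },                  /* passes 7-9, */
    { 3, { 0x49, 0x24, 0x92 } },                  /* MFM/RLL-specific */
    { 3, { 0x24, 0x92, 0x49 } },
    { 1, { 0x00 } }, { 1, { 0x11 } }, { 1, { 0x22 } }, { 1, { 0x33 } },
    { 1, { 0x44 } }, { 1, { 0x55 } }, { 1, { 0x66 } }, { 1, { 0x77 } },
    { 1, { 0x88 } }, { 1, { 0x99 } }, { 1, { 0xAA } }, { 1, { 0xBB } },
    { 1, { 0xCC } }, { 1, { 0xDD } }, { 1, { 0xEE } }, { 1, { 0xFF } },
    { 3, { 0x92, 0x49, 0x24 } },                  /* passes 26-28 */
    { 3, { 0x49, 0x24, 0x92 } },
    { 3, { 0x24, 0x92, 0x49 } },
    { 3, { 0x6D, 0xB6, 0xDB } },                  /* passes 29-31 */
    { 3, { 0xB6, 0xDB, 0x6D } },
    { 3, { 0xDB, 0x6D, 0xB6 } },
};
```

The full 35-pass sequence then brackets these with four random passes before and four random passes after.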
@chkboom How much testing have you done on this change, in particular testing whether each wipe is doing what it's supposed to be doing?
One option I've been meaning to add is an auto pause on completion of each pass, so that you can take a quick look at the disc with a hex editor. This would be really useful from a test point of view, but also useful for a user that wants to do their own checks on the wipe. The wipe would continue after pressing the spacebar. The option would be something like --pausepass. There would need to be a small amount of work in the GUI with the messaging, and using the appropriate method for pausing within a thread. I don't know whether you're interested in writing such a thing? I'm trying to offload it :-) but understand if you're not interested.
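In case it's useful as a starting point, here's a minimal sketch of how that pause could work between a wipe thread and the GUI, assuming pthreads; all the names are invented for illustration and this isn't nwipe's actual threading code:

```c
/* Hypothetical --pausepass support: the wipe thread blocks at the
 * end of each pass until the GUI thread signals it to continue
 * (e.g. when the user presses the spacebar). */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t pause_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  pause_cond  = PTHREAD_COND_INITIALIZER;
static bool pass_paused = false;

/* Called by the wipe thread after each completed pass. */
void pausepass_wait( void )
{
    pthread_mutex_lock( &pause_mutex );
    pass_paused = true;
    while( pass_paused )
        pthread_cond_wait( &pause_cond, &pause_mutex );
    pthread_mutex_unlock( &pause_mutex );
}

/* Called by the GUI thread on spacebar to resume the wipe. */
void pausepass_resume( void )
{
    pthread_mutex_lock( &pause_mutex );
    pass_paused = false;
    pthread_cond_signal( &pause_cond );
    pthread_mutex_unlock( &pause_mutex );
}
```

The condition variable avoids busy-waiting, and the while loop guards against spurious wakeups.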
helpful for cleaning old gear that uses the RLL/MFM encodings
Is there some kind of database of known RLL/MFM drives that nwipe could autodetect, and then auto-propose the Gutmann method as the most appropriate wiping method? I've seen that FDDs use MFM, so if someone needs to wipe floppies that would be a first good auto-use of the Gutmann method :) On second thought, they say RLL/MFM HDD drives are pre-IDE era, so you need old (25+ year) hardware. I doubt that ShredOS will even boot on hardware that old.
I think that programmers should work on whatever they want when it comes to unpaid work, especially when they are new to a project, as working on any part of the code is a good way of learning how the whole program is organised. I started work on nwipe by fixing its numerous segmentation faults, as I love fixing intermittent random faults, and by doing that I learned a huge amount about the entire program.
Thanks for pointing this out. Made me rethink what I wrote in my OP earlier. I totally agree.
On second thought, they say RLL/MFM HDD drives are pre-IDE era, so you need old (25+ year) hardware. I doubt that ShredOS will even boot on hardware that old.
That made me smile. My day job involves maintaining some systems that use computer technology from circa 1992: every Intel processor from the 486 and newer, plus Sun systems from the Ultra 1. All still in commercial use, doing important work. They just keep going; once every 5-10 years maybe change a power supply. They will probably be gone in another 5 years .. maybe :-)
I think there is a challenge there. In theory the 486, and maybe the 386, should run ShredOS if I build the kernel for a 486 processor; nwipe only uses about 456K of memory and 3.8M of shared memory. The specs for the 486 are as follows:
Like the 386DX, the 486 can address 4GB of physical memory and manage as much as 64TB of virtual memory. The 486 fully supports the three operating modes introduced in the 386: real mode, protected mode, and virtual real mode. In real mode, the 486 (like the 386) runs unmodified 8086-type software.
As far as MFM/RLL drives go, I think I may have some of those kicking around somewhere. However, the showstopper is probably whether the current buildroot contains the drivers for those old drives.
@Firminator Interesting. Although deprecated, it looks like the MFM/RLL driver is still available in the current kernel that ShredOS is using, so it may well be possible to wipe an old MFM/RLL drive on a 386/486: MFM/RLL drivers, 5.13 kernel
No kidding. That's surprising.
Btw the Amiga 500/1200 used ST506 drives back in the day, and I think they're still widespread, so that could be used to run tests with the Gutmann method if you can get nwipe/ShredOS to compile for that platform and boot. Another project....
I do not see any reason to change the name, as my changes are not a new feature or enhancement; they are a correction to a bug in the implementation. They do not add any additional passes (still 35 passes). The present implementation permutes all the passes, whereas Gutmann states to permute just the deterministic passes (i.e. the 27 non-random ones in the middle).
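To make the difference concrete, here is a rough sketch of the fix as described above; pass_t, shuffle and order_passes are hypothetical names, not the actual code from the pull request:

```c
#include <stdlib.h>

/* Illustrative pass table entry; nwipe's real structure differs. */
typedef struct {
    int is_random;               /* 1 = PRNG pass, 0 = fixed pattern */
    const unsigned char *bytes;  /* repeating pattern, NULL if random */
    int len;
} pass_t;

/* Fisher-Yates shuffle over n entries of the pass table. */
static void shuffle( pass_t *p, size_t n )
{
    for( size_t i = n - 1; i > 0; i-- )
    {
        size_t j = (size_t) rand() % ( i + 1 );
        pass_t tmp = p[i];
        p[i] = p[j];
        p[j] = tmp;
    }
}

void order_passes( pass_t passes[35] )
{
    /* The bug: shuffle( passes, 35 ) permuted everything, moving
     * random passes into the middle and deterministic ones to the
     * ends. Gutmann keeps four random passes at each end and
     * permutes only the 27 deterministic passes between them: */
    shuffle( passes + 4, 27 );
}
```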
As for how I tested it, I ran it on a loopback device, and while each pass was running I checked briefly with a hex editor to confirm it was doing what it's meant to.
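For anyone wanting to repeat that kind of spot check without a hex editor, a trivial standalone program along these lines would do (a hypothetical helper, not part of nwipe; adjust the device path to your loop device):

```c
/* Dump the first 16 bytes of a device so you can eyeball which
 * pattern the current pass is writing. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main( int argc, char **argv )
{
    unsigned char buf[16];
    const char *dev = argc > 1 ? argv[1] : "/dev/loop0";

    int fd = open( dev, O_RDONLY );
    if( fd < 0 )
    {
        perror( "open" );
        return 1;
    }
    if( read( fd, buf, sizeof( buf ) ) != (ssize_t) sizeof( buf ) )
    {
        perror( "read" );
        close( fd );
        return 1;
    }
    for( size_t i = 0; i < sizeof( buf ); i++ )
        printf( "%02X ", buf[i] );
    putchar( '\n' );
    close( fd );
    return 0;
}
```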
The present implementation permutes all the passes, whereas Gutmann states to permute just the deterministic passes (i.e. the 27 non-random ones in the middle).
Thanks for the clarification. I've taken a look at the original code and your changes, and can see the issue with the original code: the whole of the book array was being permuted rather than just the middle deterministic values.
Nice catch, I'm intrigued as to how you first discovered that bug. Did you just come across it as you were examining the method's code or did you discover it from examining the data that was being written to a loop drive and realised the last pass was not random?
@chkboom I've run checks on the randomisation of the deterministic values; it seems to take on average about 200 iterations to fill the pattern table from the book, which is fine. I've checked the data that's being written to disc and it's as expected, with no repeated deterministic values except of course where they appear twice in the book. Confirmed random passes only happen at the start and end of the wipe. Tested with and without verification on all passes. It all looks good! Thanks, much appreciated.
Nice catch, I'm intrigued as to how you first discovered that bug. Did you just come across it as you were examining the method's code or did you discover it from examining the data that was being written to a loop drive and realised the last pass was not random?
I examined the data with hexedit as the drive was being written, and found something didn't seem right.
https://www.cs.auckland.ac.nz/~pgut001/pubs/secure_del.html