Closed: PartialVolume closed this issue 4 years ago
It looks pretty good. We do use it in GUI mode. If we can get it to show all drives without scrolling that would be great, or we may need a different screen setup or cut down and not do 14 at a time. I like your idea about --nogui; I've never tried it that way, but if it shows what you're indicating in your email that would work fine. It's important to us to know which drive(s) failed, because otherwise (if the program shuts down when we try to scroll) we need to take a few drives out and run it again trying to figure out which one it is. Greg

In a message dated 3/11/2020 3:44:15 PM Central Standard Time, notifications@github.com writes:
Nwipe's logs can be quite verbose! While this can be a good thing, it can also be a pain to scroll back up looking at every line to check for errors when you're in --nogui mode. This is especially tedious if you're using a multi-pass method such as DoD with verification and blanking and are wiping fourteen drives simultaneously. Wouldn't it be so much easier if the status of the wipe was summarised, so you only needed to glance at the screen to check that all those drives wiped successfully?
This is my proposed summary text that would appear at the very end of the log on completion of all wipes. Please take a look, and if you have any comments please let me know. The summary screen must have a maximum width of 80 characters by 30 lines, so an extra few lines could be added but no more columns.
@gkerr4400
If we can get it to show all drives without scrolling that would be great, or we may need a different screen setup or cut down and not do 14 at a time.
I could easily add an option that allows you to run nwipe so that you can see all 14 drives in an 80x26 terminal. Currently one drive occupies 2 lines plus a blank line. I could add a -s / --single-line option that has one drive use 1 line instead of 3 without losing any on-screen info. The line format would be something like: [100%] /dev/sdc, Drive Model, Drive S/N, [w-] [70MB/s]
The [w-] means writing and syncing and would change as it was wiping.
[w---] writing [-s--] syncing [--v-] verifying [---b] blanking
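To make that concrete, here is a purely hypothetical mock-up of three drives in the proposed single-line view; the drive model and serial numbers are invented and the final format may well differ:

```
[100%] /dev/sda, ST3400071FCV, 3KRXXXX1, [---b] [68MB/s]
[ 62%] /dev/sdb, ST3400071FCV, 3KRXXXX2, [--v-] [70MB/s]
[ 45%] /dev/sdc, ST3400071FCV, 3KRXXXX3, [w---] [71MB/s]
```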
You mentioned before about not scrolling due to it crashing out; I'm pretty sure you won't find that's the case with 0.27 onwards. What version of nwipe are you running?
If I remember correctly there was a bug in the code prior to 0.27 whereby hitting any key after all wipes had finished caused nwipe to exit, rather than just the return key causing an exit as it should have. That problem is fixed in 0.27 onwards.
I will check on it and see what version we're running.
I checked and we're running version 0.24, so it's pretty far behind. Right now I'm trying to figure out how to compile the new 0.28, but I've never done that before so I've got a lot of reading to do.
Yes, there have been a lot of fixes since 0.24. If you're running a Debian-based distro like Ubuntu etc. then just follow the commands here: https://github.com/martijnvanbrummelen/nwipe#debian--ubuntu-prerequisites
Also do a
sudo apt install git
(that line is missing from our README)
Download the code to a directory with:
git clone https://github.com/martijnvanbrummelen/nwipe.git
(that line is missing from our README too)
Then do the compile as per these instructions https://github.com/martijnvanbrummelen/nwipe#compilation
Any problems let me know.
I'll update the README later as it's missing a couple of things.
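Pulling those steps together, the whole sequence on a Debian/Ubuntu machine looks roughly like the sketch below. Treat it as a sketch rather than the canonical README text; the package list is taken from the build script further down this thread.

```
# Rough end-to-end build sequence (sketch only, not the official README steps)
sudo apt install git build-essential pkg-config automake autotools-dev \
                 libncurses5-dev libparted-dev dmidecode
git clone https://github.com/martijnvanbrummelen/nwipe.git
cd nwipe
./init.sh          # prepare the build system, as the build script below does
./configure
make
sudo ./src/nwipe   # run the freshly compiled binary
```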
Actually, it would probably be easier if I just sent you a script. What distro are you using ?
I won't say no to the script; it definitely would make it easier in the future. But with your help on the last email I got it. Still not sure what I was doing wrong, but all of a sudden it started working.
Now running 0.28. 14 drives, 1 pass, showing 47 hrs 38 minutes; I think it will probably finish quicker though. I'm putting together a Dell T7500 which will be two six-core processors with hyperthreading for a total of 24 processor cores, so it should be a little different. I'll let you know how that goes in a day or so.
Ubuntu 19.10
Here's the script. It will create a directory in your home folder called 'nwipe_master'. It installs all the tools required to compile the software (build-essential) and all the libraries that nwipe requires (libparted etc.). It downloads the latest master copy of nwipe from GitHub, then compiles the software and runs the newly built nwipe. It doesn't write over the version of nwipe (0.24) that's installed from the repository. To run the new (0.29 release candidate) manually you would run it like this: sudo ~/nwipe_master/nwipe/src/nwipe
You can run the script multiple times; the first time it's run it will install all the libraries, and on subsequent runs it will just say that the libraries are up to date. As it always downloads a fresh copy of the nwipe master from GitHub, you can always stay up to date. Just run it to get the latest version of nwipe. It takes all of 11 seconds on my i7.
As the old 0.24 copy of nwipe is still installed on your system, if you typed nwipe from any directory it would always run the original Ubuntu 19.10 repository copy of nwipe 0.24. To run the latest nwipe you have to explicitly tell it where the new copy is, i.e. in the directory ~/nwipe_master/nwipe/src. That's why you would run it by typing: sudo ~/nwipe_master/nwipe/src/nwipe
Alternatively you could cd to the directory and run it like this:
cd ~/nwipe_master/nwipe/src
./nwipe
Note the ./; that means only look in the current directory for nwipe. If you forgot to type ./ the computer would run the old 0.24 nwipe.
Once you have copied the script below into a file called buildnwipe, you need to give the file execute permissions before you can run it:
chmod +x buildnwipe
#!/bin/bash
cd "$HOME"
nwipe_directory="nwipe_master"
mkdir $nwipe_directory
cd $nwipe_directory
sudo apt install build-essential pkg-config automake libncurses5-dev autotools-dev libparted-dev dmidecode git
rm -rf nwipe
git clone https://github.com/martijnvanbrummelen/nwipe.git
cd "nwipe"
./init.sh
./configure
make
cd "src"
sudo ./nwipe
Apologies if I've gone into too much detail! Hope it all makes sense. I've also sent you the buildnwipe script via Google Drive. See https://github.com/martijnvanbrummelen/nwipe#automating-the-download-and-compilation-process
Now running 0.28
Try resizing the terminal window; it should look real nice now, unlike 0.24 where half, if not all, of the windows/banner/footer would have vanished.
I'm putting together a Dell T7500 which will be two six-core processors with hyperthreading for a total of 24 processor cores, so it should be a little different. I'll let you know how that goes in a day or so.
Sounds good, although I've got my doubts about whether it will speed things up dramatically, depending on what your old system was, as I think it's probably an I/O bottleneck, not CPU. I could be wrong though!
On my laptop I always thought nwipe used a lot of CPU until I realised that the widgets that showed CPU usage were summing system + nice + user and I/O wait. So although the widget made it look like the CPU was maxed out, in fact it was hardly doing anything while wiping drives. You're better off looking at what 'top' says is using the CPU.
Assuming you have an 8 Gigabit Fibre Channel, that gives you a maximum bandwidth of about 1000 MB/sec; divide that by 14 drives and you get 71 MB/sec per drive. It will be interesting to see what the 0.28 version of nwipe tells you the MB/sec speed is for each drive, and also the total MB/sec speed. As long as you have PCI Express or PCI-X you should be fine, as I think their bandwidth is also about 1000 MB/sec. I don't know what drives you've got, but I just looked up the Seagate ST3400071FCV and that has a speed of 2 Gb (250 MB/sec), so four such drives would hit the maximum bandwidth of an 8 gigabit fibre channel. All this fibre channel stuff is new to me so I could quite easily be talking bs :-)
There's an interesting article on PCIe bus speeds, in particular the table 'BUS & Theoretical Bandwidth Available', which shows the speeds for different versions of PCIe. Looking at the Dell T7500 spec, it mentions the expansion bus as being 2 x PCI Express 2.0, and referring back to the article that would be a speed of 8 GB/sec, which should be a nice match for your HP disk rack. I'm not sure, but I think you could actually put 2 x Qlogic cards in the computer and wipe 28 drives simultaneously?
Closed by #223
Finally was able to get to it today. It made no real difference using an i5 or i7 or dual 6-core Xeon on wiping speed (all using the same 8 gig fiber network). The slowest drive in the rack took 26 hrs 8 min 24 sec; the fastest was 26 hrs 8 min 21 sec. I looked and couldn't find a switch that would allow you to autonuke the array and not wipe the boot drive. I know you can check and uncheck the drives, but is there a switch to use in a batch file or script? Greg
As I suspected, the CPU, even an i5, is so much faster than the I/O. The speed bottleneck will be in the PCIe bus on the motherboard (as opposed to the CPU) and maybe the Qlogic card, depending on what its model/spec is. I was looking at a motherboard that supports faster buses such as PCIe 4 for just over £100 ($116). With the right Qlogic cards you could put two or more in one PC with two racks. nwipe would handle 28 drives no problem.
Thing is, when I did those calculations I worked out that 4 fibre drives would max out the bandwidth of the 8 Gigabit fibre. So to wipe more drives faster you would need more Qlogic cards in one PC, with a maximum of 4 drives in each rack.
I don't know what drives you've got, but I just looked up the Seagate ST3400071FCV and that has a speed of 2 Gb (250 MB/sec), so four such drives would hit the maximum bandwidth of an 8 gigabit fibre channel.
As mentioned above, there may be little point in filling the disk rack with drives; to get through the most drives you probably want 4 drives per rack and 4 racks per PC. That may bring the time taken to wipe 14 drives down from 26 hrs to around 7.5 hrs (26 hrs x 71/250 ≈ 7.4 hrs, if each drive can then run at its full 250 MB/sec).
More recent versions of nwipe contain a switch, --exclude=/dev/sda etc., which will exclude drives. However, if you specifically name drives on the command line it only wipes those named drives.
i.e. say you have /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde, you would type
./nwipe --autonuke /dev/sdc /dev/sdd /dev/sde
it would only wipe the named drives.
If you wanted, you could also do ./nwipe --exclude=/dev/sda and all the other drives would show up in the GUI; sda would not be listed.
./nwipe --help will give you the full list of options.
Make sure you're running nwipe v0.28/0.29.
Are you able to tell me the make/model of the PC you're using if it's a brand, or the model of motherboard if it's a home build, and the model of the Qlogic cards you're using? I can then help you figure out what motherboard you might need to maximise the number of discs you can wipe.
I'm in isolation at the moment due to Covid-19, which I think I came down with, so I haven't had the chance to set up the fibre channel gear that I've got, as it's at a different location. I should be out of isolation some time next week, so I'm looking forward to having a play with that.
You mentioned using it in a batch file or script; for that you would need to use the --nogui option:
./nwipe --nogui --autonuke /dev/sdc /dev/sdd etc.
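If it helps, a minimal wrapper along these lines could be dropped into a batch script. It is only a sketch: the build path, device names and log location are placeholders, and only options already mentioned in this thread are used.

```
#!/bin/bash
# Minimal unattended wipe wrapper (sketch only; adjust devices and paths).
NWIPE=~/nwipe_master/nwipe/src/nwipe
LOG=~/wipe-$(date +%Y%m%d-%H%M%S).log

# Wipe only the named drives; anything not listed (e.g. the boot drive) is left alone.
sudo "$NWIPE" --nogui --autonuke /dev/sdc /dev/sdd 2>&1 | tee "$LOG"
```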
I did some measurements with a WD10EZEX-75H SATA 6 Gbps drive, on an i5-8400, Z370 chipset computer running Fedora 31. This was done with dd if=/dev/zero of=/dev/sda oflag=dsync bs=xxx count=yyy, so it is not exactly the same as nwipe. Note that using the physical block size of 4096 as nwipe uses (for drives like this) might be improved with larger writes. (I started with a block size of 512 bytes to satisfy my curiosity about how much penalty there is for the drive's 512-byte emulation.) I'm sure different controllers and associated drivers will give different results, and the mention of the fibre controller in the previous email is what prompted me to bring this up now.

However, despite using dd in this little measurement, its throughput of 135 MB/s using a 16,777,216 (16 MB) block size is a little faster than nwipe 0.27 gives (90-100 MB/s) on the same computer and OS when it uses a 4096 block size. So nwipe is already doing better than dd. And this computer running nwipe currently uses less than 1 GB of memory, and around 5% CPU.

I'll note that dd continues to do better with even larger block sizes, and postulate that nwipe might do better as well if multiples of the drive's physical block size were used, the downside being additional memory use and changes to the size of buffers to be written. But that is probably less work and architectural change than doing disk buffering in nwipe. I also did a little research and found that /dev/random returns a maximum read of 32 MB as of Linux 3.16.
BS             COUNT       MB/s      Run seconds  Run hours  X faster
512            16,777,216  0.0603    46,799.0     13.00
4,096          2,097,152   0.4890    15,138.7     4.21
32,768         262,144     3.8000    2,278.2      0.63       7.77
262,144        32,768      26.8000   320.0        0.09       54.81
2,097,152      4,096       75.4000   113.9        0.03       154.19
16,777,216     512         135.0000  63.8         0.02       276.07
134,217,728    64          161.0000  53.3         0.01       329.24
1,073,741,824  8           151.0000  56.9         0.02       308.79
8,589,934,592  1           170.0000  12.6         0.00       347.65
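For anyone wanting to repeat a sweep like the one above, something along the following lines should do it. It is a sketch only: /dev/sdX is a placeholder, the run is destructive to that drive, and the total written per block size is capped at 1 GiB here rather than the much larger counts used above.

```
#!/bin/bash
# Block-size sweep with dd in dsync mode (destructive: overwrites the target drive).
DEV=/dev/sdX   # placeholder, substitute the drive under test

for bs in 512 4096 32768 262144 2097152 16777216; do
    count=$(( 1073741824 / bs ))   # keep the total bytes written constant (1 GiB)
    echo "bs=$bs count=$count"
    sudo dd if=/dev/zero of="$DEV" oflag=dsync bs="$bs" count="$count" 2>&1 | tail -n 1
done
```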
The dd comparison in table format, as rendered by GitHub in a browser, as per @mdcato's results above.
It shows how the amount of data written per write operation matters in dsync mode.
This is something I'll be looking at in nwipe 0.30. @mdcato, I'll reply to your comments above shortly. Optimising nwipe's speed is definitely on the radar.
BS | COUNT | MB/s | Run seconds | Run hours | X faster |
---|---|---|---|---|---|
512 | 16,777,216 | 0.0603 | 46,799.0 | 13.00 | |
4,096 | 2,097,152 | 0.4890 | 15,138.7 | 4.21 | |
32,768 | 262,144 | 3.8000 | 2,278.2 | 0.63 | 7.77 |
262,144 | 32,768 | 26.8000 | 320.0 | 0.09 | 54.81 |
2,097,152 | 4,096 | 75.4000 | 113.9 | 0.03 | 154.19 |
16,777,216 | 512 | 135.0000 | 63.8 | 0.02 | 276.07 |
134,217,728 | 64 | 161.0000 | 53.3 | 0.01 | 329.24 |
1,073,741,824 | 8 | 151.0000 | 56.9 | 0.02 | 308.79 |
8,589,934,592 | 1 | 170.0000 | 12.6 | 0.00 | 347.65 |
I'm using a Dell OptiPlex 9010 for the i5 and i7 processors (16 GB RAM). I'm using a Dell Precision T7500 for the dual 6-core Xeon (32 GB RAM). I've got a 64-bit PCI-X card to try in the T7500, but I think that's going to be slower than the PCIe card. The vast majority of the fiber drives are IBM 17P9905 450 GB 15K drives.
Get healthy! We are just heading into it... starting to shut down.
@PartialVolume, I coded up an option to do a larger write size in pass.c and did some measurements. As you probably already guessed, there was no (significant) improvement in throughput, as the majority of time is spent syncing, not in fewer writes of larger data. Better to have the data than blindly go down the wrong path.
Thanks @mdcato. Quite possibly the only place left to optimise, to improve the throughput by maybe 5% for a zero fill and as much as 15-20% for a PRNG wipe, is that moment in time when you see the hard drive lamp go out momentarily, i.e. when we are writing, not syncing. We need to be writing and syncing simultaneously, which brings me back to some sort of write using the O_DIRECT flag and a dual-buffered, threaded write of the PRNG buffer. This is just theory on my part though; I'd need to write some test code to see how much it actually improved speed. As for the periodic sync we do, I realise that if we only sync'ed at the end, like we used to, that just shifts the delay. In my case that final sync has to flush nearly 24 GB of disc cache, which takes a while and causes nwipe to appear to hang at the end while it's still trying to empty the cache. So I like the more frequent syncing we do now, which has other benefits in terms of catching a failed write early, unlike a single sync at the end.
So I think the way to go is to work out how to optimise away that moment in time when the disc stops being written to (while we are writing, not syncing). We can't do it using a write and then a sync as we currently do; the only option is for nwipe to take over the job of syncing from the operating system, and that can only be done by using the O_DIRECT flag when we open access to the device, combined with two 100 MB buffers, which are required for a PRNG wipe. While one thread is writing out the contents of one buffer to disc, another thread running concurrently is preparing the random data in the second buffer, then they switch.
This is all just theory on my part though; it would need to be tested to see if a double-buffered, threaded model would actually see the 5-20% gain in throughput I think it should.
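One cheap way to gauge the O_DIRECT part of that idea, before writing any threaded C, would be to compare dsync writes against direct writes with dd. This is a sketch only: /dev/sdX is a placeholder, both commands overwrite that drive, and it says nothing about the PRNG double-buffering side of the proposal.

```
# Same data volume and block size; only the output flag differs.
# oflag=dsync  : each write waits for the data to reach the disc (close to write-then-sync)
# oflag=direct : bypass the page cache entirely (the O_DIRECT behaviour discussed above)
sudo dd if=/dev/zero of=/dev/sdX oflag=dsync  bs=4M count=2048
sudo dd if=/dev/zero of=/dev/sdX oflag=direct bs=4M count=2048
```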