Open Knogle opened 5 months ago
I think so, I've created a 32-bit VM and was able to build the code there. But isn't it i386 then? Not sure, to be honest.
If you are just building nwipe then yes, I would have thought it built as 32-bit in the 32-bit VM. What does `uname -a` return? It should say i586 or i686, but not x86_64.
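A quick way to check on the machine in question (nothing nwipe-specific here):

```shell
# Print the kernel's machine hardware name. On a 32-bit x86 install this
# typically reports i586 or i686; on a 64-bit install it reports x86_64.
arch="$(uname -m)"
echo "machine: $arch"
```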
I don't remember whether you are building ShredOS from source, but this is how I build a 32-bit ShredOS including all the applications, nwipe among them. For testing 32-bit I generally build a 32-bit ShredOS and also build nwipe on a 32-bit distro.
For ShredOS you just select the correct architecture and variant in menuconfig, and you can build 32-bit on an x86_64 host.
The way I change the architecture and architecture variant is shown below, from x86_64 (variant nocona) to i386 (variant i586).
Architecture options being:
ARC
ARC (big endian)
ARM (little endian)
ARM (big endian)
AArch64 (little endian)
AArch64 (big endian)
i386
m68k
Microblaze AXI (little endian)
Microblaze non-AXI (big endian)
MIPS (big endian)
MIPS (little endian)
MIPS64 (big endian)
MIPS64 (little endian)
OpenRISC
PowerPC
PowerPC64 (big endian)
PowerPC64 (little endian)
RISCV
s390x
SuperH
SPARC
SPARC64
x86_64
Xtensa
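For reference, in Buildroot-based trees these menuconfig choices end up as symbols in `.config`; a sketch (symbol names follow upstream Buildroot and may differ between versions, so verify against your tree):

```
# 64-bit ShredOS, variant nocona:
BR2_x86_64=y
BR2_x86_nocona=y

# 32-bit ShredOS, variant i586 (the alternative selection):
# BR2_i386=y
# BR2_x86_i586=y
```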
x86_64 architecture variants are (I choose nocona for x86_64 so ShredOS runs on processors new and old, back to the 64-bit Pentium 4; there were both 32-bit and 64-bit Pentium 4 processors, and I think all Intel processors before the Pentium 4 were 32-bit):
x86-64
x86-64-v2
x86-64-v3
x86-64-v4
nocona
core2
corei7
nehalem
westmere
corei7-avx
sandybridge
ivybridge
haswell
broadwell
skylake
atom
bonnell
silvermont
goldmont
goldmont-plus
tremont
sierraforest
grandridge
knightslanding
knightsmill
skylake-avx512
cannonlake
icelake-client
icelake-server
cascadelake
cooperlake
tigerlake
sapphirerapids
alderlake
rocketlake
graniterapids
graniterapids-d
opteron
opteron w/ SSE3
barcelona
bobcat
jaguar
bulldozer
piledriver
steamroller
excavator
zen
zen 2
zen 3
zen 4
i386 (32-bit) variants (I select i586, which allows ShredOS to run on the first Pentium processors and everything after, but not earlier processors like the 486, 386, or 286):
i486
i586
x1000
i686 pentium pro
pentium MMX
pentium mobile
pentium2
pentium3
pentium4
prescott
nocona
core2
corei7
nehalem
westmere
corei7-avx
sandybridge
ivybridge
core-avx2
haswell
broadwell
skylake
atom
bonnell
silvermont
goldmont
goldmont-plus
tremont
sierraforest
grandridge
knightslanding
knightsmill
skylake-avx512
cannonlake
icelake-client
cascadelake
cooperlake
tigerlake
sapphirerapids
alderlake
rocketlake
graniterapids
graniterapids-d
k6
k6-2
athlon
athlon-4
opteron
opteron w/ SSE3
barcelona
bobcat
jaguar
bulldozer
piledriver
steamroller
excavator
zen
zen 2
zen 3
zen 4
AMD Geode
Via/Cyrix C3 (Samuel/Ezra cores)
Via C3-2 (Nehemiah cores)
IDT Winchip C6
Ahhhh alright, sounds great! Is there a way I can build the current PR (this one) together with ShredOS to test this?
I've built with `Linux debian-i386 6.1.0-23-686-pae #1 SMP PREEMPT_DYNAMIC Debian 6.1.99-1 (2024-07-15) i686 GNU/Linux`.
At least seems to run OK on i686.
But on these old CPUs without AES-NI, Fibonacci is a lot faster.
Yes, to build ShredOS with your modified version you need to do a release from your fork, making sure you set the aes-ctr branch as the target. Then, in your local copy of ShredOS, you need to edit a couple of files in packages/nwipe to change the sha1 hash to match your release and change the URL to point to your release. Then just rebuild ShredOS and it will pull in and compile your version of nwipe.
If you're unsure of anything, let me know and I'll go into more detail.
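For illustration, the two edits would look roughly like this (the file layout and variable names follow Buildroot package conventions; the release tag, user name, and hash are placeholders you must replace with your own):

```
# packages/nwipe/nwipe.mk -- point the package at your own release:
NWIPE_VERSION = <your-release-tag>
NWIPE_SITE = https://github.com/<your-user>/nwipe/releases/download/$(NWIPE_VERSION)

# packages/nwipe/nwipe.hash -- must match the tarball you published:
sha1  <sha1-of-your-tarball>  nwipe-$(NWIPE_VERSION).tar.gz
```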
Ahh, thanks! Doing it this way, it worked really well now. Thanks for that. In the end I had still forgotten to edit the hash.
Currently wiping 4x 16TB drives in order to put them on eBay, running really well.
Had to double-check, but here we had the same issue: depending on the architecture the seed length was different, due to `unsigned long` being used instead of `uint64_t`.
@Knogle I've been thinking about whether to squash these 34 commits into a single commit; however, I'm conflicted as to whether it's necessary in this case. Your commit comments are informative, but there are a few where you reverse a previous commit, so the commit history would be tidier squashed into a single commit.
I just wondered if you had a preference?
If I did squash the commits into a single commit, would you want to do it in git and write the new commit comment for this branch, or do you want me to do it on GitHub with a squash merge, in which case I write the new commit comment?
Hey, you could squash them by yourself if that's okay :)
yes, no problem.
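For the record, the squash can be done either locally with `git merge --squash` or via GitHub's "Squash and merge" button. A self-contained sketch of the local route (throwaway repo, hypothetical branch and commit names):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main
git config user.email dev@example.com
git config user.name dev
echo base > file && git add file && git commit -qm "initial commit"

# A feature branch with several small commits...
git checkout -qb aes-ctr
echo one >> file && git commit -qam "add AES-CTR"
echo two >> file && git commit -qam "revert part of previous commit"

# ...collapsed into a single commit on main:
git checkout -q main
git merge --squash -q aes-ctr
git commit -qm "Add AES-CTR PRNG (squashed)"
count=$(git rev-list --count main)
echo "commits on main: $count"
```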
I think what's worth noting are 55472fb0e85ad5b9ea2ae9eed1f1b38f7508db8f and fe493cfbbd76c3c8e03a7237138690d02682fe1e, where AES-CTR is set as the default option for AES-NI-enabled systems, otherwise falling back to Xoroshiro, and to Lagged Fibonacci on i686.
Noted, I will read through all the current commit comments and include information I think is important. My comment will probably lean towards being more verbose rather than succinct.
@Knogle Can you also hold fire on producing any new branches based on your existing branch? I'm concerned I'm going to have quite a few merge conflicts to resolve when I come to merging your subsequent branches. So I'd like to get your existing work merged, so you can then update your own fork before creating any new branches, once all your existing PRs have been merged. Thanks.
Sure, i will do so :)
Currently conducting some tests on ARM :) Unfortunately my SD card is massively limiting.
Yes, I've run 0.37 on ARM: Ubuntu with an xfce desktop on an RPi 4 with 8 GB RAM, configured to boot via USB rather than microSD. It seemed to work OK; I've not done any speed tests to see how fast a drive attached via USB will be, subject to the drive's limitations.
@PartialVolume Maybe a consideration here: I currently have some time left, so I can squash the commits. Regarding external libraries, there are a few things we could do.
We could copy the relevant code out of the OpenSSL library and include it in our tree without an external dependency; the OpenSSL licence allows us to do so. Another, more elegant approach: we include a stable OpenSSL version that works fine for us as a submodule in the git project, pinned to a specific release tag/commit. This gives us version locking, so we always build with the same OpenSSL version and only change it when we choose to. I have already implemented the second approach in other projects and for different libraries (libmariadb in my case), where I wanted the code to always function the same way.
Looks like this.
Yes, the second approach could work for us.
Ahoy, squashed the commits, and also created a second approach using submodules here: https://github.com/martijnvanbrummelen/nwipe/pull/600
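The submodule pinning described above can be sketched end-to-end with local stand-in repositories (all repo names, tags, and paths here are hypothetical; a real setup would point at the upstream OpenSSL URL and one of its release tags):

```shell
set -e
work=$(mktemp -d) && cd "$work"

# Stand-in for the upstream library, tagged at a "stable release".
git init -q lib
(cd lib && git config user.email dev@example.com && git config user.name dev \
    && echo lib > lib.c && git add lib.c && git commit -qm "stable release" \
    && git tag v1.0.0)

# The consuming project adds the submodule and pins it to that exact tag.
git init -q -b main app && cd app
git config user.email dev@example.com
git config user.name dev
echo app > main.c && git add main.c && git commit -qm "app"
git -c protocol.file.allow=always submodule add -q "$work/lib" extern/lib
(cd extern/lib && git checkout -q v1.0.0)
git add extern/lib
git commit -qm "Pin extern/lib at v1.0.0"

# The superproject records the exact commit, giving version locking.
pinned=$(git ls-tree HEAD extern/lib | awk '{print $3}')
tagged=$(cd "$work/lib" && git rev-parse 'v1.0.0^{commit}')
echo "pinned=$pinned tagged=$tagged"
```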
In this pull request, I present my implementation of a pseudo-random number generator (PRNG) using AES-CTR (Advanced Encryption Standard in Counter mode) with a 128-bit key. This implementation is designed to produce high-quality random numbers, which are essential for a wide range of cryptographic applications. By integrating with the OpenSSL library and exploiting AES-NI (Advanced Encryption Standard New Instructions) hardware acceleration when available, I ensure both the security and efficiency of the random number generation process. It provides the highest-quality PRNG yet for nwipe, and is a CSPRNG.
Key Features:
Implementation Details:
This PRNG implementation stands as a robust and efficient tool for generating high-quality pseudo-random numbers, crucial for cryptographic operations, secure communications, and randomized algorithms. The combination of AES-CTR mode, OpenSSL's reliability, and the performance benefits of AES-NI hardware acceleration results in a superior random number generator.
I have ensured that the implementation is well-documented with clear comments, making it accessible for review, understanding, and maintenance, following best practices in both software development and cryptographic standards.
I look forward to receiving feedback on this pull request to further improve and ensure the effectiveness of the PRNG implementation.
Test of randomness:
NIST Test Suite:
SmallCrush Test: