VictorTaelin opened this issue 2 years ago:

HVM has fast 60-bit native words, and slightly slower 64-bit boxed words. A close analogue is OCaml, which has unboxed 63-bit integers and only boxed 64-bit ones. We're looking for the fastest hash function on HVM. Keccak is great, but it uses 64-bit words in its internal state, so our best bet so far is to port it from implementations that use 32-bit numbers, such as the JavaScript ones. That would be sub-optimal. What about BLAKE? I wonder if it would be possible to implement it using 60-bit words in its internal state, without having to emulate 64-bit words. It would be a different function, but that is fine. Is there any guideline on how to do so safely? Any reference implementation or pseudocode which is parametric on the word size?
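To make "60-bit words" concrete: a BLAKE-style round needs wrapping addition and bit rotation at the word size. Here is a minimal sketch of that arithmetic at 60 bits, emulated with masked `u64` operations. It is purely illustrative; nothing in it comes from BLAKE3 or HVM.

```rust
// Illustrative only: 60-bit word arithmetic of the kind a BLAKE-style
// round would need, emulated on top of u64.
const WORD_BITS: u32 = 60;
const MASK: u64 = (1u64 << WORD_BITS) - 1;

/// Addition modulo 2^60. Inputs must already be < 2^60, so the
/// intermediate u64 sum cannot overflow.
fn add60(a: u64, b: u64) -> u64 {
    (a + b) & MASK
}

/// Right-rotation within a 60-bit word, for 0 <= n < 60.
fn rotr60(x: u64, n: u32) -> u64 {
    ((x >> n) | (x << (WORD_BITS - n))) & MASK
}

fn main() {
    assert_eq!(add60(MASK, 1), 0); // wraps at 2^60, not 2^64
    assert_eq!(rotr60(1, 1), 1 << 59); // low bit rotates to bit 59
}
```

The arithmetic itself is easy; as the answer further down explains, the hard part is that a hash function's constants and security analysis are tied to its word size.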
odiferousmint commented:

This is the first time I have heard of it; I will check it out. At a quick glance... it compares its speed to GHC? Being faster than Haskell's GHC is not a difficult task... Welp. OCaml is great, and I think it is multi-threaded now? In any case, what would you like guidelines on, exactly? Generally, there is a famous book named Cryptography Engineering by Bruce Schneier, Niels Ferguson, and Tadayoshi Kohno. It has a chapter named "Implementation Issues" that is definitely worth a read. It will explain exactly why my next statement is correct.
Anything JavaScript is sub-optimal and outright absurd in this area, in my humble opinion. I think Ada/SPARK is the best we have so far, with its formal verification and whatnot. I do not want to dig into it, but here is a PDF about Ada/SPARK that is still relevant today, although a bit dated. If you want newer material, just check the AdaCore blog. With Ada/SPARK you can do things such as proving that images will always be successfully encoded!
Speaking of massively parallel... I recently read a lot about parallelized hashing via j-Lanes and j-Pointers tree modes, with applications to SHA-256. It opens up parallelization opportunities in a hashing process that is otherwise serial by nature, which yields a performance advantage on modern processor architectures. It is definitely not new, but it is worth looking into.
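For a rough picture of the j-lanes idea (a simplification, not the exact construction from the papers), assuming Rust's `sha2` crate: deal message blocks round-robin across j independent lanes, hash each lane separately (this is where SIMD or thread-level parallelism can come in), then compute one wrapping hash over the concatenated lane digests.

```rust
// Simplified j-lanes-style tree hashing over SHA-256. The lane count,
// block granularity, and final wrapping step are illustrative choices,
// not the parameters from the original papers.
use sha2::{Digest, Sha256};

const J: usize = 4; // number of lanes
const BLOCK: usize = 64; // SHA-256 block size in bytes

fn j_lanes_sha256(input: &[u8]) -> [u8; 32] {
    let mut lanes: Vec<Sha256> = (0..J).map(|_| Sha256::new()).collect();

    // Deal the message out block-by-block, round-robin across the lanes.
    // Each lane's hash is independent, so the J lanes could be processed
    // in parallel (or in SIMD lanes) in a serious implementation.
    for (i, chunk) in input.chunks(BLOCK).enumerate() {
        lanes[i % J].update(chunk);
    }

    // Final wrapping hash over the concatenated lane digests.
    let mut outer = Sha256::new();
    for lane in lanes {
        outer.update(lane.finalize());
    }
    outer.finalize().into()
}
```

Note that this produces a different digest than plain SHA-256 over the same input; tree modes define new functions built from the underlying compression function.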
FWIW, I am not shilling Ada (well, perhaps a bit); AdaCore is actually supporting Rust now. For example, take a look at this. It seems like Ada and Rust are starting to grow on each other in this space. :)
By "Blake", did you really mean BLAKE? As in, BLAKE1?
Good luck, by the way!
oconnor663 commented:

> Is there any guideline on how to do so safely? Any reference implementation or pseudocode which is parametric on the word size?
No, changing the word size would be more like designing a brand new hash function than a simple parameter change. Here are some of the issues that you'd need to consider if you wanted to do this, off the top of my head. I'm sure the other authors can point out more that I've missed:

- The rotation distances in the core function (16, 12, 8, and 7 for BLAKE3's 32-bit words) are tuned to the word size; picking good ones for a new size means redoing the diffusion analysis.
- The security margin (the number of rounds) rests on years of cryptanalysis of the 32-bit design, inherited from BLAKE2, BLAKE, and ChaCha. None of that carries over automatically to a different word size.
- 60 is not a multiple of 8, so loading message bytes into words, padding, and encoding outputs all become awkward.
- 32-bit words run well on both 32-bit and 64-bit machines and map cleanly onto SIMD lanes, which is a large part of BLAKE3's performance.
So in short, there are some technical and security-sensitive issues that come up if you try to change the word size, and beyond that some issues where moving away from the 32-bit "sweet spot" leads to some awkwardness. If you're in the business of designing new hash functions, it could be an interesting project. But if you know that's not the business you want to be in, then I don't think it's feasible.
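For concreteness, here is the quarter-round (the G function) at the heart of BLAKE3, paraphrased from the public reference implementation. The `u32` type, the wrapping (mod 2^32) additions, and the rotation distances 16/12/8/7 are exactly the places where the 32-bit word size is baked in:

```rust
// The BLAKE3 quarter-round (G function), paraphrased from the public
// reference implementation. It mixes four state words plus two message
// words. Every operation here is specific to 32-bit words.
fn g(state: &mut [u32; 16], a: usize, b: usize, c: usize, d: usize, mx: u32, my: u32) {
    state[a] = state[a].wrapping_add(state[b]).wrapping_add(mx);
    state[d] = (state[d] ^ state[a]).rotate_right(16);
    state[c] = state[c].wrapping_add(state[d]);
    state[b] = (state[b] ^ state[c]).rotate_right(12);
    state[a] = state[a].wrapping_add(state[b]).wrapping_add(my);
    state[d] = (state[d] ^ state[a]).rotate_right(8);
    state[c] = state[c].wrapping_add(state[d]);
    state[b] = (state[b] ^ state[c]).rotate_right(7);
}
```

A 60-bit variant would need new rotation distances and, more importantly, new analysis to justify them.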
VictorTaelin commented:

@odiferousmint It is actually very hard to beat GHC; otherwise it wouldn't be the fastest lazy functional compiler in the world! Also, I meant BLAKE3 (it is the latest revision, right?). Regardless, thanks for the comments. Interesting points overall.
@oconnor663 Thanks, that's an extremely good answer. Yes, I'm definitely not in the business of designing new hash functions! If some existing hash function had a parametric word size, it would probably be a great choice for HVM (we would expect an output of 240 bits, key sizes of 480 bits, and so on). But I get that modifying BLAKE3 with that in mind is clearly not a trivial task; bad luck.