exaexa / codecrypt

Post-quantum cryptography tool (THIS REPOSITORY IS ONLY A MIRROR OF THE MAIN ONE, PLEASE DO NOT FILE BUGS HERE)
https://gitea.blesmrt.net/exa/codecrypt
GNU Lesser General Public License v3.0

Avoid parameters that allow timing/statistical attacks when used as decryption oracle #18

Closed — HulaHoopWhonix closed this issue 6 years ago

HulaHoopWhonix commented 7 years ago

Does the caveat about the crypto not being intended for online use have anything to do with this write-up? Has the author contacted you about this?

https://grocid.net/2016/04/14/a-timing-attack-on-codecrypt/

exaexa commented 7 years ago

I should probably fix the attacks anyway. Well, there are two:

The original timing attack

This one is common to almost all versions of McEliece: if the decoding algorithm fails because it cannot recover all errors, it usually takes a slightly different time than when everything goes okay. In the case of the original McEliece this kind of failure was a bit faster, because the padding etc. was not computed after a failure (it is in codecrypt), so that problem is mostly avoided.

In the MDPC case the decoder is mostly probabilistic, and instead of failing faster, it is left trying random steps for significantly longer. The implications are the same.

The best countermeasure is either to fuzz the reply timings so that the attacker can't dig anything usable out of his measurements, or to create a constant-time decoder. I'm aware of some work in the latter direction (by DJB and co.), result here: https://www.win.tue.nl/~tchou/qcbits/ . I haven't had enough time to look deeply into it yet.
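For illustration, the timing-fuzzing idea can be sketched as padding every decryption attempt out to a fixed deadline, so success and failure take the same wall-clock time. This is a minimal sketch, not codecrypt's actual code; `DEADLINE` and the `decode` callable are assumptions:

```python
import time

DEADLINE = 0.05  # assumed upper bound on decoder runtime, in seconds


def pad_to_deadline(decode, ciphertext):
    """Run a (possibly failing) decoder, then sleep so the total
    elapsed time is always DEADLINE, hiding the success/failure
    timing difference from a remote observer."""
    start = time.monotonic()
    try:
        result = decode(ciphertext)
    except ValueError:  # decoding failure
        result = None
    elapsed = time.monotonic() - start
    if elapsed < DEADLINE:
        time.sleep(DEADLINE - elapsed)
    return result
```

Note that this only hides timing if the decoder never exceeds the deadline; a genuinely constant-time decoder like QcBits avoids the data-dependent control flow altogether.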

The key-recovery

Because the probabilistic decoder sometimes fails even on a ciphertext with a sub-critical error count, an attacker with access to a decryption yes/no oracle can forge a set of ciphertexts that, upon being "tested" by the oracle, let them reconstruct the parity-check matrix and effectively the private key. The attack method presented at http://eprint.iacr.org/2016/858.pdf is not easy to avoid, as it is basically a scan through valid ciphertexts. Countermeasures include designing the key so that decryption does not fail; or making decryption fail deterministically on some pseudorandomly sampled inputs, so that the attacker cannot determine which failed attempts were caused by actual decoding failures and which were intentional.
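The "intentional deterministic failures" countermeasure could be sketched by keying a PRF on the ciphertext, so each ciphertext gets a fixed, secret-dependent reject/accept decision that is indistinguishable from a genuine decoding failure. This is a hypothetical illustration with assumed names (`forced_failure`, `rate`), not codecrypt's design:

```python
import hashlib
import hmac


def forced_failure(secret_key: bytes, ciphertext: bytes, rate: int = 2**16) -> bool:
    """Decide pseudorandomly, but deterministically per ciphertext,
    whether to reject it, at an expected rate of 1/rate.  The same
    ciphertext always gets the same answer, so repeated oracle queries
    cannot separate these rejections from real decoding failures."""
    tag = hmac.new(secret_key, ciphertext, hashlib.sha256).digest()
    return int.from_bytes(tag[:4], "big") % rate == 0
```

The determinism is the point: a probabilistic rejection could be averaged away by the attacker repeating queries, while a keyed-PRF decision cannot.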

Conclusion

I suppose that the issues here don't generally affect codecrypt; even a mildly-witted user will start to have doubts after clicking the "decrypt" button and failing on more than 100 messages. Fixing the issues would be better, though.

The constants (block sizes, error counts) used in codecrypt are taken from the original MDPC paper. With some tuning, we can choose the numbers so that the algorithm fails on the "expected" error count only in a cryptographically small number of cases, and any higher error count is always caught by the CCA2 padding.

Anyway, I'd like to know how they sample CCA2-able ciphertexts in the paper. Codecrypt uses a deterministic CRHF-based derivation of the error pattern, so obtaining an error vector with features like "only 1's with d bits in between" or "second half of the error vector is zero" is equivalent to an almost-preimage attack, or at least to bitcoin mining. If anyone has read the paper, would you mind sharing an opinion on what the exact requirements on the error vectors are when they are gathered by random sampling in the CCA case?
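A hash-based error-pattern derivation of the kind described above can be sketched like this: expand a seed through a hash to pick t distinct error positions out of n, so steering the pattern toward any special structure requires finding hash near-preimages. Illustrative only; codecrypt's actual construction and parameters differ (n and t below are example values):

```python
import hashlib


def derive_error_vector(seed: bytes, n: int, t: int) -> list[int]:
    """Derive a weight-t error vector of length n from a seed by
    hashing a counter; the positions are determined by the hash output,
    so they are as hard to steer as hash preimages."""
    positions = set()
    counter = 0
    while len(positions) < t:
        h = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        # take 4-byte chunks, reduce each modulo n
        # (slight modulo bias, acceptable for a sketch)
        for i in range(0, len(h) - 3, 4):
            positions.add(int.from_bytes(h[i:i + 4], "big") % n)
            if len(positions) == t:
                break
        counter += 1
    e = [0] * n
    for p in positions:
        e[p] = 1
    return e
```

An attacker who wants, say, all errors in the first half of the vector would have to grind seeds until the hash output happens to produce such positions, which is the "bitcoin mining" comparison above.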

HulaHoopWhonix commented 7 years ago

Really cool stuff, though I admit it's way over my understanding. I gave the paper a read and I think I know what you're looking for; let me know if this is it. I first tried reaching out to the authors, but judging by their response it was a case of RTFM.

The end of section 7.2 (page 23) describes the classification procedure. It refers to Algorithm 4 (page 18), "Breaking the CCA security of the converted MDPC scheme", which in turn points to Algorithm 3 (page 17), used to form the different subsets of useful error patterns for the CCA case. Algorithm 2 (page 11) is the final key-recovery step applied after Algorithm 4. A few pages into the analysis section (page 13) they describe the properties of the errors in the special sets.

exaexa commented 7 years ago

Sounds like new parameters are necessary. Gonna be quite a bit of math... :grinning:

ghost commented 7 years ago

What is the difference between your tool and McBits, which also uses QC-MDPC McEliece?

exaexa commented 7 years ago

If you mean the paper from Bernstein & friends [1], it doesn't use MDPC at all; it's based on algebraic Goppa codes (unbroken, but with slightly larger keys). They also provide their own padding, a lot of FFT, and some hardware tune-ups to achieve the brutal performance.

I vaguely remember that there was some other McBits with MDPC, but I've lost the reference to it. (Can you share it if you have one?)

[1] http://binary.cr.yp.to/mcbits-20130616.pdf

ghost commented 7 years ago

I'm extremely sorry, but I was actually referring to QcBits.

exaexa commented 7 years ago

Oh, no problem. QcBits is low-level and achieves constant time by carrying out all operations very carefully and "manually" on the hardware (pretty quickly, though; the implementation is very fast). Iteration counts etc. are pre-set. Other than the encoding/decoding primitive, it's the same as McBits (i.e. Niederreiter and poly1305-based padding).

Anyway, they also state that there is a need for better parameter choice. I should really find some time to look into that. :grin:

ghost commented 7 years ago

Mirek, why didn't you use binary Goppa codes?

exaexa commented 7 years ago

QcBits and MDPC papers explain the difference pretty well, please refer to those.

ghost commented 7 years ago

Does the current McBits software by Tung Chou have 2^128 security?

HulaHoopWhonix commented 7 years ago

Hi. A timing-attack-immune implementation of DJB's McBits has been released:

https://twitter.com/hashbreaker/status/899026849401122816
tungchou.github.io/mcbits/

exaexa commented 7 years ago

@HulaHoopWhonix

The public key size is 1357824 bytes. 😢

HulaHoopWhonix commented 7 years ago

That's rather huge indeed... :-/


Some better (?) news from the lattice PQ crypto world:

The latest release from DJB's crypto factory is a streamlined version of NTRU-Prime that is faster than the latest version of New Hope while completely eliminating decryption failures.

https://twitter.com/hashbreaker/status/898048057849380864 https://twitter.com/hashbreaker/status/898048506681860096 https://twitter.com/hashbreaker/status/898048760009420801 https://twitter.com/hashbreaker/status/898391210456489984

NTRU is believed to have a higher security margin than New Hope:

https://twitter.com/hashbreaker/status/880086983057526784

exaexa commented 6 years ago

A parameter-tuning workaround is known and scheduled for the next release, I'll work on it when I have some time.

exaexa commented 6 years ago

(Note that this has no security implications if the user has read and understood the README.)

exaexa commented 6 years ago

Moving off github. Please reopen the issue at the new repository if needed.