cfrg / draft-irtf-cfrg-hash-to-curve

Hashing to Elliptic Curves

Test vectors for `hash_to_scalar` #343

Open daxpedda opened 2 years ago

daxpedda commented 2 years ago

Technically hash_to_scalar is not defined, but specified as an "alias" of hash_to_field. Would it still be in scope to provide test vectors for it?

See #301.

kayabaNerve commented 1 year ago

This would be greatly appreciated to ensure implementations are compliant. While the expand step can be tested on its own, the reduction cannot be.
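For context, the reduction step in question is just OS2IP (bytes-to-integer) followed by a modular reduction. A minimal Python sketch, using a toy prime in place of a real field order:

```python
def reduce_to_field(uniform_bytes: bytes, p: int) -> int:
    """Reduction step of hash_to_field: interpret the bytes as a
    big-endian integer (OS2IP), then reduce modulo the field order p."""
    return int.from_bytes(uniform_bytes, "big") % p

# In the spec, uniform_bytes comes from expand_message and has length
# L = ceil((ceil(log2(p)) + k) / 8), so the mod-p bias is negligible.
```

For example, `reduce_to_field(b"\x01\x00", 251)` is `256 % 251 = 5`. It is exactly this step that end-to-end `hash_to_scalar` test vectors would exercise.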

paulmillr commented 1 year ago

Keep in mind hash_to_field produces numbers in the range [0, p-1].

Scalars, however, should most of the time be in [1, p-1], that is, without 0.

There are 2 ways to handle this:

  1. Reduce modulo p-1 instead of p, then increment the result by 1
  2. Rejection sampling, which is commonly non-constant-time, and messy

I think the spec should at least mention it.
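The two options above could look like this (a sketch only; `nonzero_scalar_add_one` and `nonzero_scalar_reject` are hypothetical names, SHA-256 stands in for whatever hash the suite uses, and the rejection loop is variable-time as written):

```python
import hashlib

def nonzero_scalar_add_one(uniform_bytes: bytes, p: int) -> int:
    # Option 1: reduce modulo p-1 instead of p, then add 1.
    # The result lies in [1, p-1], so zero can never occur.
    return int.from_bytes(uniform_bytes, "big") % (p - 1) + 1

def nonzero_scalar_reject(msg: bytes, p: int) -> int:
    # Option 2: rejection sampling -- re-hash with a counter until the
    # reduced value is nonzero.  Not constant-time as written.
    ctr = 0
    while True:
        h = hashlib.sha256(msg + ctr.to_bytes(4, "big")).digest()
        s = int.from_bytes(h, "big") % p
        if s != 0:
            return s
        ctr += 1
```

Note that option 1 changes the modulus, which is exactly the extra-arithmetic concern raised below.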

kwantam commented 1 year ago

For a field of reasonable size, the probability that the result is 0 is negligible. (For example, if p is 128 bits, the probability that you hash to 0 is the same as the probability that you guess someone's AES key on the first try. This will never happen.)
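To put a number on "negligible", the comparison in the parenthetical can be worked out directly (2^128 is a stand-in for a 128-bit field order):

```python
p = 2**128                    # stand-in for a 128-bit field order
prob_zero = 1 / p             # chance a uniform field element is exactly 0
aes_first_guess = 1 / 2**128  # chance of guessing an AES-128 key in one try
# Both probabilities are about 2.9e-39 -- identical orders of magnitude,
# far below anything that occurs in practice.
```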

Reducing mod p-1 will, in most instances, require implementing specialized arithmetic mod p-1, which could easily be a large amount of extra code that will need to be audited, maintained, etc. (No production implementation uses generic multi-precision arithmetic. For one, it's almost impossible to make it constant time.)

In sum: in almost all cases, it's much more dangerous to try and prevent hashing to zero than it is to assume that it will never happen. Because it will never happen!

paulmillr commented 1 year ago

@kwantam fault attacks do happen. So it's not negligible, I would think it's pretty likely.

kwantam commented 1 year ago

I'm curious about this security model. You're saying that fault attacks specifically designed to induce a zero output are a serious threat? That's the only kind of fault attack this would protect against.

(EDIT: to be clear, I do not find this argument persuasive in the least, absent some kind of evidence that zero-inducing fault attacks are a thing.)

paulmillr commented 1 year ago

You're saying that fault attacks specifically designed to induce a zero output are a serious threat

Not serious, but a threat.

I'm simply saying it needs to be kept in mind that this could happen, meaning at least CT rejection sampling should be put in place. Not that it would happen. The chance of a fault attack is much higher than the chance of hashing to one particular element out of all group elements.

kwantam commented 1 year ago

I'm not saying fault attacks don't matter. But fault attacks can do lots of different things, and I strongly suspect that among all the fault attacks one has to worry about, fault attacks that are specifically designed to induce zero outputs are a tiny fraction, if they even exist.

Depending on your threat model, you might need to defend against fault attacks or you might not. If you aren't worried about fault attacks, preventing zero outputs from hash_to_field isn't necessary. If you are worried about fault attacks, preventing zero outputs from hash_to_field isn't sufficient.

There is no case where "prevent zero output from hash_to_field" defends against a meaningful threat. But it does add complexity. The cure is worse than the disease.


By the way: if "force hash_to_field output to be zero" is a meaningful fault attack, then the idea of hashing to the range 0..p-1 and then adding 1 is probably not helping, because the attacker has just caused you to generate the value 1, which is just as bad in most cases.