dalek-cryptography / ed25519-dalek

Fast and efficient ed25519 signing and verification in Rust.

Remove one of the random multiplications in batch verification #55

Open ValarDragon opened 6 years ago

ValarDragon commented 6 years ago

In batch verification, a random linear combination of the verification equations is taken. (I'm referring to the P_i on page 17 of the ed25519 paper: P_i = 8*R_i + 8*H_i*A_i - 8*S_i*B.) The reason for the random linear combination is that if any one equation fails (i.e. its P_i sums to a non-zero value c_i), then for any fixed choice of the other coefficients there is exactly one value of the random coefficient z_i that makes the combined sum 0, so with overwhelming probability the sum will be non-zero.
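(For concreteness, here is a minimal sketch of that combined check, assuming curve25519-dalek's `Scalar`/`EdwardsPoint` types and its `VartimeMultiscalarMul` trait, plus the `rand` crate; the function name `verify_batch` and its argument names are hypothetical, and computing the h_i and decompressing the points is elided.)

```rust
use curve25519_dalek::constants::ED25519_BASEPOINT_POINT;
use curve25519_dalek::edwards::EdwardsPoint;
use curve25519_dalek::scalar::Scalar;
use curve25519_dalek::traits::{IsIdentity, VartimeMultiscalarMul};

/// Sketch: check 8 * sum_i z_i*(R_i + h_i*A_i - s_i*B) == identity,
/// with one random 128-bit coefficient z_i per signature.
/// `ss`/`hs` hold the signature scalars s_i and hashes h_i = H(R_i || A_i || m_i);
/// `rs`/`as_` hold the decompressed points R_i and A_i.
fn verify_batch(ss: &[Scalar], hs: &[Scalar], rs: &[EdwardsPoint], as_: &[EdwardsPoint]) -> bool {
    // One random coefficient per signature -- this issue proposes fixing one of these to 1.
    let zs: Vec<Scalar> = (0..ss.len())
        .map(|_| Scalar::from(rand::random::<u128>()))
        .collect();

    // Coefficient on the basepoint B: -(sum_i z_i * s_i).
    let b_coeff: Scalar = -zs.iter().zip(ss).map(|(z, s)| z * s).sum::<Scalar>();

    // One big multiscalar multiplication:
    //   sum_i z_i*R_i + sum_i (z_i*h_i)*A_i + b_coeff*B
    let scalars = zs.iter().cloned()
        .chain(zs.iter().zip(hs).map(|(z, h)| z * h))
        .chain(core::iter::once(b_coeff));
    let points = rs.iter()
        .chain(as_.iter())
        .chain(core::iter::once(&ED25519_BASEPOINT_POINT));

    // Multiply by the cofactor 8 to match the cofactored verification equation.
    EdwardsPoint::vartime_multiscalar_mul(scalars, points)
        .mul_by_cofactor()
        .is_identity()
}
```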

I believe that for batch verification of n signatures, you actually only need to multiply n-1 of the equations by a random coefficient. If I understand the code correctly, every equation is currently being multiplied by a random coefficient. (I could be wrong, Rust is still new to me.)

A sketch of a proof for why this is safe: without loss of generality, suppose the 1st signature isn't multiplied by a random coefficient (i.e. z_1 = 1), while every other signature is. There are 4 cases to consider here:

- Signature 1 is valid and all the others are valid: every c_i = 0, so the sum is 0 and the batch correctly passes.
- Signature 1 is valid and some other signature is invalid: the sum is \sum_{i>1} z_i c_i with some c_i non-zero, and only one value of that z_i makes it 0, so the batch fails with overwhelming probability.
- Signature 1 is invalid and all the others are valid: the sum is exactly c_1 != 0, so the batch always fails.
- Signature 1 is invalid and some other signature is also invalid: the sum is c_1 + \sum_{i>1} z_i c_i, which is 0 only if the random coefficients happen to cancel c_1, so again the batch fails with overwhelming probability.

You can essentially avoid 3 scalar multiplications in batch verification relative to before (as in the tweak sketched below). This may help speed up batch verification of a small number of signatures. This is a bit of a micro-optimization though, so I'm unsure if it's useful to implement.
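(The proposed change would then be a one-line tweak to the `zs` in the sketch above, again purely illustrative:)

```rust
// Fix z_1 = 1 and randomize only z_2..z_n, saving the multiplications by z_1.
let zs: Vec<Scalar> = core::iter::once(Scalar::from(1u8))
    .chain((1..ss.len()).map(|_| Scalar::from(rand::random::<u128>())))
    .collect();
```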

burdges commented 6 years ago

If this were true, then one should instead use n 64-bit scalars instead of n-1 128-bit scalars, no? I do not know why batching needs 128-bit scalars rather than 64-bit scalars here.

ValarDragon commented 6 years ago

I was being imprecise above: the security threshold isn't 1/|F|, it's 1/<number of possible scalars>, iirc. (Not sure why I wrote F, my bad!) So the 128-bit scalars come from wanting the security level of batch verification to be 128 bits, to match the security of single-signature verification. If you used 64-bit scalars, you would only get 64 bits of security, since there is a 1/2^64 probability of a false positive.
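(Spelled out in the z_i notation from above, as I understand the argument: if some equation fails with c_j != 0 and each z_i is drawn uniformly from a set Z of b-bit scalars, the batch can only wrongly pass when z_j lands on the single cancelling value, so)

```math
\Pr\Big[\sum_i z_i c_i = 0\Big] \;=\; \Pr\Big[z_j = -c_j^{-1}\sum_{i \neq j} z_i c_i\Big] \;\le\; \frac{1}{|Z|} \;=\; 2^{-b}
```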

burdges commented 6 years ago

That explains nothing. It's actually that if c_1 and c_2 are uniformly distributed b-bit random variables, then c_1/c_2 has only b+1 bits of entropy, due to cancellations. We might only need 127-bit scalars in the n=2 case though, not sure.

Anyways, there is a probabilistic invalid-key attack on your suggestion of taking c_1 = 1, because \sum_{i=2}^n c_i approaches a rescaled normal distribution as n grows, by the central limit theorem.