I don't know if we care about this ~5us speedup at signing time (because when you do care, you perhaps want signing with precomputed public key data, which avoids this cost in both cases), but there seems to be basically no gain apart from consistency with R, which is arguably very different: handling of R is generally limited in scope to the signing code, while public keys are obviously exposed to applications (and things like the taproot tweak break the perfect black-box nature of them).
I think the question is not so much whether we care. We care about the details, and here it seems to me that switching to oddness is faster, and it's basically free because consistency with R serves no purpose here.
@roconnor-blockstream pointed out on IRC that the even tiebreaker also adds a normalization step to the verifier.
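Concretely, with a point in Jacobian coordinates (x = X/Z^2, y = Y/Z^3), squareness of y can be read off without normalizing, while evenness needs the affine y. An illustrative Python sketch (not library code; p is the secp256k1 field prime):

```python
# Illustrative only: tiebreaker checks on a Jacobian point (X, Y, Z),
# where the affine coordinates are x = X/Z^2 and y = Y/Z^3.

p = 2**256 - 2**32 - 977  # secp256k1 field prime

def is_square(a: int) -> bool:
    # Euler's criterion; stands in for the Jacobi symbol computation.
    return pow(a, (p - 1) // 2, p) == 1

def y_is_square(Y: int, Z: int) -> bool:
    # y = Y/Z^3 = (Y*Z)/Z^4 and Z^4 is always a square, so one field
    # multiplication plus a Jacobi-style check suffices; no normalization.
    return is_square(Y * Z % p)

def y_is_even(Y: int, Z: int) -> bool:
    # Evenness is only defined on the normalized affine value, so we must
    # compute y = Y * Z^-3 mod p before testing the low bit.
    z_inv = pow(Z, p - 2, p)
    y = Y * pow(z_inv, 3, p) % p
    return y % 2 == 0
```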
Careful with too much public key consistency. That's how you get people blowing their security by using the same key/nonce with ECDSA and Schnorr, or burning their coins by writing uncompressed pubkeys into the output. :P
I don't think this one matters too much one way or another. It's nice if the flagged public key can just be an ordinary 'compressed' public key.
Looks like 'oddness' (yuck) for the pubkey would just plain be faster, because a Jacobi symbol is slower than a multiply (unlike for r), and we get the squaring and inverse for free as a byproduct of projecting x to affine. Even if you assume that the pubkey is going to be precomputed in signing (as it should be), that precomputation would still be faster.
@sipa and half a negate.
So, I think we should change BIP340 to use implicit-even as the tiebreaker. To avoid inconsistently breaking code people may already have written, we should probably also change the challenge hash tag. How about "Schnorr/secp256k1" or so?
This also means changing BIP341 to change the flag in the control block to be the odd/even bit of Q, rather than its squaredness.
Implementation-wise, I wonder whether this means we could just drop a bunch of the x-only code and use the existing pubkey tweaking functions in the consensus verifier.
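As an aside on the tag change suggested above: the challenge tag enters through BIP340's tagged-hash construction, so bumping the tag string makes old and new code fail cleanly against each other rather than disagree only on the tiebreaker. A minimal sketch ("Schnorr/secp256k1" here is just the suggestion above, not necessarily what gets adopted):

```python
import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    # BIP340-style tagged hash: SHA256(SHA256(tag) || SHA256(tag) || msg).
    tag_hash = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_hash + tag_hash + msg).digest()

# A verifier using the proposed tag would derive the challenge as, e.g.:
#   e = int.from_bytes(tagged_hash("Schnorr/secp256k1", rx + pkx + msg), "big") % n
# where n is the group order, so signatures produced under the old tag simply
# fail to verify instead of being accepted under the wrong tiebreaker convention.
```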
I share the slight preference for implicit-even as the tiebreaker because it simplifies the scheme (by avoiding the Jacobi symbol), despite the additional verification branch. Also, implementation-wise this makes the full/auxiliary pubkey type identical to the existing one, and there's no need for a tiebreaker flag.
As for compatibility between auxiliary/flagged and compressed pubkeys, it seems compressed pubkeys are already compatible with x-only pubkeys: an adversary can easily convert between the two. At least I'm not able to come up with a non-adversarial scenario or practical attack where this matters.
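To make the conversion explicit (toy sketch; under implicit-even, an x-only key and the 0x02-prefixed compressed encoding denote the same point):

```python
def xonly_to_compressed(xonly: bytes) -> bytes:
    # Under the implicit-even convention, an x-only key names the point with
    # even y, which is exactly the '02' compressed encoding of that x.
    assert len(xonly) == 32
    return b"\x02" + xonly

def compressed_to_xonly(compressed: bytes) -> bytes:
    # Dropping the prefix byte discards the y parity; if the prefix was '03',
    # the resulting x-only key refers to the negation of the original point.
    assert len(compressed) == 33 and compressed[0] in (2, 3)
    return compressed[1:]
```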
I think the only reason squareness was chosen over evenness was because we'd already specced the has_square_y(P) and P = lift_x(x) conversion functions for dealing with R. Switching to evenness sounds like a good tradeoff if it makes implementation/use a bit easier.
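Roughly, the two conventions differ only in which of the two candidate y values lift_x picks. A sketch in the style of the BIP340 reference code (p is the field prime):

```python
p = 2**256 - 2**32 - 977  # secp256k1 field prime

def lift_x_square_y(x: int):
    # Return the curve point with this x whose y is a quadratic residue,
    # or None if x is not on the curve. Because p % 4 == 3, the square root
    # y = (x^3 + 7)^((p+1)/4) is itself always a square when it exists.
    if x >= p:
        return None
    y_sq = (pow(x, 3, p) + 7) % p
    y = pow(y_sq, (p + 1) // 4, p)
    if y * y % p != y_sq:
        return None
    return (x, y)

def lift_x_even_y(x: int):
    # Same, but break the tie by parity: pick the candidate with even y.
    P = lift_x_square_y(x)
    if P is None:
        return None
    return (P[0], P[1] if P[1] % 2 == 0 else p - P[1])
```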
Fixed by #192.
Splitting discussions out. This one is about the performance impact on signing, without precomputed public key data.
With the squareness tiebreaker, signing needs to compute privkey*G, compute its affine X coordinate (= computing inv(Z), squaring it, and multiplying with X), compute the squareness of its Y coordinate (= computing Y*Z, and the Jacobi symbol of it), normalize X, and then conditionally negate the privkey.

With the oddness tiebreaker, signing needs to compute privkey*G, compute its affine X coordinate (= computing inv(Z), squaring it, and multiplying with X), compute the affine Y coordinate (= computing 1/Z^3, and multiplying with Y), normalize X, normalize Y, and then conditionally negate the privkey.

The current PR does one unnecessary multiplication, as it's unnecessarily computing the affine Y coordinate. After fixing that, the speed gain from going from the square to the even tiebreaker is trading 1 field multiplication and normalization for 1 Jacobi computation, which is dominated by the Jacobi symbol.
I don't know if we care about this ~5us speedup at signing time (because when you do care, you perhaps want signing with precomputed public key data, which avoids this cost in both cases), but there seems to be basically no gain apart from consistency with R, which is arguably very different: handling of R is generally limited in scope to the signing code, while public keys are obviously exposed to applications (and things like the taproot tweak break the perfect black-box nature of them).
Originally posted by @sipa in https://github.com/bitcoin-core/secp256k1/pull/558
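For concreteness, the breakdown quoted above as a rough sketch, starting from the Jacobian result of the point multiplication (illustrative Python, not the library's code; n is the group order):

```python
p = 2**256 - 2**32 - 977  # secp256k1 field prime
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order

def is_square(a: int) -> bool:
    return pow(a, (p - 1) // 2, p) == 1  # stands in for the Jacobi symbol

def tiebreak_square(seckey: int, X: int, Y: int, Z: int):
    # Squareness: affine x needs inv(Z), a squaring, and a multiply; the
    # tiebreak itself needs only Y*Z plus the Jacobi-style check, which
    # dominates the cost. No affine Y is computed.
    z_inv = pow(Z, p - 2, p)
    z_inv2 = z_inv * z_inv % p
    x_affine = X * z_inv2 % p
    if not is_square(Y * Z % p):
        seckey = n - seckey          # conditional negation of the secret key
    return seckey, x_affine

def tiebreak_even(seckey: int, X: int, Y: int, Z: int):
    # Evenness: reuse inv(Z) and its square from the x computation; affine y
    # costs two more multiplies plus a normalization before the parity test.
    z_inv = pow(Z, p - 2, p)
    z_inv2 = z_inv * z_inv % p
    x_affine = X * z_inv2 % p
    y_affine = Y * z_inv2 % p * z_inv % p
    if y_affine % 2 != 0:
        seckey = n - seckey          # conditional negation of the secret key
    return seckey, x_affine
```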