zkcrypto / pairing

Pairing-friendly elliptic curve library.

Hashing to G1/G2/Fr/Fq/Fq2 #56

Open ebfull opened 6 years ago

ebfull commented 6 years ago

Hashing to G1/G2/Fr/Fq/Fq2 -- with total domain separation.

PlaneB commented 5 years ago

Any news on the hashing function? If it existed, pairing could also be used for BLS signatures and multisignatures, not just for zk-SNARK-based applications.

afck commented 5 years ago

I agree, that would be really helpful. In the threshold_crypto crate, we are actually using pairing for BLS signatures. We created the hash functions by seeding a random number generator with the message hash and then using it to generate a "random" group element: https://github.com/poanetwork/threshold_crypto/blob/c2d63b214a32378f8d5644757e1091b16775e84e/src/lib.rs#L597
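For reference, here is a minimal sketch of that approach (not a quote of the linked code). It assumes the rand 0.4-era API that this crate's types implement (the `Rand` trait, and `ChaChaRng` seeded from `&[u32]`), and `tiny_keccak`'s `sha3_256` as the digest:

```rust
use pairing::bls12_381::G2;
use rand::{ChaChaRng, Rand, SeedableRng};
use tiny_keccak::sha3_256;

/// Hash an arbitrary message to G2 by seeding a ChaCha RNG with the
/// message digest, then sampling a "random" group element from it.
/// The same message always yields the same seed, hence the same point.
fn hash_g2<M: AsRef<[u8]>>(msg: M) -> G2 {
    let digest = sha3_256(msg.as_ref());
    // Pack the 32-byte digest into the u32 seed words ChaChaRng expects.
    let seed: Vec<u32> = digest
        .chunks(4)
        .map(|chunk| {
            let mut word = [0u8; 4];
            word.copy_from_slice(chunk);
            u32::from_le_bytes(word)
        })
        .collect();
    let mut rng = ChaChaRng::from_seed(&seed);
    G2::rand(&mut rng)
}
```

Note that this derives a uniformly sampled point from the digest, but it is neither constant-time nor compatible with any standardized ciphersuite, which is part of why a specified hash-to-curve would help.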

kwantam commented 5 years ago

If there's any interest, I have a Rust implementation of our work on fast hashing to BLS12-381 (WB19) based on (a slightly modified version of) this crate and following the draft BLS signatures standard.

The hash functionality currently lives in its own crate (see the rust-impl subdir), but I could put together a PR that adds the functionality to this crate, if that would be useful.

burdges commented 5 years ago

Is the IETF draft meant to propose the hashing-to-the-curve method from Wahby-Boneh? It reads like it specifies nothing but try-and-increment.

In any case, the IETF draft covers only hashing-to-G1, which sounds insufficient anytime you actually want BLS signatures, but presumably your implementation fixes this? Or maybe the hashing-to-G2 is far slower than I realize?

A priori, I'd think any general-purpose BLS signature implementation should be generic over the curve roles, like https://github.com/w3f/bls/blob/master/src/lib.rs#L164

In that crate, I also concluded that a BLS signature scheme should either handle only 32-byte messages internally, or else unpack Blake2b to operate on raw ChaCha states, so that verification algorithms can operate more efficiently: https://github.com/w3f/bls/blob/master/src/verifiers.rs#L72 That'd simplify the spec slightly, but make efficient implementations harder.
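As a concrete illustration of the roles point, here is a hypothetical sketch against this crate's Engine trait (not the actual w3f/bls API):

```rust
use pairing::{CurveProjective, Engine};
use std::marker::PhantomData;

/// Hypothetical role abstraction: pick which source group carries public
/// keys and which carries signatures, so the "small public keys" and
/// "small signatures" variants share one generic BLS implementation.
pub trait BlsRoles {
    type E: Engine;
    type PublicKeyGroup: CurveProjective;
    type SignatureGroup: CurveProjective;
}

/// Usual orientation: small public keys in G1, signatures in G2.
pub struct UsualBls<E: Engine>(PhantomData<E>);

impl<E: Engine> BlsRoles for UsualBls<E> {
    type E = E;
    type PublicKeyGroup = E::G1;
    type SignatureGroup = E::G2;
}

/// Swapped orientation: small signatures in G1, public keys in G2.
pub struct SmallSigsBls<E: Engine>(PhantomData<E>);

impl<E: Engine> BlsRoles for SmallSigsBls<E> {
    type E = E;
    type PublicKeyGroup = E::G2;
    type SignatureGroup = E::G1;
}
```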

If I understand correctly, the concern about Fouque-Tibouchi as implemented in https://github.com/zkcrypto/pairing/pull/30 boils down to the hashing not being constant-time, which I suppose never matters in BLS signatures. Or are there more concerns there, like performance?

kwantam commented 5 years ago

> Is the IETF draft meant to propose the hashing-to-the-curve method from Wahby-Boneh? It reads like it specifies nothing but try-and-increment.

I think the latest published version on the IETF page is badly out of date at this point. The latest working version is here. It covers hashing to both G1 and G2---both will be specified as separate ciphersuites. And I think it's safe to say that try-and-increment is out of the picture at this point.

There are two related concerns about FT12:

  1. if you want a constant-time implementation, it's likely to be somewhat slow. In particular, it costs effectively three exponentiations.

  2. if you want it to be fast, you need some low-level arithmetic routines that you wouldn't otherwise implement for curve operations. Specifically, to avoid computing multiple exponentiations, one needs fast Legendre symbol computations, which require arbitrary modular reductions rather than just field arithmetic (see the sketch after this list).
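For concreteness, here is the one-exponentiation route for the Legendre symbol, via Euler's criterion (a sketch using num-bigint for readability; a real implementation would work directly on field elements):

```rust
use num_bigint::BigUint;

/// Legendre symbol via Euler's criterion: chi(a) = a^((p-1)/2) mod p.
/// Returns 1 for a nonzero square, p - 1 (i.e. -1) for a non-square,
/// and 0 for a = 0. This costs one full exponentiation; the fast
/// Legendre algorithms are much cheaper but need arbitrary modular
/// reductions, not just field arithmetic.
fn legendre(a: &BigUint, p: &BigUint) -> BigUint {
    let exp = (p - 1u32) / 2u32;
    a.modpow(&exp, p)
}
```

A constant-time FT12 implementation pays this full-exponentiation price for each residuosity test on top of the final square root, which is roughly where the three-exponentiation figure comes from.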

In contrast, WB19 is constant time essentially for free (though, as you say, this doesn't matter that much for BLS signatures in particular), it costs exactly one exponentiation (which is inescapable, since one must compute a square root mod p), and fast implementations only need field arithmetic.
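And that one unavoidable square root is itself a single exponentiation here, since the BLS12-381 base field prime satisfies p ≡ 3 (mod 4). A sketch, again with num-bigint and assuming a is already reduced mod p:

```rust
use num_bigint::BigUint;

/// Square root mod p for p ≡ 3 (mod 4): if a is a square, then
/// a^((p+1)/4) is a square root of it, since
/// (a^((p+1)/4))^2 = a * a^((p-1)/2) = a. Non-squares also produce an
/// output, so we verify the candidate before returning it.
fn sqrt_3mod4(a: &BigUint, p: &BigUint) -> Option<BigUint> {
    let cand = a.modpow(&((p + 1u32) / 4u32), p);
    if (&cand * &cand) % p == *a {
        Some(cand)
    } else {
        None
    }
}
```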

That said, I don't speak for the standards folks, but my guess is that some variant of Shallue--van de Woestijne 2006 (either FT12 or something very close) will also be specified for certain ciphersuites.

mmaker commented 4 years ago

Hello there! I'm happy to see that the hashing issue has been taken back into consideration!

I just wanted to remind you that in #30 I have a pull request on exactly this, and in #102 a fast cofactor multiplication implementation. In the Algorand implementation I'm not sure what the purpose of keeping clear_h and clear_cofactor around is (without BP18), but I'm happy to work on merging the three together and to use WB19 instead of FT12!

Plus, I'm sketching Budroni-Pintore for clearing the cofactor as in the RFC draft, which is much more concise than what's currently in Algorand, and it'd be nice to work on another implementation to get concrete benchmarks.

EDIT: I slightly changed the tone of a previous sentence, which, reading back, didn't seem very appropriate (sorry), and I updated the pull request for the cofactor multiplication in https://github.com/zkcrypto/bls12_381/pull/18. Hope you'll find it useful!

kwantam commented 4 years ago

The Algorand implementation intentionally avoids using endomorphisms because of the GLV patent (which expires outside the US in December, and in the US next September).

What's implemented there is compatible with Budroni and Pintore. When the patent expires, one assumes they will bring back the faster method.

mmaker commented 4 years ago

@kwantam I see, thanks for explaining!

burdges commented 4 years ago

Interesting. I suppose any IRTF call for adoption should either leave this part as an implementation detail internal to a subgroup check, or else also wait until the patent expires?

kwantam commented 4 years ago

@burdges the strategy right now is the former: it's an implementation detail.

In hash-to-curve, suites specify h_eff, a scalar such that multiplying an element by it yields an element of the desired subgroup. Obviously h_eff = h works, but fast cofactor clearing algorithms generally have h_eff != h. Specifying it as a scalar means we don't have to require people to use the endomorphism, while still ensuring compatibility.
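In code, the compatibility contract is just the following (a generic sketch over the group crate's Group trait; note h_eff may exceed the prime subgroup order r, so it cannot be routed through the scalar-field type):

```rust
use group::Group;

/// Clear the cofactor by plain double-and-add with h_eff's big-endian
/// bytes. Any faster method (e.g. an endomorphism-based one, once
/// usable) must produce the same point as this for every input.
fn clear_cofactor<G: Group>(p: G, h_eff_be: &[u8]) -> G {
    let mut acc = G::identity();
    for byte in h_eff_be {
        for i in (0..8).rev() {
            acc = acc.double();
            if (byte >> i) & 1 == 1 {
                acc += p;
            }
        }
    }
    acc
}
```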

For subgroup checks, the BLS draft just says you have to do one, not how. One can always do it via scalar multiplication by the group order; people can optimize if they want to, and it doesn't change compatibility.
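The naive subgroup check is then the same double-and-add helper applied with the group order's bytes (hypothetical, reusing the sketch above):

```rust
/// P lies in the order-r subgroup iff [r]P is the identity. Optimized
/// checks are free to do something smarter, as long as they agree.
fn in_subgroup<G: group::Group>(p: G, r_be: &[u8]) -> bool {
    bool::from(clear_cofactor(p, r_be).is_identity())
}
```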

Fortunately, this won't be a problem much longer!