dis2 closed this issue 6 years ago
This is awesome! I'd love to use your bls12 bindings. Regarding integration, I'd be willing to write up the curve interface so that this repository supports bls12, if you'd like. (Unless you would like to do it, which would also be great!)
For serializing GT elements, I'm not sure that 4 of the coordinates are zero. Here is some sample output I am getting from different pairings: https://gist.github.com/ValarDragon/4841789998e7bdef0a1927c60b3295ca . I'm not seeing any coordinates reused. If there is a way to compress the coordinates, it would be great to support it here! If 4 of the coordinates really aren't being used for bls12, we can compress them. For now all 12 coordinates can be serialized, and we can improve on the serialization later.
For hashing to the point in G1, I currently haven't done any analysis on which method is best. I am open to any suggestions if you have them. For now you could just pass a byte array and the intended 64-byte hash function to use. (You can just copy/paste the code from altbn 128.) For example: hash64([]byte, blake2b.Sum512, <Curve interface>)
What's being hashed in the loop is the message array with the counter (as a byte) appended to it. For the 64-byte hash, I am currently setting the first 48 bytes of the hash to be the x value (the x coordinate is then reduced mod p), and then I use the parity of the 49th byte to determine whether y is positive or negative. The 32-byte hash is currently only intended for cross-compatibility with Ethereum for altbn 128.
As far as hashing to the curve in the future goes, I plan on adding support for other types of hashing (such as Elligator Squared). In general though, the code will be abstracted such that the only things that need to be passed into the hash function are the byte-array hash function, the message to hash, and a helper method which may be defined on the curve.
Indeed you're right, there's recovery going on for GT (zeros are filled in when marshalling, which got me confused). Shouldn't have jumped to conclusions just by looking before writing tests:
https://github.com/dis2/bls12/blob/master/gt_test.go
Specifically this goes on when unmarshalling from compressed GT: https://github.com/relic-toolkit/relic/blob/master/src/fpx/relic_fpx_exp.c#L691
This looks rather heavy, so definitely uncompressed GT needs to be an option, possibly with some high bit like for G1/G2 (in bls12 bindings, everything is compressed by default).
edit: Disregard my ignorant comment about the y sign, I now understand the idea is that one bit of entropy. It's not worth the complexity, nor the repeated hashing within the loop.
RELIC, as well as most other libraries by the looks of it, does something minimal:
```c
// copy the hash into bignum digits; endian is swapped,
// i.e. first byte of hash is the highest bits of the number
bn_read_bin(k, digest, MIN(FP_BYTES, MD_LEN));
// if you want to save a reduction, just do inputhash[0] &= 0x3f before the copy above
fp_prime_conv(p->x, k); // reduce modulo the field prime
fp_zero(p->y);
fp_set_dig(p->z, 1);
// now keep trying to find y
while (1) {
    ep_rhs(t, p); // t = g1XToYSquared
    // take square root
    if (fp_srt(p->y, t)) { // y = t^((p + 1)/4); if y^2 == t, then:
        // it exists, we're done here
        p->norm = 1;
        break;
    }
    // no square root, bump x by 1 and try again
    fp_add_dig(p->x, p->x, 1);
}
// multiply by cofactor...
```
The square root will only give you one of the two roots, so you need the parity if you want to get the second possible root. If this isn't done, the hashing to the curve will only ever reach half of the points on the curve, which is undesirable.
This method is described in Short Signatures from the Weil Pairing (the definitional BLS paper, to my knowledge), which includes a security proof of it. In that paper it is called MapToGroup_H, and H in particular is the component that includes this extra parity bit. I'm not really sure why y should only have the canonical root, since that decreases the range that can be hashed to, making hashed points more distinguishable. There is a security proof for using the parity, and it lets one hash onto more points of the curve. Is there a reason that only the canonical form should be used?
Thanks for investigating the GT unmarshalling. You're right, we should support uncompressed GT unmarshalling.
> If this isn't done, the hashing to the curve will only ever reach half of the points on the curve, which is undesirable.
I'm by no means formally acquainted with this, but here goes: with cofactor 1, I think you'd be correct. But here, the cofactor is large. The point you detect y for is placed in one random coset out of 76329603384216526031706109802092473003. After you multiply the point by the cofactor later, you'll land in the canonical subgroup, the sign long since determined by where "in" the cofactor multiple you were before. IOW, the point is one of 2^256, even though you hashed a 384-bit number (or a few bits less, because we stripped the sign). Now 128 bits were, for all intents and purposes, truncated.
This is also why you can do the cheap modulo reduction: it won't matter, because the domain is mapped from a large one to a much smaller one. Unless there's something I'm completely missing about BLS12.
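The collapse can be illustrated with a toy additive group, using deliberately tiny numbers in place of the real group orders (nothing BLS-specific here):

```go
package main

import "fmt"

// In Z_77 (77 = 7 * 11), multiplying by the "cofactor" 11 sends every
// element into the order-7 subgroup: 77 possible inputs collapse onto
// just 7 outputs, analogous to how cofactor multiplication in G1 maps
// the ~384-bit hash domain onto the much smaller canonical subgroup.
func main() {
	const n, cofactor = 77, 11
	images := map[int]bool{}
	for x := 0; x < n; x++ {
		images[(x*cofactor)%n] = true
	}
	fmt.Println(len(images)) // → 7 distinct images: the order-7 subgroup
}
```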
> Is there a reason that only the canonical form should be used?
None other than my cargo culting in the end (see the edit).
That's a great point, I hadn't considered that there is a cofactor for G1 in BLS curves. You're right, for the BLS curves there isn't a need to check the parity of y! Thanks for pointing this out to me. I'll make the hashing match the code that is used in RELIC when the cofactor is not equal to 1. (I'll leave a parity check for when the cofactor is one, so that the entire curve is still reached for BN curves.)
It seems odd to me for the sqrt implementation to decide the parity of y. It should be decided by the hash.
I agree. I'll update the try-and-increment implementation to use the smaller root when the cofactor is not equal to one. Based on the conversation in ebfull/pairing, I'll also implement Fouque-Tibouchi hashing here; I hadn't realized that a decision to use Fouque-Tibouchi had already been made in that thread. The goal is to make this implementation of hashing to the curve interoperable with your library on BLS12.
Alright. I'll try to ensure it's well-specified on our end and we can swap some test vectors. We're going with something like this: https://github.com/ebfull/pairing/pull/30#issuecomment-372544500
@dis2 I just integrated your bls12 library in this commit: https://github.com/Project-Arda/bgls/commit/ddc27e060ada7c7d390e7430aa69d375c49b0337 . It looks like all of the BLS signature tests are passing :)
I'll add separate methods to the curve interface to support compressed marshalling. For hashing to the curve, for the purpose of pushing this commit out tonight, I just used try-and-increment with the changes we discussed earlier. I'll adapt the implementation of Fouque-Tibouchi hashing here to match what is currently being discussed in ebfull's repo. (Currently I just have the Shallue-van de Woestijne encoding implemented here.)
@ValarDragon Looking good. By the way, if you have ideas about how to refactor the APIs to make the wrapper smaller, now's the time. The API is generally shitty, as I put stuff there on demand with no real thought into it. I have a fairly small internal codebase using this that I can refactor along with no problem. Later, when the internal stuff grows bigger, it won't be that simple.
Your API looks great to me! Incorporating it into this repository was pretty straightforward. The only other thing I can think of is adding a Check() method to G2 and GT.
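Something like this is what I have in mind, with hypothetical Go names rather than the bindings' actual types:

```go
package main

import "fmt"

// Element sketches the suggested interface extension: alongside the
// existing marshalling, a Check() method reports whether the element
// is valid, i.e. lies in the expected subgroup.
type Element interface {
	Marshal() []byte
	Check() bool // true iff the element is in the expected subgroup
}

// dummyElement is a stand-in implementation so the sketch compiles;
// a real G2 or GT element would perform an actual subgroup check.
type dummyElement struct{ valid bool }

func (e dummyElement) Marshal() []byte { return nil }
func (e dummyElement) Check() bool     { return e.valid }

func main() {
	var e Element = dummyElement{valid: true}
	fmt.Println(e.Check()) // → true
}
```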
Just got Fouque Tibouchi implemented on G1 here. I'll compare the results of my implementation with yours tomorrow @dis2. The relevant code for my implementation is in https://github.com/Project-Arda/bgls/blob/hashing/bls12_381.go#L263 and https://github.com/Project-Arda/bgls/blob/hashing/hash.go#L88 (Methods fouqueTibouchiG1 and sw)
For now, I plan to cover both extremes: a fast implementation (Fouque-Tibouchi + odd/even parity), which is fastest by far (HashToPointFast), and the "official" rust/pairing PR implementation (HashToPoint), which is more or less formally true to the ideas presented in the paper, but quite slow: for G1 it takes half as long as Pair() itself.
As for your code, a few things:
If you want it over GT, things get considerably more tricky though.
Thanks, I hadn't realized that two points were still being summed here, since the cofactor multiplication should still ensure (in principle) that the entire domain is reached. I'll sum them though, to match the Rust implementation.
Thanks for pointing out Fq2's Legendre can be done faster. After the ebfull/pairing PR specification is finalized, I'll go the same route as you and make a second faster FT hash. (Though I plan on using the rust-compatible implementation for all applications of the library I plan on using)
I don't know of a need to hash to GT for any application of pairing-based cryptography, so that shouldn't be a problem, for now at least. However, this paper suggests that you can hash to GT (page 23).
> Thanks, I hadn't realized that two points were still being summed here, since the cofactor multiplication should still ensure (in principle) that the entire domain is reached.
It's similar monkey business to my skimping on parity before. It can be done because of the domain reduction later, but it introduces a small bias from a perfectly random distribution.
Just like without parity you can tell the output number is some output of TS half the time (and thus "not random"), until you blind that with the large cofactor. Here you can tell it's the output of one of the 3 candidate polynomials, since those strip down the domain by almost a bit (5/8 of points are found) on their own.
Should one be worried about this? It depends. There's no practical way to exploit a 1-bit bias after you truncate the domain by 128 bits. But with near-impractical 2^128 difficulty one could at least make a guess about what the pre-reduction number was to begin with (and start classifying its bias). This is for one message pair, so the usual birthday-paradox tradeoffs apply.
Thanks for creating the Go bindings. This repo now matches the hashing to the curve that's specified in ebfull/pairing, so I'm closing this issue. If anything else needs to be done for the bls12 curve interface here, other than hashing to G2 (which I plan on adding soon), please let me know!
I've made Go bindings for RELIC at https://github.com/dis2/bls12 which is a far descendant of https://github.com/FiloSottile/powersoftau/tree/master/bls12 . It is loosely API-compatible with bn256.
I've also done a small aggregate-signature library using it, but I'd like to integrate with bgls to get a more generic interface I don't have to babysit myself.
For that to happen, I'd like to ask about:
How exactly is hash-to-point in G1 envisioned? Specifically, how should the hash be turned into a bignum before the sqrt-to-y-or-bump loop? Would just passing a big.Int with the starting x (with hashing to it left up to the caller) suffice?
How exactly to serialize GT / Fq12 elements? It can be compressed to 8 coordinates if the GT element is part of the Fq12 subgroup; I'm a dilettante, but shouldn't that always be the case in BN/BLS12? But cloudflare bn256 seems to serialize all 12 coords, so I'm scratching my head here. This "compression" is cheap: simply leave certain coordinates at 0.
The Rust reference impl doesn't state an Fq12 format at all; for zk-SNARKs, serializing Fq12 isn't all that exciting.