cfrg / draft-irtf-cfrg-bls-signature


aggregate signatures and subgroup validation #33

Open · kwantam opened this issue 4 years ago

kwantam commented 4 years ago

Hi folks (@hoeteck @sergeynog @zhenfeizhang):

I have a concern about subgroup checks in the aggregate signature case: the draft subgroup-checks only the aggregate signature, so low-order components in the individual signatures can cancel in the sum and go undetected.

Unless I'm crazy, this is a problem, and I think the fix is to check the subgroup of each signature before summing in Aggregate. Does this seem correct?
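
For concreteness, here is a sketch of that fix against the draft's Aggregate pseudocode (steps 3 and 7 are the additions; subgroup_check is the same helper KeyValidate uses, and the step numbering is illustrative):

   Aggregate(signature_1, ..., signature_n) -> signature
   1. aggregate = signature_to_point(signature_1)
   2. If aggregate is INVALID, return INVALID
   3. If not subgroup_check(aggregate), return INVALID
   4. for i in 2, ..., n:
   5.     next = signature_to_point(signature_i)
   6.     If next is INVALID, return INVALID
   7.     If not subgroup_check(next), return INVALID
   8.     aggregate = aggregate + next
   9. signature = point_to_signature(aggregate)
   10. return signature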

(Sorry to bear bad news...)

(By the way, a question from @dot-asm turned this up. Thank you!)

kwantam commented 4 years ago

I'm guessing @burdges has thought about this corner case. Thoughts?

hoeteck commented 4 years ago

I guess it depends on what the security requirements are for aggregate signatures.

kwantam commented 4 years ago

It's not even clear to me that I need to know the secret keys. For example, if I see signatures s_a on m_a and s_b on m_b, then I can pick a low-order point, add it to s_a, subtract it from s_b, and now I've got two "fresh" signatures on m_a and m_b that verify when run through Aggregate. So as long as I can predict that two signatures will be aggregated, I can break uniqueness. Isn't this already enough of a problem?
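
Spelling out the arithmetic: for a nonzero point $c$ of low order,

$$s_a' = s_a + c, \qquad s_b' = s_b - c, \qquad s_a' + s_b' = s_a + s_b,$$

so the aggregate is unchanged and still verifies, even though neither $s_a'$ nor $s_b'$ lies in the prime-order subgroup.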

burdges commented 4 years ago

You might break soundness for someone's blind signature deployment doing so, although blind signatures should do double-spending protection on the messages, not the signatures.

Among Rust crates, I think zcash has firmly established doing subgroup checks when deserializing group elements. I think zexe/arkworks added some non-canonical serialization for passing data around within the same machine. If folks wish to skip subgroup checks somewhere within one system, then maybe they should do so using methods explicitly marked "not wire safe".

There might exist reasons for internally using curve points not inside the subgroup: say you accumulate (h + sum_i b[i] pk[i]) - h inside a SNARK using only two-to-three-constraint affine addition, where h is hashed to a coset so that intermediate sums never equal the identity. In this you depend upon the pk[i] living inside the subgroup, however, and you'd never do this without proofs of possession, so h being inside a coset is overkill there, but maybe similar reasons exist.
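
To unpack that trick: incomplete affine addition formulas are undefined when an intermediate sum hits the identity, so the accumulator is offset by a coset point $h$,

$$\Big(h + \sum_i b_i\,\mathrm{pk}_i\Big) - h = \sum_i b_i\,\mathrm{pk}_i, \qquad b_i \in \{0, 1\}.$$

Since $h$ lies outside the prime-order subgroup while every $\mathrm{pk}_i$ lies inside it, no partial sum $h + \sum_{i \le j} b_i\,\mathrm{pk}_i$ can equal the identity.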

hoeteck commented 4 years ago

> It's not even clear to me that I need to know the secret keys. For example, if I see signatures s_a on m_a and s_b on m_b, then I can pick a low-order point, add it to s_a, subtract it from s_b, and now I've got two "fresh" signatures on m_a and m_b that verify when run through Aggregate. So as long as I can predict that two signatures will be aggregated, I can break uniqueness. Isn't this already enough of a problem?

If I understand the threat model correctly, then having Aggregate check subgroup membership doesn't solve the problem. The "mauled" signatures could be (s_a + c, s_b - c) for any c, not necessarily a low-order point.

kwantam commented 4 years ago

> If I understand the threat model correctly, then having Aggregate check subgroup membership doesn't solve the problem. The "mauled" signatures could be (s_a + c, s_b - c) for any c, not necessarily a low-order point.

Good point.

On the other hand, this sort of mauling would be caught whp by aggregating a random linear combination of signatures (not something the document describes, but it seems like folks are thinking about doing this). That approach does not catch whp a c in a small subgroup, though, since the probability that the (difference of the) randomizers is congruent to 0 mod the subgroup order is inversely proportional to the subgroup order.
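
To make that concrete: with independent random scalars $r_a, r_b$, the randomized aggregate of the mauled pair is

$$r_a (s_a + c) + r_b (s_b - c) = r_a s_a + r_b s_b + (r_a - r_b)\,c.$$

For $c$ of large order, the extra term is nonzero except with negligible probability, so verification fails whp. But if $c$ has small order $n$, the extra term vanishes whenever $r_a \equiv r_b \pmod{n}$, which happens with probability about $1/n$; only a subgroup check rules out that case.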

At the least, it seems like we should discuss this in Security Considerations, right? And probably it would also make sense to strongly recommend just doing subgroup checks as part of deserialization.

Is there something else we should be recommending in addition or instead?

hoeteck commented 4 years ago

Yes, having a discussion in Security Considerations sounds good. Thanks for noticing this subtlety!

kwantam commented 4 years ago

Actually, would it make sense to just define the deserialization process to include subgroup checks? That seems to be what a lot of people are already doing (as @burdges points out); it would let us get rid of explicit subgroup checks in a few places in the pseudocode, and it would make it very unlikely that conforming implementations fall to attacks involving low-order points.
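
A minimal sketch of what that could look like, in the draft's pseudocode style (point_deserialize stands in for the underlying octet-string decoding; signature_to_point would change the same way):

   pubkey_to_point(PK) -> xP or INVALID
   1. xP = point_deserialize(PK)
   2. If xP is INVALID, return INVALID
   3. If not subgroup_check(xP), return INVALID
   4. return xP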

On the other hand, in principle that could eliminate certain optimization opportunities (maybe not so bad). Should we worry about what might happen to someone who misses that detail in the deserialization routine and ends up with no subgroup checks anywhere? Not sure how paranoid we can get about people selectively omitting parts of the specification, though.

burdges commented 4 years ago

> Should we worry about what might happen to someone who misses that detail in the deserialization routine and ends up with no subgroup checks anywhere? Not sure how paranoid we can get about people selectively omitting parts of the specification, though.

¯\_(ツ)_/¯

You could briefly remind them that deserialization enforces important constraints wherever you do the checks now.

I'd assume checking at deserialization enables more optimization than it obstructs, although I'm not really sure.

Another question: if you build this into deserialization, then what do you assume from language type systems? And how many people really love byte-oriented interfaces for specs?

You could also just suggest a little "deserialization boundary line" in your existing stuff, which presumably provides a byte-oriented interface.

asanso commented 3 years ago

@kwantam will this issue be taken into consideration for the next version release?

cc @burdges @JustinDrake

kwantam commented 3 years ago

Yes, we'll try to address this in the next update.

zhenfeizhang commented 3 years ago

> Actually, would it make sense to just define the deserialization process to include subgroup checks? That seems to be what a lot of people are already doing (as @burdges points out); it would let us get rid of explicit subgroup checks in a few places in the pseudocode, and it would make it very unlikely that conforming implementations fall to attacks involving low-order points.
>
> On the other hand, in principle that could eliminate certain optimization opportunities (maybe not so bad). Should we worry about what might happen to someone who misses that detail in the deserialization routine and ends up with no subgroup checks anywhere? Not sure how paranoid we can get about people selectively omitting parts of the specification, though.

I think having subgroup checks on deserialization by default is good practice. I also want to propose separate APIs to allow people to bypass the checks. Concretely, somewhat borrowing ideas from arkworks: a checked deserialization by default, plus an explicitly marked unchecked variant (see the sketch below).

Speaking from my own experience, and this is kind of irrelevant to the BLS signature per se, I do think that bypassing those checks is quite important for performance-critical applications.
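
A sketch of that split, again in the draft's pseudocode style (names are hypothetical; arkworks expresses the same idea with checked deserializers plus unchecked variants):

   point_from_bytes(bytes) -> P or INVALID
   1. P = point_deserialize(bytes)
   2. If P is INVALID, return INVALID
   3. If not subgroup_check(P), return INVALID
   4. return P

   point_from_bytes_unchecked(bytes) -> P or INVALID
   1. P = point_deserialize(bytes)
   2. If P is INVALID, return INVALID
   3. return P

In this sketch the unchecked variant still rejects malformed encodings; it only skips the subgroup check.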

dot-asm commented 3 years ago

> propose separate APIs

I'd argue that it's a mistake to view this document as an actual API specification. Conversely, it would be inappropriate to word it as if it were one. It should describe algorithms, not interfaces. Otherwise, where would you draw the line? At internal data representation? At multi-threading?

dot-asm commented 3 years ago

On a potentially more constructive note: formally speaking, what is essential is to stipulate that point data that has crossed a non-trusted boundary, most notably data received over the network, has to be subgroup-checked. But it's not unreasonable to leave the decision of exactly when to the application. I understand the concern that developers can be sloppy, yet should you actually operate under the assumption that they are? The best you can do is to spell things out in a way that shifts the burden of holding developers accountable onto the developer community or users.

Just in case, on a practical note: a subgroup check is not exactly a cheap operation, roughly 1/5 of the Miller loop, so it's not unreasonable to leave room for "maneuver." For example, an application can cache the results of subgroup checks while keeping the data serialized in a platform-neutral format, which would make repeated checks redundant. Even if the results are not cached, it might be appropriate to postpone the check, say, until a parallel phase...
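
That caching pattern might look like this (cache and helper names hypothetical; the cache must be trusted, since an entry asserts a check already passed):

   point_from_bytes_cached(bytes, checked_cache) -> P or INVALID
   1. P = point_deserialize(bytes)
   2. If P is INVALID, return INVALID
   3. If bytes is in checked_cache, return P
   4. If not subgroup_check(P), return INVALID
   5. add bytes to checked_cache
   6. return P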

zhenfeizhang commented 3 years ago

> > propose separate APIs
>
> I'd argue that it's a mistake to view this document as an actual API specification. Conversely, it would be inappropriate to word it as if it were one. It should describe algorithms, not interfaces. Otherwise, where would you draw the line? At internal data representation? At multi-threading?

I agree with your point. I also noted that "this is kind of irrelevant to BLS signature per se." And my reasoning for skipping the checks is exactly as you pointed out: the subgroup checks are not cheap, and there are applications where people may want to skip them.

I guess I should lay out the exact use case that I had. I was building a zkp system whose CRS is around 200 MB, full of serialized group elements. It makes sense to bypass the subgroup checks there, since the CRS is already verified through its hash checksum; if we deserialized with subgroup checks, the deserialization itself would cost somewhat more than proof generation.
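
For illustration, the load might look like this (helper names hypothetical; integrity comes from the checksum, so the per-element subgroup check is skipped, which is only sound if whatever produced the checksummed CRS enforced subgroup membership in the first place):

   load_crs(crs_bytes, expected_digest) -> CRS or INVALID
   1. If hash(crs_bytes) != expected_digest, return INVALID
   2. for each encoded element e in crs_bytes:
   3.     P = point_from_bytes_unchecked(e)
   4.     If P is INVALID, return INVALID
   5. return the decoded elements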

While writing this, I realized that this is indeed irrelevant to the BLS signature draft. So I take that suggestion back.

OTOH, I still lean towards enforcing subgroup checks upon deserialization. My reason is that

dot-asm commented 3 years ago

But "enforcing subgroup checks upon deserialization" is wording it as if the document is an API specification...

zhenfeizhang commented 3 years ago

I guess this draft did say that

> The remainder of this section defines terminology and the high-level API.

So it really depends on whether the wording is high-level or not. I made a PR attempting this. Would love to hear your thoughts.

dot-asm commented 3 years ago

> Would love to hear your thoughts.

I feel that I can only repeat myself at this point. But since you asked :-), to be overly specific: #44 goes against everything I suggest. I view the reference to "the remainder ... defines ... API" as a bit disingenuous, because by definition a "draft" is not something chiseled in stone, and that should apply to all parts equally. We naturally can't make the document about something else entirely, but general wordings like the one referred to should be subject to equal scrutiny. In other words, I would rather suggest replacing the reference to "API" with "algorithm description."

dot-asm commented 3 years ago

After some reflection I'd like to reiterate a point. While the draft does mention "API" on multiple occasions, I have to admit that over time that actually slipped my mind. The question is why? I have to draw the conclusion that it's because I found the descriptions inadequate as actual API specifications and treated them as what they are in spirit: algorithm descriptions. To give a couple of examples...

KeyValidate reads:

   1. xP = pubkey_to_point(PK)
   2. If xP is INVALID, return INVALID

Is the implementer obliged to arrange for xP to be a structure with an additional field denoting its validity? Alternatively, would the implementer be forced to dynamically allocate memory and return NULL to indicate that xP is invalid? Would not doing either of these render the implementation non-compliant? Obviously not. It's the spirit that counts: "points that fail to deserialize shall not pass."

CoreVerify reads:

   4. If KeyValidate(PK) is INVALID, return INVALID
   5. xP = pubkey_to_point(PK)

Is the implementer actually obliged to make two calls to pubkey_to_point? Obviously not. Again, it's the spirit that counts...
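
For instance, a conforming implementation can fold KeyValidate's checks into CoreVerify and deserialize once, something like this (a sketch, assuming KeyValidate's remaining checks are the identity and subgroup checks):

   4. xP = pubkey_to_point(PK)
   5. If xP is INVALID, return INVALID
   6. If xP is the identity element, return INVALID
   7. If not subgroup_check(xP), return INVALID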

dot-asm commented 3 years ago

I feel that I have to apologize for driving this discussion effectively off topic. I mean, the initial question effectively was "are there circumstances when one actually can omit subgroup checks on individual elements?" The answer to that question appears to be "no." I suppose I should have replied in the context of #44... Sorry!