Closed by @hwwhww 2 years ago
@hwwhww For testing compression/decompression of G1/G2 points, the Go API in BLST
does have support for this, so it's not an issue on our end. The serialization test format looks good to me :+1:
On 1.2: BLST removed subgroup checks when decompressing points as of v3; this was just merged into Prysm in https://github.com/prysmaticlabs/prysm/pull/7971. I imagine it would be the same for the other language bindings too.
- 1.1. I naively assumed that the BLST bindings provide these APIs for all supported languages.
The group check is an expensive operation, and it's argued that the application is entitled to choose when to perform it, primarily because there are situations where multiple checks can be performed in parallel. In other words, blst's deserialization/uncompression subroutines don't perform group checks; it's the application's responsibility to either make an explicit in-group-check call right after deserialization, or pass perform-group-checks-as-you-go flags to higher-level procedures and exploit parallelism whenever possible.
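To make the explicit in-group check concrete: a point P is in the order-r subgroup of G1 iff r·P is the point at infinity. Below is a naive, self-contained pure-Python sketch over BLS12-381's G1 (the constants are the standard curve parameters; the function names `ec_add`, `ec_mul`, and `in_g1` are hypothetical, and real libraries such as blst use much faster dedicated subgroup checks rather than a full scalar multiplication):

```python
# BLS12-381 base field modulus and G1 subgroup order (standard constants).
P = 0x1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaab
R = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001
# Standard affine G1 generator; the curve is y^2 = x^3 + 4.
G = (0x17f1d3a73197d7942695638c4fa9ac0fc3688c4f9774b905a14e3a3f171bac586c55e83ff97a1aeffb3af00adb22c6bb,
     0x08b3f481e3aaa0f1a09e30ed741d8ae4fcf5e095d5d00af600db18cb2c04b3edd03cc744a2888ae40caa232946c5e7e1)

def ec_add(p, q):
    """Affine point addition; None represents the point at infinity."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                  # P + (-P) = infinity
    if p == q:
        lam = 3 * x1 * x1 * pow(2 * y1, P - 2, P) % P  # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, P - 2, P) % P   # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(p, n):
    """Double-and-add scalar multiplication."""
    acc = None
    while n:
        if n & 1:
            acc = ec_add(acc, p)
        p = ec_add(p, p)
        n >>= 1
    return acc

def in_g1(p):
    """Explicit subgroup check: P is in the order-R subgroup iff R*P = infinity."""
    return ec_mul(p, R) is None
```

An application following the pattern described above would call something like `in_g1` once per deserialized point, ideally batching the calls across worker threads.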
Speaking of the serialization format in more general terms: as data is deserialized and converted into the internal representation, it implicitly gets reduced modulo the field prime, so the deserialization procedure can handle non-reduced input seamlessly. But it's only natural to assume that if the input is not reduced, somebody is trying to mess with you. For this reason the first thing the [blst] deserialization procedure does is ensure that the input is fully reduced. However, this is not actually specified, and a formal concern can be, and indeed was, raised. So while we are at it, it would be appropriate to explicitly specify that the input is expected to be fully reduced, and to provide a corresponding test vector.
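Such a test vector is easy to construct: take a 48-byte compressed encoding whose 381-bit x-coordinate field is ≥ the modulus; a strict deserializer should reject it. A minimal sketch of the pre-check (the function name `is_fully_reduced` is hypothetical):

```python
# BLS12-381 base field modulus (standard constant).
P = 0x1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaab

def is_fully_reduced(comp: bytes) -> bool:
    """Check that the x-coordinate of a 48-byte compressed G1 encoding is
    strictly less than the field modulus. The top 3 bits are flag bits, so
    only the low 381 bits carry the coordinate."""
    assert len(comp) == 48
    x = int.from_bytes(comp, "big") & ((1 << 381) - 1)
    return x < P
```

Since p occupies 381 bits, any x in the range [p, 2^381) is non-reduced and makes a valid negative test vector.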
1.1. The C API that we use has support for decompression/uncompression of the Zcash format.
1.2. We do infinity checks and subgroup checks after decompression and skip them at verification, because we do a lot of verification with the same public key, so there's no need to pay for an expensive subgroup check every time.
These APIs are all available in Apache Milagro Rust and could trivially be added to milagro_bls.
With respect to subgroup checking, in Lighthouse we do that for PublicKeys at serialization time and for Signatures at verification time. So it may not be on by default for Lighthouse, but it can easily be added to tests. Happy either way, whether or not we include the subgroup checks.
From a fuzzing perspective, the existing BLS work (available here) included the 3 MSBs, but it was only aimed at the compressed encoding (i.e. MSB0 = 1).
Here is a case from November 2019 where the infinity bit was set and we didn't properly check that the rest of the bits were 0: https://github.com/status-im/nimbus-eth2/issues/555
This was before we skipped the Ethereum signature for testing, so it's from one of the old test vectors.
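The bug class above is exactly a missing flag-bit validation. Under the Zcash-style encoding, the three MSBs of the first byte are, from high to low, the compression flag, the infinity flag, and the sign-of-y flag; when the infinity flag is set, every remaining bit must be zero. A sketch of that check (the function name `check_g1_flags` is hypothetical):

```python
def check_g1_flags(comp: bytes) -> bool:
    """Validate the 3 flag bits of a 48-byte compressed G1 encoding
    (Zcash-style: MSB = compressed, next = infinity, next = sign of y)."""
    assert len(comp) == 48
    c_flag = (comp[0] >> 7) & 1
    b_flag = (comp[0] >> 6) & 1
    if c_flag != 1:          # only compressed encodings are accepted here
        return False
    if b_flag == 1:
        # Point at infinity: all bits below the infinity flag must be zero
        # (this mask also covers the sign flag at bit 381).
        return int.from_bytes(comp, "big") & ((1 << 382) - 1) == 0
    return True
```

An encoding like `0xc0` followed by 46 zero bytes and a trailing `0x01` is the shape of input that slipped through in the linked issue; the check above rejects it.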
Close?
https://github.com/ethereum/bls12-381-tests includes decompression tests (cc @asanso). Compression can be done by clients via round-trip tests to confirm internal consistency.
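Such a round-trip test can be sketched in pure Python. The helpers below are illustrative (names hypothetical) and implement only the happy path of the Zcash-style G1 encoding, with the sign flag defined as `(2 * y) // p`, as in py_ecc:

```python
# BLS12-381 base field modulus (standard constant); G1 is y^2 = x^3 + 4.
P = 0x1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaab

def decompress_g1(comp: bytes):
    """Recover affine (x, y) from a 48-byte compressed encoding
    (non-infinity points only)."""
    assert len(comp) == 48
    assert (comp[0] >> 7) & 1 == 1 and (comp[0] >> 6) & 1 == 0
    a_flag = (comp[0] >> 5) & 1
    x = int.from_bytes(comp, "big") & ((1 << 381) - 1)
    y2 = (pow(x, 3, P) + 4) % P
    y = pow(y2, (P + 1) // 4, P)      # square root; valid since P % 4 == 3
    assert y * y % P == y2, "x is not on the curve"
    if (2 * y) // P != a_flag:        # pick the root matching the sign flag
        y = P - y
    return (x, y)

def compress_g1(pt) -> bytes:
    """Serialize an affine (x, y) point to 48 bytes (non-infinity only)."""
    x, y = pt
    flags = 0b100 | ((2 * y) // P)    # compression flag | sign-of-y flag
    return ((flags << 381) | x).to_bytes(48, "big")
```

Round-tripping the well-known compressed G1 generator (`0x97f1d3a7…`) through `decompress_g1` and back should return the identical 48 bytes, which is the internal-consistency property the round-trip tests would confirm.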
BLS serialization tests
Background
Proposed new test suite
The input and output of our BLS APIs are all in minimal-pubkey-size form (compressed 48-byte pubkey and 96-byte signature). So the functions to test would be:
Discussions
1. Are the APIs available?
2. Did the fuzzing already cover the 3-MSBs edge cases of BLS tests?
/cc @zedt3ster
3. Do you think it would help to reduce the consensus error risks?
/cc @JustinDrake @CarlBeek