Closed hwwhww closed 2 years ago
In the spec at https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-bls-signature-04#section-2.5, the infinity pubkey is invalid.
From the discussion in https://github.com/cfrg/draft-irtf-cfrg-bls-signature/issues/33#issuecomment-865170648, the next revision of the spec will explicitly mention that KeyValidate can be cached by doing it only at deserialization (if a key is reused multiple times, like validator pubkeys), or before verification (current spec).
I think having eth2_aggregate_pubkeys([G1_POINT_AT_INFINITY]) be invalid will cause fewer edge cases for all clients. We can just apply KeyValidate at deserialization. Otherwise, special cases need to be applied for aggregation in our libraries.
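To make the "apply KeyValidate at deserialization" idea concrete, here is a minimal, hypothetical Python sketch. It only checks the 48-byte compressed encoding of the G1 point at infinity (the b'\xc0' + b'\x00' * 47 constant quoted later in this thread); the curve and subgroup checks that real libraries perform are deliberately omitted.

```python
# Hypothetical sketch: reject the infinity pubkey at deserialization time.
# The 48-byte compressed encoding of G1_POINT_AT_INFINITY is taken from this
# thread; curve/subgroup checks done by real libraries are omitted here.
G1_POINT_AT_INFINITY = b'\xc0' + b'\x00' * 47

class InvalidPubkey(Exception):
    pass

def deserialize_pubkey(data: bytes) -> bytes:
    if len(data) != 48:
        raise InvalidPubkey("bad length")
    if not data[0] & 0x80:
        raise InvalidPubkey("expected compressed encoding")
    if data == G1_POINT_AT_INFINITY:
        # KeyValidate-style check folded into deserialization:
        # the identity point is never a valid public key.
        raise InvalidPubkey("point at infinity")
    # A real library would also check that the point is on the curve
    # and in the correct subgroup before returning it.
    return data
```

With this in place, aggregation code never sees the infinity pubkey, which is exactly the "no special cases in aggregation" property argued for above.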
Regarding future usage of G1_POINT_AT_INFINITY: in Nimbus we distinguish between the serialized representation of BLS objects, which allows both all-zero objects and infinity objects (0xc000...), and the deserialized version, which is only for spec-compliant cases.
I too like the idea of having it INVALID, since it would fail KeyValidate, hence we can cache the validation (which Lighthouse does).

I think it's also beneficial to fail as early as possible when given malicious / malformed data, as it reduces the edge cases and improves performance; being able to reject these at deserialisation time is therefore beneficial.
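The caching pattern described here can be sketched as follows. This is an illustrative Python sketch, not any client's actual code: `key_validate` is a hypothetical stand-in for a real library's KeyValidate (it only checks length and the infinity encoding), and the point addition itself is omitted.

```python
# Sketch of caching KeyValidate at deserialization: the check runs once, and
# every later use of the key (verification, aggregation) can skip it.
G1_POINT_AT_INFINITY = b'\xc0' + b'\x00' * 47

def key_validate(data: bytes) -> bool:
    # Placeholder: real KeyValidate also checks curve/subgroup membership.
    return len(data) == 48 and data != G1_POINT_AT_INFINITY

class CachedPubkey:
    """Validated once at construction; safe to reuse without re-checking."""
    def __init__(self, data: bytes):
        if not key_validate(data):
            raise ValueError("KeyValidate failed")
        self.data = data  # known-good from here on

def aggregate_cached(pubkeys: list) -> None:
    # All inputs already passed KeyValidate at construction, so this is the
    # fast path with no per-call validation; point addition itself is omitted.
    assert len(pubkeys) > 0
    for pk in pubkeys:
        assert isinstance(pk, CachedPubkey)  # validation guaranteed by type
```

Malformed data is rejected at the earliest possible point (construction), and hot paths like aggregation pay no repeated validation cost.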
As has been mentioned above, having it marked as INVALID would be the change with the least friction. It does seem that all clients cache keys at deserialization. Since KeyValidate checks for G1_POINT_AT_INFINITY, this can also be applied to eth2_aggregate_pubkeys([G1_POINT_AT_INFINITY]). If we choose to mark it as VALID, it introduces some not very nice edge cases in terms of usage, with very little benefit.
we may want to use G1 point at infinity to represent the "emptiness" in other cases; similar to how we use G2_POINT_AT_INFINITY in eth2_fast_aggregate_verify

If we do want to represent 'emptiness', maybe such a check should be done at the application layer rather than adding it to the core BLS library.
@mratsim @kirk-baird @nisdas
Thank you for your responses! I think we have a rough consensus that it's safer to have eth2_aggregate_pubkeys([G1_POINT_AT_INFINITY]) be INVALID. 🙂
@mratsim
From the discussion in https://github.com/cfrg/draft-irtf-cfrg-bls-signature/issues/33#issuecomment-865170648, the next revision of the spec will explicitly mention that KeyValidate can be cached by doing it only at deserialization (if a key is reused multiple times, like validator pubkeys), or before verification (current spec).
IIUC, (1) the discussion was mainly about subgroup checks, but (2) G1_POINT_AT_INFINITY can pass subgroup checks anyway, so it won't raise an error in deserialization?

Milagro already checks KeyValidate during deserialization. Does anyone know how blst handles it now?
IIUC, (1) the discussion was mainly about subgroup checks, but (2) G1_POINT_AT_INFINITY can pass subgroup checks anyway, so it won't raise an error in deserialization?
Yes, it's mainly about subgroup checks, but it then drifted to KeyValidate / all checks; see the quote:

deserialize = deserialize + curve check + subgroup check <- default option
deserialize_curve_check = deserialize + curve check
deserialize_unchecked = deserialize only

presumably people who invoke the second or the third API know what they are doing...

Milagro already checks KeyValidate during deserialization. Does anyone know how blst handles it now?
BLST provides the primitives; there is no check by default on deserialization: https://github.com/supranational/blst/blob/3f7d97e2/bindings/rust/src/lib.rs#L500-L513
pub fn deserialize(pk_in: &[u8]) -> Result<Self, BLST_ERROR> {
    if (pk_in.len() == $pk_ser_size && (pk_in[0] & 0x80) == 0)
        || (pk_in.len() == $pk_comp_size && (pk_in[0] & 0x80) != 0)
    {
        let mut pk = <$pk_aff>::default();
        let err = unsafe { $pk_deser(&mut pk, pk_in.as_ptr()) };
        if err != BLST_ERROR::BLST_SUCCESS {
            return Err(err);
        }
        Ok(Self { point: pk })
    } else {
        Err(BLST_ERROR::BLST_BAD_ENCODING)
    }
}
and a validate proc can be called afterwards https://github.com/supranational/blst/blob/3f7d97e2/bindings/rust/src/lib.rs#L443-L459
// key_validate
pub fn validate(&self) -> Result<(), BLST_ERROR> {
    unsafe {
        if $pk_is_inf(&self.point) {
            return Err(BLST_ERROR::BLST_PK_IS_INFINITY);
        }
        if !$pk_in_group(&self.point) {
            return Err(BLST_ERROR::BLST_POINT_NOT_IN_GROUP);
        }
    }
    Ok(())
}

pub fn key_validate(key: &[u8]) -> Result<Self, BLST_ERROR> {
    let pk = PublicKey::from_bytes(key)?;
    pk.validate()?;
    Ok(pk)
}
When aggregating, you have a boolean to indicate whether validation is needed or was already cached: https://github.com/supranational/blst/blob/3f7d97e2/bindings/rust/src/lib.rs#L558-L614
// Aggregate
pub fn aggregate(
    pks: &[&PublicKey],
    pks_validate: bool,
) -> Result<Self, BLST_ERROR> {
    if pks.len() == 0 {
        return Err(BLST_ERROR::BLST_AGGR_TYPE_MISMATCH);
    }
    if pks_validate {
        pks[0].validate()?;
    }
    let mut agg_pk = AggregatePublicKey::from_public_key(pks[0]);
    for s in pks.iter().skip(1) {
        if pks_validate {
            s.validate()?;
        }
        unsafe {
            $pk_add_or_dbl_aff(
                &mut agg_pk.point,
                &agg_pk.point,
                &s.point,
            );
        }
    }
    Ok(agg_pk)
}

pub fn aggregate_serialized(
    pks: &[&[u8]],
    pks_validate: bool,
) -> Result<Self, BLST_ERROR> {
    // TODO - threading
    if pks.len() == 0 {
        return Err(BLST_ERROR::BLST_AGGR_TYPE_MISMATCH);
    }
    let mut pk = if pks_validate {
        PublicKey::key_validate(pks[0])?
    } else {
        PublicKey::from_bytes(pks[0])?
    };
    let mut agg_pk = AggregatePublicKey::from_public_key(&pk);
    for s in pks.iter().skip(1) {
        pk = if pks_validate {
            PublicKey::key_validate(s)?
        } else {
            PublicKey::from_bytes(s)?
        };
        unsafe {
            $pk_add_or_dbl_aff(
                &mut agg_pk.point,
                &agg_pk.point,
                &pk.point,
            );
        }
    }
    Ok(agg_pk)
}
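Putting the thread's conclusion together, the agreed semantics can be sketched in Python: the input list must be non-empty and every key must pass a KeyValidate-style check, so eth2_aggregate_pubkeys([G1_POINT_AT_INFINITY]) is INVALID. This is an illustrative sketch, not the spec's actual pseudocode; _key_validate is simplified and _aggregate_points is a hypothetical stub, since real G1 point addition requires a BLS library such as py_ecc, milagro_bls, or blst.

```python
# Sketch of the agreed eth2_aggregate_pubkeys semantics: non-empty input,
# every key passes a KeyValidate-style check, so the infinity pubkey fails.
G1_POINT_AT_INFINITY = b'\xc0' + b'\x00' * 47

def _key_validate(pubkey: bytes) -> bool:
    # Simplified: real KeyValidate also checks curve and subgroup membership.
    return len(pubkey) == 48 and pubkey != G1_POINT_AT_INFINITY

def _aggregate_points(pubkeys):
    # Hypothetical backend stub: actual G1 addition needs a BLS library.
    raise NotImplementedError("needs a BLS library for G1 point addition")

def eth2_aggregate_pubkeys(pubkeys):
    assert len(pubkeys) > 0          # empty aggregation is invalid
    for pk in pubkeys:
        assert _key_validate(pk)     # rejects G1_POINT_AT_INFINITY
    return _aggregate_points(pubkeys)
```

Because validation happens before any point arithmetic, the blst-style pks_validate flag (or cached validation at deserialization) is enough to satisfy these semantics without special-casing infinity inside the aggregation loop.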
closing via #2539
I'd like to confirm what's the expected behavior of the eth2_aggregate_pubkeys([G1_POINT_AT_INFINITY]) call. What should the eth2_aggregate_pubkeys([G1_POINT_AT_INFINITY]) result be?

Since (i) AggregatePKs is not defined in the IETF standard and (ii) the AggregatePKs functionality is used in Altair, we defined an eth2_aggregate_pubkeys function to aggregate the pubkeys. That is, the aggregate-pubkeys logic will be in the consensus layer since the Altair HF.

Since it's not part of the IETF standard, the BLS libs may have implemented the "basic" AggregatePKs API differently:

G1_POINT_AT_INFINITY = b'\xc0' + b'\x00' * 47

- py_ecc._AggregatePKs([G1_POINT_AT_INFINITY]) is VALID
- milagro_bls._AggregatePKs([G1_POINT_AT_INFINITY]) is INVALID
  - PublicKeys cannot be the point at infinity in Milagro: https://github.com/sigp/milagro_bls/issues/40#issuecomment-712645746. Milagro applies KeyValidate, so the infinity point is not in the allowed domain. It's similar to how we handled pubkeys in IETF FastAggregateVerify.

Discussions

- Should we add eth2_aggregate_pubkeys([G1_POINT_AT_INFINITY]) test vectors? Note that eth2_aggregate_pubkeys([G1_POINT_AT_INFINITY]) should never be triggered in Altair, since all the possible pubkeys should have passed KeyValidate.
- How should eth2_aggregate_pubkeys and other Ethereum-specific BLS functions treat the infinity point? e.g., we may want to use the G1 point at infinity to represent "emptiness" in other cases, similar to how we use G2_POINT_AT_INFINITY in eth2_fast_aggregate_verify.
- Should we make eth2_aggregate_pubkeys([G1_POINT_AT_INFINITY]) invalid in the test vectors, and document it in the eth2_fast_aggregate_verify description or pseudo code?

/cc @kirk-baird @mratsim @benjaminion @nisdas @wemeetagain @ChihChengLiang 🙏
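The py_ecc / milagro_bls divergence above is exactly what a shared test vector would pin down. As a hedged illustration (the function name and dict layout here are hypothetical, not the actual consensus-spec-tests format), a test case encoding "INVALID" could look like this:

```python
# Hypothetical shape of an eth2_aggregate_pubkeys test vector pinning the
# infinity-pubkey case to INVALID; `run_aggregate_pubkeys_case` and the dict
# layout are illustrative, not the real consensus-spec-tests format.
G1_POINT_AT_INFINITY = b'\xc0' + b'\x00' * 47

test_case = {
    "input": [G1_POINT_AT_INFINITY],
    "output": None,  # None encodes "INVALID": the call must fail
}

def run_aggregate_pubkeys_case(case, aggregate_fn):
    try:
        result = aggregate_fn(case["input"])
    except Exception:
        result = None  # any failure counts as INVALID
    assert result == case["output"], "client disagrees with the vector"
```

A milagro-style implementation that rejects the infinity pubkey would pass such a vector, while a permissive py_ecc-style _AggregatePKs would fail it, surfacing the divergence.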