Determine eth2_aggregate_pubkeys([G1_POINT_AT_INFINITY]) result #2538
Comments
In the spec at https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-bls-signature-04#section-2.5 […] From the discussion in cfrg/draft-irtf-cfrg-bls-signature#33 (comment), the next revision of the spec will explicitly mention that […]. I think having […]. Regarding future usage of […].
I too like the idea of having it […]. I think it's also beneficial to fail as early as possible when given malicious / malformed data, since that reduces the edge cases and improves performance; being able to reject these at deserialization time is therefore beneficial.
As has been mentioned above, having it marked as […]

If we do want to represent "emptiness", maybe such a check should be done at the application layer rather than adding it to the core BLS library.
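An application-layer guard of the kind suggested here could be sketched as follows; `guard_pubkeys` is a hypothetical helper for illustration, not part of any BLS library:

```python
# Hypothetical application-layer guard: reject the serialized G1 identity
# pubkey before handing keys to the core BLS library, leaving the library
# itself unchanged.

G1_POINT_AT_INFINITY = b'\xc0' + b'\x00' * 47  # compressed identity encoding

def guard_pubkeys(pubkeys):
    """Raise ValueError if any pubkey is the serialized G1 identity point."""
    for pk in pubkeys:
        if pk == G1_POINT_AT_INFINITY:
            raise ValueError("point-at-infinity pubkey rejected at application layer")
    return pubkeys
```

This keeps the "emptiness" policy decision out of the cryptographic core, at the cost of every caller having to remember to apply the guard.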
@mratsim @kirk-baird @nisdas
IIUC, (1) the discussion was mainly about subgroup checks, but (2) Milagro already checks […]
Yes, it's mainly about subgroup checks, but it then drifted to KeyValidate / all checks; see the quote above.
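For reference, the KeyValidate flow from the IETF draft (deserialize, reject the identity point, subgroup check) can be sketched in Python. Only the cheap byte-level flag checks on the compressed encoding are implemented here; curve deserialization and the subgroup check are stubbed out, and the function name is illustrative rather than any real library's API:

```python
# Sketch of KeyValidate from draft-irtf-cfrg-bls-signature-04, section 2.5,
# reduced to byte-level checks on the 48-byte compressed G1 encoding
# (ZCash serialization convention: bit 7 = compressed, bit 6 = infinity).

def key_validate_structural(pk: bytes) -> bool:
    """Cheap structural part of KeyValidate: encoding and identity checks."""
    if len(pk) != 48:            # compressed G1 pubkeys are 48 bytes
        return False
    if not (pk[0] & 0x80):       # bit 7: compression flag must be set
        return False
    if pk[0] & 0x40:             # bit 6: infinity flag -> identity pubkey
        return False             # KeyValidate rejects the identity point
    # A full KeyValidate would now deserialize the point and verify
    # subgroup membership before accepting the key.
    return True
```

Under this flow, the serialized G1 identity (`b'\xc0' + b'\x00' * 47`) is rejected before any curve arithmetic happens.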
BLST provides the primitives; there is no check by default on deserialization: https://github.com/supranational/blst/blob/3f7d97e2/bindings/rust/src/lib.rs#L500-L513

```rust
pub fn deserialize(pk_in: &[u8]) -> Result<Self, BLST_ERROR> {
    if (pk_in.len() == $pk_ser_size && (pk_in[0] & 0x80) == 0)
        || (pk_in.len() == $pk_comp_size && (pk_in[0] & 0x80) != 0)
    {
        let mut pk = <$pk_aff>::default();
        let err = unsafe { $pk_deser(&mut pk, pk_in.as_ptr()) };
        if err != BLST_ERROR::BLST_SUCCESS {
            return Err(err);
        }
        Ok(Self { point: pk })
    } else {
        Err(BLST_ERROR::BLST_BAD_ENCODING)
    }
}
```

A validate proc can be called afterwards: https://github.com/supranational/blst/blob/3f7d97e2/bindings/rust/src/lib.rs#L443-L459

```rust
// key_validate
pub fn validate(&self) -> Result<(), BLST_ERROR> {
    unsafe {
        if $pk_is_inf(&self.point) {
            return Err(BLST_ERROR::BLST_PK_IS_INFINITY);
        }
        if !$pk_in_group(&self.point) {
            return Err(BLST_ERROR::BLST_POINT_NOT_IN_GROUP);
        }
    }
    Ok(())
}

pub fn key_validate(key: &[u8]) -> Result<Self, BLST_ERROR> {
    let pk = PublicKey::from_bytes(key)?;
    pk.validate()?;
    Ok(pk)
}
```

When aggregating, you have a boolean to indicate whether validation is needed or was already cached: https://github.com/supranational/blst/blob/3f7d97e2/bindings/rust/src/lib.rs#L558-L614

```rust
// Aggregate
pub fn aggregate(
    pks: &[&PublicKey],
    pks_validate: bool,
) -> Result<Self, BLST_ERROR> {
    if pks.len() == 0 {
        return Err(BLST_ERROR::BLST_AGGR_TYPE_MISMATCH);
    }
    if pks_validate {
        pks[0].validate()?;
    }
    let mut agg_pk = AggregatePublicKey::from_public_key(pks[0]);
    for s in pks.iter().skip(1) {
        if pks_validate {
            s.validate()?;
        }
        unsafe {
            $pk_add_or_dbl_aff(
                &mut agg_pk.point,
                &agg_pk.point,
                &s.point,
            );
        }
    }
    Ok(agg_pk)
}

pub fn aggregate_serialized(
    pks: &[&[u8]],
    pks_validate: bool,
) -> Result<Self, BLST_ERROR> {
    // TODO - threading
    if pks.len() == 0 {
        return Err(BLST_ERROR::BLST_AGGR_TYPE_MISMATCH);
    }
    let mut pk = if pks_validate {
        PublicKey::key_validate(pks[0])?
    } else {
        PublicKey::from_bytes(pks[0])?
    };
    let mut agg_pk = AggregatePublicKey::from_public_key(&pk);
    for s in pks.iter().skip(1) {
        pk = if pks_validate {
            PublicKey::key_validate(s)?
        } else {
            PublicKey::from_bytes(s)?
        };
        unsafe {
            $pk_add_or_dbl_aff(
                &mut agg_pk.point,
                &agg_pk.point,
                &pk.point,
            );
        }
    }
    Ok(agg_pk)
}
```
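The control flow of blst's flag-gated aggregation quoted above can be mirrored in a short Python sketch. Integers under `+` stand in for G1 points and `reject_identity` stands in for KeyValidate, so this is a toy model of the validation pattern, not real BLS arithmetic:

```python
# Toy model of blst's aggregate(pks, pks_validate) control flow: reject empty
# input, optionally validate each key, then fold with the group operation.
# Integers under '+' stand in for G1 points; 0 plays the point at infinity.

class BLSError(Exception):
    pass

def aggregate(pubkeys, validate_fn, pks_validate=False):
    if len(pubkeys) == 0:
        raise BLSError("cannot aggregate an empty list")
    agg = None
    for pk in pubkeys:
        if pks_validate:
            validate_fn(pk)
        # stand-in for $pk_add_or_dbl_aff
        agg = pk if agg is None else agg + pk
    return agg

def reject_identity(pk):
    """Stand-in validator: 0 plays the role of the point at infinity."""
    if pk == 0:
        raise BLSError("public key is the point at infinity")
```

With `pks_validate=True` the identity input fails, mirroring blst's `key_validate` path; with `pks_validate=False` it slips through, which is exactly the library divergence this issue is about.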
Closing via #2539.
I'd like to confirm what's the expected behavior of the `eth2_aggregate_pubkeys([G1_POINT_AT_INFINITY])` call. What should be the `eth2_aggregate_pubkeys([G1_POINT_AT_INFINITY])` result?

Since (i) `AggregatePKs` is not defined in the IETF standard and (ii) `AggregatePKs` functionality is used in Altair, we defined an `eth2_aggregate_pubkeys` function to aggregate the pubkeys. That is, the aggregate-pubkeys logic will be in the consensus layer since the Altair HF. Since it's not part of the IETF standard, the BLS libs may have implemented the "basic" `AggregatePKs` API differently. With `G1_POINT_AT_INFINITY = b'\xc0' + b'\x00' * 47`:

- `py_ecc._AggregatePKs([G1_POINT_AT_INFINITY])` is VALID
- `milagro_bls._AggregatePKs([G1_POINT_AT_INFINITY])` is INVALID
  - `PublicKeys` cannot be the point at infinity in Milagro: "Should we validate the pubkeys parameters of `fast_aggregate_verify`?" sigp/milagro_bls#40 (comment). Milagro applies `KeyValidate`, so the infinity point is not in the allowed domain. It's similar to how we handled pubkeys in IETF `FastAggregateVerify`.

Discussions:

- Should we add `eth2_aggregate_pubkeys([G1_POINT_AT_INFINITY])` test vectors? `eth2_aggregate_pubkeys([G1_POINT_AT_INFINITY])` should never be triggered in Altair, since all the possible pubkeys should have passed `KeyValidate`.
- […] `eth2_aggregate_pubkeys` and other Ethereum-specific BLS functions. E.g., we may want to use the G1 point at infinity to represent "emptiness" in other cases, similar to how we use `G2_POINT_AT_INFINITY` in `eth2_fast_aggregate_verify`.
- Should we make `eth2_aggregate_pubkeys([G1_POINT_AT_INFINITY])` invalid in the test vectors?
- […] `eth2_fast_aggregate_verify` description or pseudo code.

/cc @kirk-baird @mratsim @benjaminion @nisdas @wemeetagain @ChihChengLiang 🙏
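The `G1_POINT_AT_INFINITY` constant quoted above can be recognized from its flag bits alone, per the ZCash BLS12-381 serialization convention used by both py_ecc and Milagro; a small sketch:

```python
# Decode the header byte of the compressed G1 identity encoding used above.
# In the ZCash BLS12-381 serialization convention the top three bits of the
# first byte are flags: bit 7 = compressed, bit 6 = infinity, bit 5 = y-sign.

G1_POINT_AT_INFINITY = b'\xc0' + b'\x00' * 47

flags = G1_POINT_AT_INFINITY[0]
is_compressed = bool(flags & 0x80)   # 0xc0 has bit 7 set
is_infinity = bool(flags & 0x40)     # 0xc0 has bit 6 set
# The remaining bits and bytes must all be zero for a canonical identity.
rest_is_zero = (flags & 0x3f) == 0 and not any(G1_POINT_AT_INFINITY[1:])
```

Both libraries decode these bytes identically; they diverge only in whether a `KeyValidate`-style rejection of the infinity flag is applied before aggregation.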