After resolving #10 I realized that the biggest bottleneck in the proving time is having to call the `deserialize` routine on each pubkey (curve point) that was serialized into the keyset file (pre-converted to permissible, but that's not relevant here; it's still one curve point per pubkey).
The deserialization call here: https://github.com/AdamISZ/aut-ct/blob/748f3005c39a6f9478fe77c1b2a0bc7fcc84eebc/src/utils.rs#L137 now takes about 9.5 seconds, whereas calling `deserialize_compressed`, which performs extra sanity checks (curve and subgroup membership) on each point, takes over 100 seconds.
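For anyone wanting to reproduce the comparison, a minimal timing sketch along these lines should show the difference. Note this is only a sketch, not the project code: it assumes the `ark-secp256k1` crate for concreteness (the real code is generic over a curve config `P0`), and `n` is arbitrary:

```rust
// Minimal timing sketch (not the project code): compares checked vs
// unchecked compressed-point deserialization over n random points.
use ark_ec::CurveGroup;
use ark_secp256k1::{Affine, Projective};
use ark_serialize::{CanonicalDeserialize, CanonicalSerialize};
use ark_std::UniformRand;
use std::time::Instant;

fn main() {
    let mut rng = ark_std::test_rng();
    let n = 100_000; // arbitrary; scale to taste
    // Serialize n random points to compressed bytes, standing in for the keyset file.
    let pts_bin: Vec<Vec<u8>> = (0..n)
        .map(|_| {
            let mut buf = Vec::new();
            Projective::rand(&mut rng)
                .into_affine()
                .serialize_compressed(&mut buf)
                .unwrap();
            buf
        })
        .collect();

    // Checked path: validates curve and subgroup membership per point.
    let t = Instant::now();
    let _v: Vec<Affine> = pts_bin
        .iter()
        .map(|b| Affine::deserialize_compressed(&b[..]).unwrap())
        .collect();
    println!("deserialize_compressed:           {:?}", t.elapsed());

    // Unchecked path: skips the validation, hence the large speedup.
    let t = Instant::now();
    let _v: Vec<Affine> = pts_bin
        .iter()
        .map(|b| Affine::deserialize_compressed_unchecked(&b[..]).unwrap())
        .collect();
    println!("deserialize_compressed_unchecked: {:?}", t.elapsed());
}
```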
This 90% speedup has by definition solved "most" of the problem, but I am still not satisfied that a cost linear in simply reading in the public keys ends up being the overwhelming part of the proof cost: extrapolating, running a proof over a 1M-pubkey set would (probably) spend around 30 seconds just reading the data, dwarfing the 2-5 seconds needed to actually run the cryptographic proof.
For now, the situation is acceptable as long as we are not working with keysets larger than a few hundred thousand, but it's very suboptimal.
PS. Just to check, I tried two different Rust syntaxes for reading in the keys, but unsurprisingly it made no noticeable difference in performance:
```rust
let leaf_commitments: Vec<Affine<P0>> = pts_bin.iter()
    .map(|x| <Affine<P0>>::deserialize_compressed_unchecked(&x[..])
        .expect("Failed to deserialize point"))
    .collect();
```
or
```rust
let mut leaf_commitments: Vec<Affine<P0>> = Vec::new();
for a in pts_bin.into_iter() {
    let x = <Affine<P0>>::deserialize_compressed_unchecked(&a[..])
        .expect("Failed to deserialize point");
    leaf_commitments.push(x);
}
```
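Both forms boil down to the same per-point call, so the identical timing is expected. For what it's worth, a slightly tidier variant (a sketch under the same assumptions as the snippets above, with `pts_bin` and `P0` as before) collects into a `Result`, so a malformed point surfaces once as an error that a caller could propagate with `?` instead of panicking:

```rust
use ark_serialize::{CanonicalDeserialize, SerializationError};

// Same deserialization work; failure is surfaced as a single Result.
let leaf_commitments: Vec<Affine<P0>> = pts_bin.iter()
    .map(|x| <Affine<P0>>::deserialize_compressed_unchecked(&x[..]))
    .collect::<Result<Vec<_>, SerializationError>>()
    .expect("Failed to deserialize point"); // or `?` in a fallible fn
```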
Closing this for now, as there's no concrete idea of how this could be sped up, and current performance is probably OK for most situations. I may well re-open in future if an idea comes up.