prysmaticlabs / prysm

Go implementation of Ethereum proof of stake
https://www.offchainlabs.com
GNU General Public License v3.0

Parallelize BLS signature verification #10536

Closed Giulio2002 closed 2 years ago

Giulio2002 commented 2 years ago

💎 Issue

As of now, Prysm processes BLS signatures without any parallelization. This makes initial sync slow, and BLS signature verification is the main bottleneck for consensus-layer clients.

Background

I understand that optimizing initial sync is not a priority because of weak subjectivity, but I still think it would be a welcome addition to the mix, as I believe it would reduce initial-sync time by a significant degree.

Description

What I would do is the following:

nisdas commented 2 years ago

This is already done here: https://github.com/prysmaticlabs/prysm/blob/develop/crypto/bls/blst/signature.go#L254 https://github.com/supranational/blst/blob/master/bindings/go/blst.go#L512

Signature verification is multi-threaded right now. If you want a larger batch limit, you can specify --block-batch-limit. The reason we do not use too high a batch size is that it breaks rate limits with other peers.
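For readers unfamiliar with the pattern being described, here is a minimal sketch of fanning signature-batch verification out across worker goroutines. All names are hypothetical and `verifyBatch` is a stub standing in for a real BLS batch verifier (such as blst's); this is not Prysm's actual code.

```go
// Illustrative sketch only: a worker pool that verifies signature batches
// in parallel. verifyBatch is a stand-in for a real BLS batch verifier.
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// sigJob stands in for one batch of (pubkey, message, signature) triples.
type sigJob struct {
	id    int
	valid bool // placeholder for real cryptographic input
}

// verifyBatch is a stub; in practice this would call into a BLS library's
// batch verification routine.
func verifyBatch(j sigJob) bool {
	return j.valid
}

// verifyParallel fans jobs out across GOMAXPROCS workers and reports
// whether every batch verified.
func verifyParallel(jobs []sigJob) bool {
	in := make(chan sigJob)
	var wg sync.WaitGroup
	var mu sync.Mutex
	ok := true

	for w := 0; w < runtime.GOMAXPROCS(0); w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range in {
				if !verifyBatch(j) {
					mu.Lock()
					ok = false
					mu.Unlock()
				}
			}
		}()
	}
	for _, j := range jobs {
		in <- j
	}
	close(in)
	wg.Wait()
	return ok
}

func main() {
	jobs := make([]sigJob, 64)
	for i := range jobs {
		jobs[i] = sigJob{id: i, valid: true}
	}
	fmt.Println(verifyParallel(jobs)) // prints "true"
}
```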

Giulio2002 commented 2 years ago

Is BLS aggregate signature verification multi-threaded under the hood? I thought aggregate verification is not based on multi-threading but rather on some BLS properties (not a mathematician myself lol). Anyway, 64 blocks is not enough for any significant improvement with multi-threading. What I meant was not to increase the raw number of blocks we request, but rather to accumulate these batches into one big batch we keep internally and, once we surpass a certain threshold, aggregate and verify them all at once.
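The accumulate-then-verify idea proposed here could be sketched as follows. This is a hypothetical illustration, not Prysm code: `verifyAll` is a stub for a single batched BLS verification over everything accumulated so far.

```go
// Minimal sketch: incoming signature sets are accumulated into one internal
// batch, and only once the batch crosses a threshold is a single combined
// verification run. All names are hypothetical.
package main

import "fmt"

// sigSet stands in for the (pubkeys, message, signature) data of one set.
type sigSet struct{ valid bool }

type batcher struct {
	threshold int
	pending   []sigSet
	verified  int // number of sets verified so far (for illustration)
}

// verifyAll is a stub for one batched BLS verification over all sets.
func verifyAll(sets []sigSet) bool {
	for _, s := range sets {
		if !s.valid {
			return false
		}
	}
	return true
}

// add queues a set and flushes once the threshold is reached.
func (b *batcher) add(s sigSet) bool {
	b.pending = append(b.pending, s)
	if len(b.pending) < b.threshold {
		return true // deferred; nothing verified yet
	}
	return b.flush()
}

// flush verifies everything accumulated so far in one call.
func (b *batcher) flush() bool {
	ok := verifyAll(b.pending)
	b.verified += len(b.pending)
	b.pending = b.pending[:0]
	return ok
}

func main() {
	b := &batcher{threshold: 4}
	for i := 0; i < 10; i++ {
		b.add(sigSet{valid: true})
	}
	b.flush() // verify the remainder below the threshold
	fmt.Println(b.verified) // prints "10"
}
```

The trade-off, as noted later in the thread, is that a failed combined verification gives no information about which set was invalid, so a real implementation would need a fallback path.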

prestonvanloon commented 2 years ago

anyway 64 blocks is not enough for any significant improvement with multi-threading

Could you provide a benchmark to suggest a better number?

Giulio2002 commented 2 years ago

I can try doing this.

nisdas commented 2 years ago

Anyway, 64 blocks is not enough for any significant improvement with multi-threading. What I meant was not to increase the raw number of blocks we request, but rather to accumulate these batches into one big batch we keep internally and, once we surpass a certain threshold, aggregate and verify them all at once.

If you look at the method under the hood, that is exactly what it does. 64 blocks != 64 signatures; there are diminishing returns to bigger batches, simply because aggregates are heavier on mainnet currently. A block contains many different signature sets to verify:

-> Proposer signature
-> RANDAO signature
-> Sync contribution signature
-> Aggregate signatures

The last set of signatures takes by far the longest to verify. There are a maximum of 128 aggregates in a block, so you would have to verify a total of 131 signatures per block. With 64 blocks this comes out to 8384 signatures per batch. A larger batch offers limited improvement, as the only gain would be in the final exponentiation. We would still have to aggregate all the public keys for each aggregate before we could verify a single aggregate signature. Repeat that step for the other 127 aggregates and the time spent aggregating pubkeys becomes non-trivial; a larger batch wouldn't improve the time taken for those steps.
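The arithmetic above, spelled out (function name is illustrative): with mainnet's limit of 128 aggregates per block plus the proposer, RANDAO, and sync-contribution signatures, a 64-block batch contains 64 * (128 + 3) = 8384 signatures.

```go
package main

import "fmt"

// signaturesPerBatch computes the total signature count for a batch of
// blocks: maxAggregates aggregate signatures per block, plus otherSigs
// (proposer, RANDAO, sync contribution) per block.
func signaturesPerBatch(blocks, maxAggregates, otherSigs int) int {
	return blocks * (maxAggregates + otherSigs)
}

func main() {
	fmt.Println(signaturesPerBatch(64, 128, 3)) // prints "8384"
}
```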

rauljordan commented 2 years ago

Hi @Giulio2002, does this answer your questions? Let us know if we can still help with anything; if so, we can re-open.