Nashtare opened this issue 1 month ago
We will refactor the proving concurrency logic as part of https://github.com/0xPolygonZero/zk_evm/issues/627. Then we can rethink this limitation.
I think we'll soon need some actual benchmarking criteria in CI, because the default parameters here are negatively impacting performance, and we'd ideally not introduce any noticeable regression in `develop` / `main`.
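For concreteness, one possible shape for such a CI gate is sketched below. Everything in it is hypothetical (the `prove_test_payload` helper, the `PROVING_BASELINE_SECS` variable, the 20% margin); it's just a wall-clock regression check on a small fixed payload, not an existing test in the repo:

```rust
use std::time::Instant;

fn prove_test_payload() {
    // Placeholder: prove a small, fixed witness using the in-memory runtime.
}

#[test]
#[ignore] // run explicitly in a dedicated CI benchmarking job
fn bench_block_proving_regression() {
    // Baseline comes from CI configuration; with no baseline set, never fail.
    let baseline_secs: f64 = std::env::var("PROVING_BASELINE_SECS")
        .ok()
        .and_then(|s| s.parse().ok())
        .unwrap_or(f64::INFINITY);

    let start = Instant::now();
    prove_test_payload();
    let elapsed = start.elapsed().as_secs_f64();

    // Fail the job if proving time regresses by more than 20% over the baseline.
    assert!(
        elapsed <= baseline_secs * 1.2,
        "proving took {elapsed:.1}s, more than 20% over the {baseline_secs:.1}s baseline"
    );
}
```

A Criterion-based benchmark with stored baselines would be the more principled version of the same idea; the sketch above is just the cheapest gate that would catch a regression like this one.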
@atanmarko Just as a heads-up, if we can't address it prior to the next release, I'll bump the default pool size to a larger one to prevent any perf regression.
@Nashtare I plan to start on #627 at the end of this week or next week.
I've noticed some block concurrency issues in limited environments (i.e. without unlimited horizontal scaling), which can have a non-negligible overall performance impact. It may be due to #600.
I proved a payload of 20 contiguous blocks with a `t2d-60` instance running 12 simulated workers (in-memory runtime). Because of the limit on parallel block proving, we need a block to be fully proven before adding a new one to the queue. However, because each block proof relies on the previous one, this becomes purely sequential, meaning we need block 1 to finish before kicking off segment proofs for block 17, etc.

I've attached the logs below. If we grep everything related to block 1:
And later, when the proof for block 1 is complete and we add block 17 to the queue:
There is a delta of 2min35s during which only aggregation proofs were performed for this block (and those are orders of magnitude faster than segment proofs).
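To make the bottleneck concrete, here is a minimal sketch of the scheduling pattern; this is not the actual zk_evm scheduler, and `prove_segments`, `prove_block`, and the pool size of 16 are hypothetical stand-ins. It only illustrates why gating queue admission on *completed block proofs* serializes block admission:

```rust
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::{oneshot, Semaphore};

async fn prove_segments(block: u64) {
    // Stand-in for the heavy, worker-parallel segment proving phase.
    tokio::time::sleep(Duration::from_millis(500)).await;
    println!("block {block}: segment proofs done");
}

async fn prove_block(block: u64, prev: Option<oneshot::Receiver<()>>) {
    // Stand-in for aggregation + block proof: cheap, but chained on the previous block proof.
    if let Some(rx) = prev {
        let _ = rx.await;
    }
    tokio::time::sleep(Duration::from_millis(50)).await;
    println!("block {block}: block proof done");
}

#[tokio::main]
async fn main() {
    const POOL_SIZE: usize = 16;
    let pool = Arc::new(Semaphore::new(POOL_SIZE));
    let mut prev_done: Option<oneshot::Receiver<()>> = None;
    let mut handles = Vec::new();

    for block in 1..=20u64 {
        // Block 17 cannot be queued until a permit is released, and the first release
        // only happens once block 1's *entire* proof is done, even though block 1's
        // segment proofs finished much earlier.
        let permit = pool.clone().acquire_owned().await.unwrap();
        let prev = prev_done.take();
        let (tx, rx) = oneshot::channel();
        prev_done = Some(rx);
        handles.push(tokio::spawn(async move {
            prove_segments(block).await;
            prove_block(block, prev).await;
            let _ = tx.send(());
            drop(permit); // the queue slot is freed only after the full block proof
        }));
    }
    for h in handles {
        let _ = h.await;
    }
}
```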
An easy way to deal with this is just to increase the pool size (#644 makes it possible), but this goes against the initial purpose of this feature. Ideally, we'd want to free the queue slot as soon as all the segments have been computed, but this seems a bit hacky.
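For reference, a sketch of what "freeing the queue slot once the segments are computed" could look like, reusing the hypothetical helpers from the previous snippet; only the per-block task body changes, so the permit bounds the expensive segment phase rather than the whole proof:

```rust
handles.push(tokio::spawn(async move {
    prove_segments(block).await;
    // Release the queue slot as soon as segment proving is done, so the next block's
    // segments can start while this block's (cheap) aggregations proceed.
    drop(permit);
    prove_block(block, prev).await;
    let _ = tx.send(());
}));
```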
b1_20.log