Can we move this to the halo2 repo?
Yes!
@NoCtrlZ I posted some CPU profiling graphs here: https://github.com/zcash/halo2/issues/418#issuecomment-989568648. It seems Domain::rotate_extended costs a lot of time. That may be another big issue we need to solve later.
@lispc Thank you for the nice feedback! I added it to the description.
Some of my initial ideas after looking at the code a bit, and things I plan on looking into further and implementing. I will have to implement some of these things to know how big a difference they make and whether they are worth it or not.
Most of the time is spent in evaluating the expressions, the FFT, and the multiexp. Shameless plug for my post about my libsnark optimizations, but much of what is said there should also be applicable here.
- A lot of the expression values are 0. This could probably be exploited for some moderate gains.
- The multiexp c value: I previously found 16 to work great for some reason, but of course it may be circuit dependent (a rough cost model is sketched below).
- Only a small part of each STEP_HEIGHT block of rows is actually used to constrain values (I assume a lot of these expressions will be 0, but not sure about that), so we may be able to skip a majority of the rows if that's the case. This works for the normal evaluations but not the ones in the extended domain over cosets (which is the majority of the prover time).
- Will have to test against actual transactions to see if some of these assumptions hold or not; currently most values are 0 because the bench circuit isn't filled with actual transactions, so I don't want to base too much on the current bench.
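To make the c point a bit more concrete, here is a rough Pippenger cost model. This is just an illustration, not halo2's actual heuristic; the round and bucket counts are simplified, and multiexp_adds is a made-up name.

```rust
/// Rough cost model for the Pippenger bucket width `c`: the multiexp runs
/// ceil(bits / c) rounds, and each round does roughly `n` bucket additions
/// plus about 2^c bucket-aggregation additions. (Simplified illustration only.)
fn multiexp_adds(n: usize, bits: usize, c: usize) -> usize {
    let rounds = (bits + c - 1) / c;
    rounds * (n + (1 << c))
}

fn main() {
    // Compare a few window sizes for a 2^20-point multiexp over ~256-bit scalars.
    for c in [8, 12, 16, 20] {
        println!("c = {:2}: ~{} additions", c, multiexp_adds(1 << 20, 256, c));
    }
}
```

With numbers in this range, a mid-sized window such as 16 tends to balance the per-round bucket work against the number of rounds, which may be why it worked well, but the sweet spot shifts with the circuit size.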
Current status: the biggest gains are from reusing intermediate results across expressions. This also impacts memory use a lot because of the temporary buffers, so I'm optimizing both at the same time. It is very easy to reduce memory by 50%-60% with practically no changes (without counting the memory savings from not using rotate_extended for the evaluations, which are also potentially very high). Optimizing across expressions is up to 5x faster than just hardcoding the expressions (which was already 3x faster). Currently I'm generating the h(x) poly largely on the fly, fast and memory efficient! (But there is even more room when trading in some performance for even further memory use reduction; how big that impact is I don't know yet.) A sketch of the reuse idea is below.
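As a minimal sketch of what "reusing intermediate results across expressions" could look like (the Expr type and eval function are made up for this example, u64 wrapping arithmetic stands in for the field, and this is not the actual halo2/zkevm-circuits code):

```rust
use std::collections::HashMap;

// A made-up expression type standing in for halo2's Expression<F>.
#[derive(Clone, PartialEq, Eq, Hash)]
enum Expr {
    Const(u64),
    Advice(usize), // column index
    Sum(Box<Expr>, Box<Expr>),
    Product(Box<Expr>, Box<Expr>),
}

// Evaluate an expression on one row, caching sub-expression results so that
// sub-trees shared across many gate expressions are only computed once per row.
fn eval(e: &Expr, row: &[u64], cache: &mut HashMap<Expr, u64>) -> u64 {
    if let Some(v) = cache.get(e) {
        return *v;
    }
    let v = match e {
        Expr::Const(c) => *c,
        Expr::Advice(col) => row[*col],
        Expr::Sum(a, b) => eval(a, row, cache).wrapping_add(eval(b, row, cache)),
        Expr::Product(a, b) => eval(a, row, cache).wrapping_mul(eval(b, row, cache)),
    };
    cache.insert(e.clone(), v);
    v
}

fn main() {
    // a * b shows up in both expressions; with the cache it is evaluated only once.
    let ab = Expr::Product(Box::new(Expr::Advice(0)), Box::new(Expr::Advice(1)));
    let e1 = Expr::Sum(Box::new(ab.clone()), Box::new(Expr::Const(1)));
    let e2 = Expr::Product(Box::new(ab), Box::new(Expr::Advice(2)));
    let row = [3u64, 5, 7];
    let mut cache = HashMap::new();
    println!("{} {}", eval(&e1, &row, &mut cache), eval(&e2, &row, &mut cache));
}
```

The same idea also saves memory: a shared intermediate only needs one temporary buffer instead of one per expression that uses it.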
Some stuff I quickly tested a bit to get more of a feeling for how this behaves:
@Brechtpd Thank you! Let me check and I am going to work on that 😎
Two important areas to optimize on the circuit side are:
We currently have a good idea of how to greatly reduce 2) by making use of the intermediate rows (the rows between the step heights): each row does a limited number of lookups, and the custom gates use those cells to verify the data we would normally look up.
Because lookups add 3 degrees to the input expression, this indirection, where a lookup always has a simple table cell as its input, also means we have a lower degree requirement (because the input degree is always 1).
For the custom gates it should always be possible to get the degree as low as needed by making use of intermediate cells (which I guess could even be done automatically), so the theoretical minimum degree we can achieve is strictly the one imposed by the lookups.
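As a toy example of the intermediate-cell trick (my own illustration, not a gate from the circuits):

```latex
% A single degree-5 gate (selector q times a degree-4 product)
q \cdot (a\,b\,c\,d - e) = 0
% can be replaced, using two intermediate cells t_1, t_2, by three degree-3 gates:
q \cdot (a\,b - t_1) = 0, \qquad
q \cdot (t_1\,c - t_2) = 0, \qquad
q \cdot (t_2\,d - e) = 0
```

So at the cost of extra advice cells the gate degree can be pushed down as far as needed, which is why the lookups end up being the binding constraint on the degree.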
The current minimum degree for lookups (between advice cells) is 4. This would put the lower limit on the extended domain at 4x the normal size. However, I think it may be possible to lower this to 3 by removing the extra terms required to have zero knowledge (which for zkEVM we don't need anyway):
Without zero knowledge I believe we can drop the (l_last + l_blind) term, which lowers the required degree by 1. Can anyone help confirm if this is correct or not? I would think this wouldn't have any impact on anything else, but I don't know much about the math so I don't know for sure.
If this is true the theoretical lowest degree needed is just 3, and so the extended domain would only be 2x the size of the normal domain; otherwise it's 4x, which is still pretty good I guess. :) Adding a selector to the lookups also increases the degree, so if we need the degree to stay at 3 we would need dedicated cells that do the lookups on each row without exception (which shouldn't be a problem).
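For reference, the back-of-the-envelope relation between the maximum gate degree d, the number of rows n = 2^k, and the extended domain size (my summary; the exact constant factors in halo2 may differ slightly):

```latex
\deg h(X) \approx (d - 1)\,n
\quad\Longrightarrow\quad
|\text{extended domain}| = 2^{\lceil \log_2 ((d - 1)\,n) \rceil}
= \begin{cases} 2n & \text{if } d = 3,\\ 4n & \text{if } d = 4. \end{cases}
```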
> Without zero knowledge I believe we can drop the (l_last + l_blind) term which lowers the required degree by 1. Can anyone help confirm if this is correct or not?
I think it won't break anything, but I would like to hear @therealyingtong's thoughts.
> Adding a selector to the lookups also increases the degree, so if we need the degree to be at 3 we would need dedicated cells that do the lookups on each row without exception (which shouldn't be a problem).
IIUC, ideally we don't need to always shuffle the input and the table in the same grand product; we can shuffle the input and the table separately, each with its own grand product, to take the degree down to 2. If we really need a selector on the input to make the layout easier, then I think this is a possible solution.
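A rough sketch of how I read that suggestion (my notation, and possibly not exactly the intended construction): keep two independent grand products, one relating the input column a to its shuffled copy a', and one relating the table column s to its shuffled copy s':

```latex
Z_a(\omega X)\,\bigl(a'(X) + \beta\bigr) = Z_a(X)\,\bigl(a(X) + \beta\bigr),
\qquad
Z_s(\omega X)\,\bigl(s'(X) + \gamma\bigr) = Z_s(X)\,\bigl(s(X) + \gamma\bigr)
```

Each constraint multiplies the running product by a single degree-1 factor, so the per-row degree is 2 before any selector is added.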
Upstream halo2 introduced an AST for polynomials and performance improved (details: https://github.com/zcash/halo2/pull/447#issuecomment-1016947456). We may need to merge the upstream changes later.
Assembly was introduced in https://github.com/appliedzkp/pairing/pull/5. It improved proving and setup time slightly, and CPU usage dramatically, as shown in https://github.com/appliedzkp/pairing/pull/5#issue-1109232627.
Assembly Optimization In The Future:
- add and sub
- double
- mul and square
The goff implementation will be a hint.
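For context, a minimal sketch (my own illustration, not the appliedzkp/pairing or goff code) of the limb arithmetic these routines operate on; the hand-written assembly replaces carry handling like this with explicit add/adc chains (and mul/adc chains for mul and square) so carries stay in the flags register:

```rust
/// 256-bit field elements stored as four 64-bit limbs (least significant first).
type Limbs = [u64; 4];

/// Plain-Rust 4-limb addition with carry propagation; the assembly version
/// does the same work with a single add/adc instruction chain.
fn add_limbs(a: Limbs, b: Limbs) -> (Limbs, bool) {
    let mut out = [0u64; 4];
    let mut carry = 0u64;
    for i in 0..4 {
        let sum = a[i] as u128 + b[i] as u128 + carry as u128;
        out[i] = sum as u64;
        carry = (sum >> 64) as u64;
    }
    (out, carry != 0)
}

fn main() {
    // 2^64 - 1 plus 1 carries into the second limb.
    let (sum, overflow) = add_limbs([u64::MAX, 0, 0, 0], [1, 0, 0, 0]);
    println!("{:?} overflow: {}", sum, overflow);
}
```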
Rebasing and introducing assembly reduced prover time by 72.02%: https://github.com/appliedzkp/zkevm-circuits/pull/302#issue-830046561
Optimizing polynomial evaluation and the FFT reduced prover time by 75.6%: https://github.com/appliedzkp/zkevm-circuits/pull/396#issuecomment-1096423824
Most of these improvements now apply to primitives that are implemented in halo2curves so the issue is no longer relevant.
I noted the improvement list so as not to forget:
- ff to assembly
Thank you Onur for giving ideas!