mcalancea opened 3 days ago
If we just want to capture this unassignment error, I think we can achieve it via a different path: in a unit test, initialize the vector with two different default values for the same execution trace, then compare the two witnesses, which should be identical. Any inequality indicates an unassignment bug. With that, we can catch the unassignment error at an early stage.
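A minimal sketch of that test idea, using hypothetical names rather than ceno's actual API: fill the witness buffer with two different default values, run the same assignment code over the same trace, and assert that the two witnesses are equal. Any cell the assignment code never writes keeps its fill value, so the two runs disagree exactly at the buggy cells.

```rust
// Hypothetical helper: builds a two-column witness from a trace, filling
// every cell with `fill` first and then overwriting the assigned cells.
fn witness_from_trace(trace: &[u64], fill: u64) -> Vec<u64> {
    let mut witness = vec![fill; trace.len() * 2];
    for (i, &v) in trace.iter().enumerate() {
        witness[2 * i] = v;
        // If the assignment code forgot this second column, the cell would
        // silently keep `fill`, and the test below would catch it.
        witness[2 * i + 1] = v.wrapping_add(1);
    }
    witness
}

#[test]
fn witness_is_fully_assigned() {
    let trace = vec![1u64, 2, 3];
    // Same trace, two different default fill values.
    let w0 = witness_from_trace(&trace, 0);
    let w1 = witness_from_trace(&trace, u64::MAX);
    assert_eq!(w0, w1, "unassigned witness cells detected");
}
```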
@hero78119
Thank you! This is still work in progress; I'm examining some ways to make it faster while keeping it clean/safe. My bench results are a bit better for some reason:
add_op_20/prove_add/prove_add_log2_20
time: [4.7420 s 4.7740 s 4.8031 s]
change: [+1.3563% +2.2286% +3.0911%] (p = 0.00 < 0.05)
Performance has regressed.
add_op_21/prove_add/prove_add_log2_21
time: [9.9516 s 10.189 s 10.450 s]
change: [-1.7437% +0.5352% +2.8403%] (p = 0.70 > 0.05)
No change in performance detected.
What regression percentage (evaluated on your setup) would you consider acceptable?
Could you please add to the PR description a very brief overview of what we are doing, and more importantly why?
(The what can be really brief, because the text of the PR answers that question in detail.)
Thanks!
This is a great effort! At the same time, I am also curious what the impact would be if we removed the MaybeUninit feature and used a raw vector, since we only have one implementation.
Yes, I have been rather suspicious of MaybeUninit. It looks like a bit of a minefield, and I'm not sure it's actually worth it.
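For illustration only (not ceno's actual code), the sketch below contrasts the MaybeUninit allocation pattern with a plain default-filled vector: with the former, a forgotten assignment is undefined behaviour, while with the latter it is merely a wrong but detectable value.

```rust
use std::mem::MaybeUninit;

// The unsafe pattern: allocate without initializing.
fn alloc_uninit(len: usize) -> Vec<MaybeUninit<u64>> {
    let mut v: Vec<MaybeUninit<u64>> = Vec::with_capacity(len);
    // SAFETY: MaybeUninit<u64> places no requirement on its contents, so
    // extending the length over uninitialized memory is allowed here;
    // reading the inner u64 before it is written is still UB.
    unsafe { v.set_len(len) };
    v
}

// The safe alternative: pay for a fill with a known default value. Any
// cell the assignment code misses now holds that value instead of
// garbage, so a differential test like the one above can detect it.
fn alloc_default(len: usize) -> Vec<u64> {
    vec![0u64; len]
}
```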
... I'm examining some ways to make it faster while keeping it clean/safe. My bench results are a bit better for some reason:
add_op_20/prove_add/prove_add_log2_20 time: [4.7420 s 4.7740 s 4.8031 s] change: [+1.3563% +2.2286% +3.0911%] (p = 0.00 < 0.05) Performance has regressed. ...
What regression percentage (evaluated on your setup) would you consider acceptable?
I noticed the difference between your results and the remote ceno benchmark server is probably down to the environment's number of cores. In a multicore environment (16x2 cores), removing MaybeUninit makes the regression more significant, e.g. ~+6%. To me, any regression percentage is hard to accept.
So I would suggest going another way: in a unit test, initialize with two different default values for the same witness, then check the witness polynomials for equality. This helps us identify the problem without sacrificing performance.
If you want some inspiration for how to do this kind of change, you might want to have a look at how plonky3 changed the organisation of its data compared to plonky2. (It might be a bit much to read, though.)
@hero78119 There is a draft of the approach based on testing here: https://github.com/scroll-tech/ceno/pull/597
To get back to optimal performance without unsafe types, there could be another constructor that accepts a vector or an iterator. Callers can build their Vec with e.g. flatten and collect.
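A rough sketch of what such constructors could look like, using hypothetical type and method names rather than ceno's real witness matrix type: one variant takes ownership of a pre-filled Vec, the other collects an iterator of rows via flatten and collect.

```rust
// Hypothetical illustration; the real matrix type and field names in ceno differ.
pub struct RowMajorMatrix<T> {
    values: Vec<T>,
    num_cols: usize,
}

impl<T> RowMajorMatrix<T> {
    /// Take ownership of an already-filled vector; its length must be a
    /// multiple of the number of columns.
    pub fn from_values(values: Vec<T>, num_cols: usize) -> Self {
        assert_eq!(values.len() % num_cols, 0);
        Self { values, num_cols }
    }

    /// Collect an iterator of rows into the backing vector.
    pub fn from_rows<R>(rows: impl IntoIterator<Item = R>, num_cols: usize) -> Self
    where
        R: IntoIterator<Item = T>,
    {
        let values: Vec<T> = rows.into_iter().flatten().collect();
        Self::from_values(values, num_cols)
    }
}
```

Since the caller produces the values, no uninitialized memory is ever exposed, and the cost is a single allocation plus the fill the caller performs anyway.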
See also the issue https://github.com/scroll-tech/ceno/issues/600
This is a great effort! At the same time, I am also curious what the impact would be if we removed the MaybeUninit feature and used a raw vector, since we only have one implementation.
I just did a very quick trial of this branch against the master branch with the riscv_add benchmark command on the remote ceno benchmark machine,
and the result turned out to be ~+6% slower on e2e latency.
I believe that with the fibonacci example it might be even slower. So this feature is still necessary for keeping performance high.