scroll-tech / ceno

Accelerate Zero-knowledge Virtual Machine by Non-uniform Prover Based on GKR Protocol
Apache License 2.0

(WIP) Refactor row major matrix #624

Open mcalancea opened 3 days ago

hero78119 commented 3 days ago

This is a great effort! At some point I was also curious what the impact would be if we removed the MaybeUninit feature and used a raw vector, since we only have one implementation.

I just did a very quick trial of this branch against the master branch on the riscv_add benchmark, on the remote ceno benchmark machine, with these commands:

### on master branch
cargo bench --bench riscv_add --package ceno_zkvm -- --save-baseline baseline

### on this branch
cargo bench --bench riscv_add --package ceno_zkvm -- --baseline baseline

and the result turns out to be ~6% slower in e2e latency.

### master branch
add_op_20/prove_add/prove_add_log2_20
                        time:   [1.5172 s 1.5615 s 1.6110 s]

### this branch
add_op_20/prove_add/prove_add_log2_20
                        time:   [1.6454 s 1.6695 s 1.6898 s]
                        change: [+4.1692% +6.3642% +8.4701%] (p = 0.00 < 0.05)

I believe that with the fibonacci example it might be even slower. So this feature is still necessary for keeping performance high.

hero78119 commented 3 days ago

If we are just talking about catching this unassignment error, I think we can achieve it by a different path: in a unit test, initialize the vector with two different default values for the same execution trace, then compare the two witnesses, which should be identical. Any inequality indicates an unassignment bug. With that, we can catch the unassignment error at an early stage.
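
For illustration, a minimal sketch of that double-default test idea. The names here (`RowMajorMatrix`, `new_with_default`, `assign_trace`) are hypothetical stand-ins, not the actual ceno API:

```rust
/// Hypothetical row-major witness matrix, pre-filled with an explicit default.
struct RowMajorMatrix {
    num_cols: usize,
    values: Vec<u64>, // stand-in for field elements
}

impl RowMajorMatrix {
    fn new_with_default(num_rows: usize, num_cols: usize, default: u64) -> Self {
        Self { num_cols, values: vec![default; num_rows * num_cols] }
    }

    fn row_mut(&mut self, r: usize) -> &mut [u64] {
        let start = r * self.num_cols;
        &mut self.values[start..start + self.num_cols]
    }
}

/// Toy witness assignment: writes every column of every row from the trace.
/// An implementation that forgot a column would be caught by the test below.
fn assign_trace(witness: &mut RowMajorMatrix, trace: &[(u64, u64)]) {
    for (r, &(a, b)) in trace.iter().enumerate() {
        let row = witness.row_mut(r);
        row[0] = a;
        row[1] = b;
        row[2] = a.wrapping_add(b);
    }
}

#[test]
fn witness_has_no_unassigned_cells() {
    let trace = vec![(1u64, 2u64), (3, 4), (5, 6)];

    // Same execution trace, two different default fill values.
    let mut w0 = RowMajorMatrix::new_with_default(trace.len(), 3, 0);
    let mut w1 = RowMajorMatrix::new_with_default(trace.len(), 3, u64::MAX);
    assign_trace(&mut w0, &trace);
    assign_trace(&mut w1, &trace);

    // Any cell the assignment never touched still holds its default value,
    // so it differs between the two runs and fails this equality check.
    assert_eq!(w0.values, w1.values);
}
```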

mcalancea commented 3 days ago

@hero78119

Thank you! This is still a work in progress; I'm examining some ways to make it faster while keeping it clean and safe. My bench results are a bit better for some reason:

add_op_20/prove_add/prove_add_log2_20
                        time:   [4.7420 s 4.7740 s 4.8031 s]
                        change: [+1.3563% +2.2286% +3.0911%] (p = 0.00 < 0.05)
                        Performance has regressed.
add_op_21/prove_add/prove_add_log2_21
                        time:   [9.9516 s 10.189 s 10.450 s]
                        change: [-1.7437% +0.5352% +2.8403%] (p = 0.70 > 0.05)
                        No change in performance detected.

What regression percentage (measured on your benchmark machine) would you consider acceptable?

matthiasgoergens commented 1 day ago

Could you please add a very brief overview to the PR description of what we are doing and, more importantly, why?

(The what can be really brief, because the text of the PR answers that question in detail.)

Thanks!

matthiasgoergens commented 1 day ago

> This is a great effort! At some point I was also curious what the impact would be if we removed the MaybeUninit feature and used a raw vector, since we only have one implementation.

Yes, I have been rather suspicious of MaybeUninit. It looks like a bit of a minefield, and I'm not sure it's actually worth it.
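
For context, the pattern under discussion looks roughly like the following (a rough sketch, not ceno's actual code): the zero-fill of a large buffer is skipped, in exchange for an unsafe obligation that every cell is written before the buffer is exposed.

```rust
use std::mem::MaybeUninit;

/// Allocate a column without zero-filling it, write every cell, and only
/// then expose it as an ordinary Vec<u64>.
fn build_column(len: usize) -> Vec<u64> {
    let mut buf: Vec<MaybeUninit<u64>> = Vec::with_capacity(len);
    // SAFETY: MaybeUninit<u64> may legally hold uninitialized bytes,
    // and `len` does not exceed the capacity just reserved.
    unsafe { buf.set_len(len) };

    for (i, cell) in buf.iter_mut().enumerate() {
        // Every element MUST be written before the conversion below.
        cell.write(i as u64);
    }

    // SAFETY: sound only because the loop above wrote every element; a
    // skipped index would be undefined behavior rather than a visibly
    // wrong default value.
    let ptr = buf.as_mut_ptr() as *mut u64;
    let (len, cap) = (buf.len(), buf.capacity());
    std::mem::forget(buf);
    unsafe { Vec::from_raw_parts(ptr, len, cap) }
}

fn main() {
    let col = build_column(4);
    assert_eq!(col, vec![0, 1, 2, 3]);
}
```

The benefit is avoiding one pass over a large buffer; the cost is that an unassigned cell becomes undefined behavior instead of a wrong-but-observable value, which is why it reads like a minefield.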

hero78119 commented 1 day ago

> ... I'm examining some ways to make it faster while keeping it clean and safe. My bench results are a bit better for some reason:
>
> add_op_20/prove_add/prove_add_log2_20
>                         time:   [4.7420 s 4.7740 s 4.8031 s]
>                         change: [+1.3563% +2.2286% +3.0911%] (p = 0.00 < 0.05)
>                         Performance has regressed.
> ...
>
> What regression percentage (measured on your benchmark machine) would you consider acceptable?

I noticed that the difference between your results and the remote ceno benchmark server is probably due to the number of cores in the environment. In a multicore environment (16x2 cores), removing MaybeUninit makes the regression more significant, e.g. ~+6%.

To me, any percentage of regression is hard to accept.

So I would suggest going another way: in a unit test, initialize with two different default values for the same witness, then check the witness polynomials for equality. This helps us identify the problem without sacrificing performance.

matthiasgoergens commented 1 day ago

If you want some inspiration for how to do this kind of change, you might want to have a look at how plonky3 changed the organisation of its data compared to plonky2. (It might be a bit much to read, though.)

naure commented 22 hours ago

@hero78119 There is a draft of the approach based on testing here: https://github.com/scroll-tech/ceno/pull/597

naure commented 22 hours ago

To get back to optimal performance without unsafe types, there could be another constructor that accepts a vector or an iterator. Callers can build their Vec with e.g. flatten and collect.

See also the issue https://github.com/scroll-tech/ceno/issues/600
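
A minimal sketch of what such a constructor could look like; the names and signatures here are hypothetical, not the actual ceno API:

```rust
/// Hypothetical row-major matrix that takes ownership of a caller-built,
/// already-populated buffer, so no default fill (and no MaybeUninit) is needed.
struct RowMajorMatrix<T> {
    num_cols: usize,
    values: Vec<T>,
}

impl<T> RowMajorMatrix<T> {
    /// Wrap an already-populated, row-major buffer.
    fn from_values(num_cols: usize, values: Vec<T>) -> Self {
        assert_eq!(values.len() % num_cols, 0, "values must fill whole rows");
        Self { num_cols, values }
    }

    /// Same idea, but collecting from any iterator of rows.
    fn from_rows<I, R>(num_cols: usize, rows: I) -> Self
    where
        I: IntoIterator<Item = R>,
        R: IntoIterator<Item = T>,
    {
        let values: Vec<T> = rows.into_iter().flatten().collect();
        Self::from_values(num_cols, values)
    }
}

fn main() {
    // Callers produce the values themselves, e.g. via flatten + collect.
    let m = RowMajorMatrix::from_rows(3, (0u64..4).map(|r| [r, r + 1, r + 2]));
    assert_eq!(m.num_cols, 3);
    assert_eq!(m.values.len(), 12);
}
```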