noir-lang / zk_bench

Benchmark circom and noir on some standard primitives from circomlib/noir stdlib
MIT License

Remove Noir benchmarking #5

Open TomAFrench opened 2 months ago

TomAFrench commented 2 months ago
### Prerequisites
- [ ] https://github.com/AztecProtocol/aztec-packages/pull/6907
- [ ] https://github.com/noir-lang/noir/issues/4794

Currently nargo handles proving/verifying (although mostly as a passthrough to the backend). As part of https://github.com/noir-lang/noir/issues/4960 we're removing this, so it no longer makes sense to benchmark a "Noir proof". Instead, backends should implement benchmarks for generating proofs from ACIR (started in https://github.com/AztecProtocol/aztec-packages/pull/6155).

This repository then only needs to produce a report for circom proofs for a set of programs equivalent to the Noir programs which are used for benchmarks.

TomAFrench commented 2 months ago

I do think that we won't really have need for Rust in this repository. We could likely get away with just a shell script that iterates through the circom projects and makes the necessary calls to circom/snarkjs.
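
A minimal sketch of what such a script could look like, assuming a layout where each benchmark lives in its own directory containing a `circuit.circom` and an `input.json`, and a Powers of Tau file has already been downloaded. The directory names, the `.ptau` path, and timing via the shell's `time` are all assumptions for illustration, not an agreed design:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical layout: one directory per benchmark under circuits/,
# each containing circuit.circom and input.json; pot.ptau fetched beforehand.
PTAU=pot.ptau

for dir in circuits/*/; do
  name=$(basename "$dir")
  echo "=== $name ==="

  # Compile the circuit to R1CS plus a wasm witness generator.
  circom "$dir/circuit.circom" --r1cs --wasm -o "$dir"

  # Circuit-specific Groth16 setup (no contributions; fine for benchmarking).
  snarkjs groth16 setup "$dir/circuit.r1cs" "$PTAU" "$dir/circuit.zkey"
  snarkjs zkey export verificationkey "$dir/circuit.zkey" "$dir/vkey.json"

  # Witness generation, then proving and verifying, each timed.
  snarkjs wtns calculate "$dir/circuit_js/circuit.wasm" "$dir/input.json" "$dir/witness.wtns"
  time snarkjs groth16 prove "$dir/circuit.zkey" "$dir/witness.wtns" "$dir/proof.json" "$dir/public.json"
  time snarkjs groth16 verify "$dir/vkey.json" "$dir/public.json" "$dir/proof.json"
done
```

Collecting the timings into whatever report format we settle on could then be layered on top of this loop.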

Savio-Sou commented 1 month ago

Keeping all benchmarks in one place would be very helpful in minimizing barriers to accessing information and modifying benchmarks, though.

As it stands, this repo feels like a great framework for less technical folks (e.g. me, Dev Rel) to modify and extend tests directly, without needing to funnel through Engineering. I'm not sure that carries over if folks instead have to touch the more complicated Barretenberg repo and ACVM folder for changes.

If the intention is to minimize backend-specific code under noir-lang, transferring this repo as-is to AztecPackages and continuing from there seems like a better way out.

Savio-Sou commented 1 month ago

Circling back from further conversations, the benchmarking suites in aztec-packages / noir are useful for development of the language and barretenberg itself.

Using the same suites for community-facing benchmarks avoids the need to maintain multiple suites serving largely similar functionalities.

I've updated the body of this Issue with tasks to bring those suites into a useful state for community-facing benchmarking. We'll aim to tackle this Issue only once they are cleared.