mmaker / zkalc

A web tool that helps you compare and visualize the performance of cryptographic operations
https://zka.lc
BSD 3-Clause "New" or "Revised" License
88 stars 14 forks

Unify perf pipeline between gnark and arkworks #13

Closed asn-d6 closed 1 year ago

asn-d6 commented 1 year ago

In arkworks, we run all the benchmarks (even ones not currently used by zkalc) and dump them into a single file, which criterion.py then splits by curve.

In gnark, we create one output file per curve, and golang.py must then be run individually on each file.

We should decide what we want our pipeline to look like.

mmaker commented 1 year ago

More details on this issue. Once we run make inside backend, all data is stored in perf/data, where files follow the structure machine/library.json or machine/library-curve.json.

From there, we must produce the frontend JSONs stored in frontend/data, where files follow the structure curve/library/machine.json.
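A minimal sketch of that path remapping (the function name is illustrative; the real conversion lives in perf/benchmark_parser and does more than rename paths):

```python
from pathlib import Path

def frontend_path(perf_path: Path) -> Path:
    """Map perf/data/<machine>/<library>-<curve>.json to
    frontend/data/<curve>/<library>/<machine>.json (illustrative only)."""
    machine = perf_path.parent.name
    # Split on the first hyphen: the library name comes first,
    # the curve name (which may itself contain hyphens) follows.
    library, _, curve = perf_path.stem.partition("-")
    return Path("frontend/data") / curve / library / f"{machine}.json"

print(frontend_path(Path("perf/data/m1/gnark-bls12-381.json")))
# → frontend/data/bls12-381/gnark/m1.json
```

Note that a plain machine/library.json (no curve suffix) would need the curve supplied separately, which is exactly the non-uniformity discussed below.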

Right now the conversion is done by a benchmark parser that lives in perf/benchmark_parser. As noted in the readme, this step is performed manually and is error-prone; we should make it automatic.

To complicate things, this parser is sometimes fed the full file machine/library-curve.json (the case for the Go files), and sometimes machine/library.json after filtering for the pertinent curve's results (with a good old grep; the case for criterion files). This lack of uniformity is not good either. We could either modify our codebases to filter properly by curve, or have zkalc.sh prepend more information about the curve we are working on.
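The grep step above could instead be done in the parser itself. A rough sketch, assuming benchmark entries carry a name that mentions the curve (the actual criterion JSON schema differs and is richer than this):

```python
def split_by_curve(results: list[dict], curves: list[str]) -> dict[str, list[dict]]:
    """Group benchmark entries by the curve mentioned in their name.

    Illustrative replacement for the manual grep: instead of filtering
    machine/library.json externally, the parser buckets entries itself.
    """
    out: dict[str, list[dict]] = {c: [] for c in curves}
    for entry in results:
        for curve in curves:
            if curve in entry.get("name", ""):
                out[curve].append(entry)
                break  # each entry belongs to at most one curve
    return out

bench = [{"name": "msm/bls12_381", "time": 1.2},
         {"name": "msm/bn254", "time": 0.9}]
print(split_by_curve(bench, ["bls12_381", "bn254"]))
```

With something like this in place, both the arkworks and gnark inputs could flow through a single code path, whether they arrive pre-split per curve or as one combined file.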

mmaker commented 1 year ago

The pipeline is now in place after the merge with zkharness' bench-data! :)