Closed mateuszmlc closed 1 year ago
Benchmark runs multiple times and the average (mean) and std deviation and output. Number of iterations should be configurable.
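A minimal sketch of the requested behavior: time a closure over a configurable number of iterations and report the mean and standard deviation. The function name `run_benchmark` is hypothetical, not the crate's actual API.

```rust
use std::time::{Duration, Instant};

/// Run `f` the given number of times and return (mean, std deviation)
/// of the individual run times. `iterations` must be > 0.
fn run_benchmark<F: FnMut()>(mut f: F, iterations: usize) -> (Duration, Duration) {
    let times: Vec<f64> = (0..iterations)
        .map(|_| {
            let start = Instant::now();
            f();
            start.elapsed().as_secs_f64()
        })
        .collect();

    let mean = times.iter().sum::<f64>() / times.len() as f64;
    // Population variance over the recorded run times.
    let variance = times.iter().map(|t| (t - mean).powi(2)).sum::<f64>() / times.len() as f64;
    (
        Duration::from_secs_f64(mean),
        Duration::from_secs_f64(variance.sqrt()),
    )
}

fn main() {
    // black_box keeps the compiler from optimizing the work away.
    let (mean, std_dev) = run_benchmark(|| { std::hint::black_box((0..1000).sum::<u64>()); }, 10);
    println!("mean: {:?}, std dev: {:?}", mean, std_dev);
}
```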
The latest updates on your projects. Learn more about Vercel for Git ↗︎

| Name | Status | Preview | Comments | Updated (UTC) |
|---|---|---|---|---|
| zk-benchmarks | ❌ Failed (Inspect) | | | Sep 15, 2023 2:10am |
Yeah, or named parameters like Clap. Let's leave that for another day though; this is fine for now.
Is there a way to increase the timeout? Maybe globally?
Just to clarify, it doesn't time out the function (it doesn't stop a run in progress). If the first run (out of 10) finishes in 6 seconds, it launches another one, bringing the total to 12 seconds, at which point it doesn't start any new runs.
If you specify the iterations manually in the parameter definition, the "timeout" doesn't apply: the function runs exactly that many times, however long it takes.
Being able to change this makes sense; it would be more convenient for users than specifying iterations manually for each parameter. I'll make the change and add it to the BenchmarkConfig struct.
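A hypothetical sketch of how the setting could live on `BenchmarkConfig`; the field names here are assumptions, not the crate's actual definition.

```rust
use std::time::Duration;

/// Sketch of a global benchmark configuration (field names assumed).
pub struct BenchmarkConfig {
    /// Upper bound on runs per benchmark function (default: 10).
    pub max_iterations: u32,
    /// Stop launching new runs once total elapsed time exceeds this (default: 10 s).
    pub timeout: Duration,
}

impl Default for BenchmarkConfig {
    fn default() -> Self {
        Self {
            max_iterations: 10,
            timeout: Duration::from_secs(10),
        }
    }
}

fn main() {
    let cfg = BenchmarkConfig::default();
    println!("max_iterations: {}, timeout: {:?}", cfg.max_iterations, cfg.timeout);
}
```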
Existing benchmarks will fail; we need to wait for #28 to be merged and then make a few small changes.
By default a benchmark function will run up to 10 times, stopping once it's been executing for longer than 10 seconds.
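The default stopping rule above can be sketched as a loop that launches up to `max_runs` runs but starts no new run once the total elapsed time exceeds `timeout`; a run already in progress is never cut short. This is an illustration of the described behavior, not the crate's actual code.

```rust
use std::time::{Duration, Instant};

/// Run `f` up to `max_runs` times, checking the total elapsed time
/// *before* each run; returns how many runs were actually completed.
fn run_with_timeout<F: FnMut()>(mut f: F, max_runs: u32, timeout: Duration) -> u32 {
    let start = Instant::now();
    let mut runs = 0;
    while runs < max_runs && start.elapsed() < timeout {
        f(); // never interrupted mid-run
        runs += 1;
    }
    runs
}

fn main() {
    // Scaled-down version of the 6 s / 12 s example: 6 ms runs with a
    // 10 ms budget stop after the run that pushes the total past 10 ms.
    let runs = run_with_timeout(
        || std::thread::sleep(Duration::from_millis(6)),
        10,
        Duration::from_millis(10),
    );
    println!("completed {runs} runs");
}
```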
You can change a `("1 byte", 1)` parameter to `(100, "1 byte", 1)` to always run it 100 times. If you do that, you also need to update all the other parameters, as they need to be the same type. If we wanted the user to be able to change the default on just one parameter, we would need them to add `.into()` to each parameter: `[("1 byte", 1).into(), (100, "10 bytes", 10).into()]`. We could also simplify this with proc macros by providing each parameter in a separate attribute, instead of providing an array to a single `#[bench([...])]` attribute. Then we can append the `.into()`s automatically.
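A hypothetical sketch of the `.into()` mechanism discussed above: both a `(name, size)` pair and an `(iterations, name, size)` triple convert into a single `Param` type, so defaults and per-parameter overrides can mix in one list. The `Param` type and its fields are assumptions for illustration.

```rust
/// Assumed parameter type; `iterations: None` means "use the global default".
#[derive(Debug, PartialEq)]
struct Param {
    iterations: Option<u32>,
    name: &'static str,
    size: u64,
}

// (name, size) keeps the default iteration count.
impl From<(&'static str, u64)> for Param {
    fn from((name, size): (&'static str, u64)) -> Self {
        Param { iterations: None, name, size }
    }
}

// (iterations, name, size) overrides it for this parameter only.
impl From<(u32, &'static str, u64)> for Param {
    fn from((iterations, name, size): (u32, &'static str, u64)) -> Self {
        Param { iterations: Some(iterations), name, size }
    }
}

fn main() {
    // Mixed defaults and overrides in one list, as discussed:
    let params: Vec<Param> = vec![("1 byte", 1).into(), (100, "10 bytes", 10).into()];
    assert_eq!(params[0].iterations, None);
    assert_eq!(params[1].iterations, Some(100));
    println!("{params:?}");
}
```

A proc macro that takes each parameter as a separate attribute could emit exactly these `.into()` calls, which is what would let users omit them at the call site.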