jon-chuang opened 3 years ago
I think we should also better separate out MSM benches in the benchmarks, the time delays are way too long for benching field arithmetic changes.
Actually, I've come to realise that the long benching times come not from any inadequacy of the benchmarking framework, but from the fact that generating the random data is done on a single thread and takes a very long time.
My suggestion is to use constant data which we can generate in a `data.rs` file.
Alternatively, we can use an `FnOnce`.
Ah, that's unfortunate, but we can just parallelize the generation of the data, or maybe just sample a single input and then mutate it somehow (e.g. double it) to obtain the rest of the inputs?
doubling it isn't great, because that introduces data dependencies into the core benchmark
No I mean, instead of doing
let mut input = Vec::new();
for _ in 0..1000 {
    input.push(G::rand(&mut rng));
}
bencher.iter(...);
we do
let mut g = G::rand(&mut rng);
let mut input = Vec::new();
for _ in 0..1000 {
    input.push(*g.double_in_place());
}
bencher.iter(...);
I've used `lazy_static` and it's insanely fast now, so I think no one will have any more complaints. I'll push changes soon.
Oh, but for MSM we might have to use a pseudo-uniform distribution, generated by multiplying the generator by random scalars in a small range.
Actually, I realised that for the particular case of MSM, the bencher by default takes at least 300 samples, so it's just completely inadequate.
Benchmarking currently uses a statistical confidence criterion to determine the number of iterations. However, this can lead to extremely long-running benchmarks, so I suggest capping the number of iterations with a max timeout.
This is the standard approach in C++ and Go. In that paradigm, fast results for fast developer iteration matter more than statistical significance.