terror opened this issue 2 years ago
If you don't need any statistical analysis then you don't need criterion, and you are probably better off timing the function yourself. You can do this in a benchmark program that lives alongside your ordinary criterion benchmarks.
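Something like this, for instance (a rough sketch of a standalone timing harness, not part of criterion's API; `process_data` is just a placeholder for the function under test):

```rust
// A separate benchmark target (e.g. benches/one_shot.rs with `harness = false`
// in Cargo.toml) that times a single run with plain wall-clock measurement.
use std::time::Instant;

// Placeholder for the expensive function you actually want to measure.
fn process_data() {
    // ... expensive initialization + data processing ...
}

fn main() {
    let start = Instant::now();
    process_data();
    let elapsed = start.elapsed();
    println!("process_data took {:?}", elapsed);
}
```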
Would this be a big change to support in criterion? It would be nice to have all our benchmarks be criterion benchmarks, for consistency, and since there are other useful features of criterion that we would need to replicate, in particular tracking whether benchmarks are getting better or worse over time.
The analysis machinery doesn't work with only a single data point. You can get around that by duplicating that data point but that's kinda hacky. I'll still review PRs related to this, though. If you can find a robust implementation then I'll gladly accept it.
Hi! I have the same problem, and I wanted to ask a naive question: does the statistical analysis break if you run, say, 3 samples? Maybe the threshold could be lowered without damaging the internal machinery. The computationally expensive cases would still be a problem, but a much smaller one.
Currently working on a project where we need to benchmark a function that performs a very expensive initialization step up front and then processes data. Because the initialization is slow and the data processing itself is repetitive, we would get the information we need by running the function a single time rather than the current minimum number of times (10).
Would it be possible to allow passing in a smaller sample size, in our case 1? We don't need statistical analysis, because our benchmark measures bytes processed per second, so we can just divide total bytes processed by runtime to get an accurate indication of performance.
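For example, something along these lines (a rough sketch of the calculation we have in mind; `process_data` is just a placeholder for our function, not anything from criterion):

```rust
use std::time::Instant;

// Placeholder: runs the expensive init once, then processes the data,
// returning the number of bytes handled.
fn process_data() -> u64 {
    // ... expensive initialization + repetitive processing ...
    1_000_000_000
}

fn main() {
    let start = Instant::now();
    let bytes = process_data();
    let secs = start.elapsed().as_secs_f64();
    // Throughput from a single run: total bytes divided by elapsed time.
    println!("throughput: {:.2} MiB/s", bytes as f64 / (1024.0 * 1024.0) / secs);
}
```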