Closed: bjodah closed this issue 6 years ago
One downside of these benchmarks is that they are somewhat contrived, and the sheer number of them might hurt the brevity of the results.
I haven't evaluated them. I just noticed that we had some benchmarks that have probably not been looked at in a long time. I don't know if they are worth adding. @asmeurer or @certik may know better.
I'm inclined to believe that the meijerint benchmarks are useful, since they were written by the author of the algorithm. I know a lot of the benchmarks that are there can stand to be rewritten, though.
Not sure about the number. Does ASV handle a large number of benchmarks well? It has a feature to only show things that have slowed down significantly, right?
It can run as many benchmarks as you want. You can start with a coarse mesh over the commit history and then do a finer mesh, or a bisection, on specific benchmarks and commit ranges to narrow down the offending commits. It can also skip benchmark runs that previously failed or succeeded.
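For concreteness, that workflow looks roughly like this with the asv CLI (the commit range and benchmark name below are just placeholders, and flag spellings should be double-checked against `asv run --help` for your version):

```bash
# Coarse pass: sample ~20 commits across the whole history,
# skipping runs that already succeeded in a previous invocation.
asv run --steps 20 --skip-existing-successful ALL

# Finer pass on a suspicious commit range, restricted to matching benchmarks
# (the regex and range here are placeholders).
asv run --bench meijerint "v0.7.5..master"

# Bisect within that range to find the commit that introduced the slowdown.
asv find "v0.7.5..master" benchmarks.TimeMeijerint.time_integrate
```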
The dev version of asv can list benchmarks whose performance regressed, along with the commit ranges where this occurred. The meijerg integration here might also be better suited to a parameterized benchmark, since it seems that only the function being integrated varies (see the sketch below).
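As a sketch of what I mean (the class name and integrands are illustrative, not the actual benchmark file), asv's `params`/`param_names` mechanism would make each integrand show up as its own entry in the results:

```python
# Hypothetical parameterized meijerg benchmark; the integrands are
# placeholders, only the structure matters here.
from sympy import Symbol, besselj, exp, integrate, oo, sin

x = Symbol('x')


class TimeMeijerint:
    # asv runs time_integrate once per entry in `params`,
    # reporting each integrand as a separate result.
    params = [exp(-x**2), sin(x)/x, besselj(1, x)/x]
    param_names = ['integrand']

    def time_integrate(self, integrand):
        integrate(integrand, (x, 0, oo))
```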
See #7