Open tgymnich opened 2 years ago
In the package OptimizationProblems.jl we have a list of optimization models that could potentially serve as benchmarks. Many of the functions there are scalable. Could that be useful for benchmarks?
AirspeedVelocity.jl is designed for this type of thing in Julia. All you need is to add a benchmark/benchmarks.jl file that defines a `const SUITE = BenchmarkGroup()`, add this action, and a benchmark comparison comment will be added to every single PR. It has been really useful for catching performance regressions before they appear, and for evaluating readability-performance tradeoffs. The load-time measurements are pretty useful as well.
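A minimal sketch of what such a benchmark/benchmarks.jl file could look like, using BenchmarkTools.jl; the `rosenbrock` function and the group names are hypothetical placeholders, not part of any of the packages mentioned:

```julia
# benchmark/benchmarks.jl -- minimal suite definition for AirspeedVelocity.jl,
# which looks for a top-level `SUITE::BenchmarkGroup`.
using BenchmarkTools

# Hypothetical scalable objective, standing in for a real model.
rosenbrock(x) = sum(100 * (x[i+1] - x[i]^2)^2 + (1 - x[i])^2 for i in 1:length(x)-1)

const SUITE = BenchmarkGroup()
SUITE["rosenbrock"] = BenchmarkGroup()
# One benchmark per problem size, so regressions show up across scales.
for n in (10, 100, 1000)
    SUITE["rosenbrock"]["n=$n"] = @benchmarkable rosenbrock(x) setup = (x = randn($n))
end
```

Since the scalable problems in OptimizationProblems.jl take a size parameter, a loop like this could cover several sizes with very little code.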
So far it is installed on SymbolicRegression.jl, DynamicExpressions.jl, and SymbolicUtils.jl.
It would be nice if we had a small test suite to measure possible performance regressions, and used something like https://github.com/benchmark-action/github-action-benchmark to alert us of possible regressions and track the benchmark numbers over time.