Update: the HSF Data Analysis WG is considering defining benchmark PWA analyses for comparing different PWA fitter frameworks. Once those benchmarks are defined, they can be addressed in this issue.
Also worth considering: host these benchmark tests in a separate repository, otherwise they slow down the CI of TensorWaves and could clutter the repo with a lot of additional testing code. Alternatively, the tests could be run only upon merging into the stable branch.
@Leongrim this may be a nice way to record performance over time
https://github.com/benchmark-action/github-action-benchmark
https://github.com/marketplace/actions/continuous-benchmark
It also supports pytest-benchmark
It would be nice to profile/monitor this in a standardised way, so that we can see whether there are improvements upon each PR. The benchmarks should be similar in structure, probably making use of some shared façade functions (see the sketch further below).
What we probably want as input:
The recipe file is generated with the expertsystem based on this input. All the rest (e.g. which amplitude generator to use) should be deduced from the recipe.
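A minimal sketch of what such a shared façade could look like with pytest-benchmark. All function names here (`load_recipe`, `build_intensity`, `run_benchmark_case`) are hypothetical placeholders, not the actual TensorWaves/expertsystem API:

```python
"""Sketch of a shared benchmark façade; all names are hypothetical placeholders."""
from pathlib import Path
from typing import Callable, Iterable


def load_recipe(recipe_path: Path) -> dict:
    # Placeholder: parse the recipe file produced by the expertsystem
    return {"recipe": str(recipe_path)}


def build_intensity(recipe: dict) -> Callable[[Iterable[float]], float]:
    # Placeholder: which amplitude generator to use would be deduced
    # from the recipe itself
    return lambda data: sum(x * x for x in data)


def run_benchmark_case(recipe_path: Path) -> None:
    """Single entry point that every benchmark calls, so all cases share the same structure."""
    recipe = load_recipe(recipe_path)
    intensity = build_intensity(recipe)
    intensity(float(i) for i in range(100_000))  # stand-in for data generation + fit


def test_benchmark_case(benchmark) -> None:
    # pytest-benchmark injects the `benchmark` fixture, repeats the call,
    # and records timing statistics
    benchmark(run_benchmark_case, Path("some_recipe.yml"))
```

Running `pytest --benchmark-json=output.json` then produces a JSON file that e.g. github-action-benchmark (see the links above) can track over time.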
Some potential tools:
- pytest-benchmark (nicely integrated with pytest, though seems to be more for micro-benchmarks)
- pycallgraph (seems rather outdated)
- timeit (see the quick sketch below)
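For a quick, standalone timing check without the pytest machinery, the standard-library timeit module already goes a long way. A minimal sketch, where the workload is a stand-in rather than an actual TensorWaves call:

```python
"""Quick timing check with the standard library; the workload is a stand-in."""
import timeit


def run_benchmark_case() -> None:
    # Stand-in workload; a real benchmark would call the shared façade
    sum(i * i for i in range(100_000))


if __name__ == "__main__":
    # repeat() returns one total time per run; divide by `number` for the
    # per-call cost and report the best run to reduce noise
    timings = timeit.repeat(run_benchmark_case, repeat=5, number=10)
    print(f"Best time per call: {min(timings) / 10:.4f} s")
```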