ComPWA / tensorwaves

Python fitter package for multiple computational back-ends
https://tensorwaves.rtfd.io
Apache License 2.0

Provide benchmark tests #103

Closed redeboer closed 2 years ago

redeboer commented 4 years ago

Would be nice to profile/monitor this in a standardised way, so that we can see whether each PR brings improvements. The benchmarks should be similar in structure, probably making use of some shared façade functions.

What we probably want as input:

The recipe file is generated with the expertsystem based on this input. All the rest (e.g. which amplitude generator to use), should be deduced from the recipe.

Some potential tools:

redeboer commented 3 years ago

Update: the HSF Data Analysis WG is considering defining benchmark PWA analyses for comparing different PWA fitter frameworks. Once those benchmarks are defined, they can be addressed in this issue.

redeboer commented 3 years ago

Also worth considering: host these benchmark tests in a separate repository, since otherwise they slow down the CI of TensorWaves and could clutter the repo with a lot of additional testing code. Alternatively, run the tests only upon merging into the stable branch.

redeboer commented 2 years ago

@Leongrim this may be a nice way to record performance over time: https://github.com/benchmark-action/github-action-benchmark (https://github.com/marketplace/actions/continuous-benchmark). It also supports pytest-benchmark.