NeuralEnsemble / elephant

Elephant is the Electrophysiology Analysis Toolkit
http://www.python-elephant.org
BSD 3-Clause "New" or "Revised" License

ENH: integrate automated benchmarking tool #41

Open btel opened 9 years ago

btel commented 9 years ago

pandas uses vbench to run benchmarks on each new version of the code. This way, performance regressions are detected early. Something similar could be useful for elephant.
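For illustration, here is a minimal sketch of the kind of per-commit regression check that a tool like vbench automates, written with only `timeit` from the standard library. The choice of `elephant.statistics.isi`, the data size, the baseline value, and the 1.5x threshold are all assumptions for the sketch, not an actual Elephant benchmark:

```python
# Hedged sketch: compare the current timing of one Elephant function
# against a stored baseline and flag a possible regression.
import timeit

setup = """
import numpy as np
import quantities as pq
from neo import SpikeTrain
from elephant.statistics import isi

times = np.sort(np.random.default_rng(0).uniform(0, 10, size=100_000))
st = SpikeTrain(times * pq.s, t_stop=10 * pq.s)
"""

# Best-of-5 wall-clock time for 10 calls, converted to seconds per call.
current = min(timeit.repeat("isi(st)", setup=setup, repeat=5, number=10)) / 10

BASELINE = 0.010  # seconds per call, recorded on a reference commit (assumed)
if current > 1.5 * BASELINE:
    raise RuntimeError(f"possible regression: {current * 1e3:.2f} ms per call")
print(f"isi on 100k spikes: {current * 1e3:.2f} ms per call")
```

A CI job would re-run this on each commit and store the timings, which is essentially what vbench did for pandas.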

More information:

dizcza commented 3 years ago

I've looked into CircleCI's support for automated unit-test benchmarks - there is none available (Jan 2021). Developers have to build such benchmarks themselves. Typically, this is done as a group of tests covering the 7 to 10 most-used functionalities in a package, with the timings printed to stdout or written to an artifact file that can be downloaded from each build. There is no software that automatically parses such timing output and makes nice plots with month-year on the X axis and test duration on the Y axis.
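As a concrete illustration of such a timing group (a sketch, not part of Elephant's test suite - the selection of functions and the parameters are assumptions), something like this could print one timing per function to stdout for CI to capture as an artifact:

```python
# Hedged sketch: time a few commonly used Elephant functions and print
# one line per function, suitable for capture as a CI build artifact.
import timeit

import numpy as np
import quantities as pq
from neo import SpikeTrain
from elephant.kernels import GaussianKernel
from elephant.statistics import instantaneous_rate, isi, mean_firing_rate

rng = np.random.default_rng(0)
st = SpikeTrain(np.sort(rng.uniform(0, 10, size=50_000)) * pq.s,
                t_stop=10 * pq.s)

# Illustrative selection; a real suite would cover the most-used functions.
benchmarks = {
    "isi": lambda: isi(st),
    "mean_firing_rate": lambda: mean_firing_rate(st),
    "instantaneous_rate": lambda: instantaneous_rate(
        st, sampling_period=10 * pq.ms, kernel=GaussianKernel(50 * pq.ms)),
}

for name, fn in benchmarks.items():
    best = min(timeit.repeat(fn, repeat=3, number=1))  # best of 3 runs
    print(f"{name:25s} {best * 1e3:10.2f} ms")
```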

Moritz-Alexander-Kern commented 10 months ago

Firstly, I'd like to mention that there's now a GitHub Action available for continuous benchmarking; see https://github.com/benchmark-action/github-action-benchmark

This action could be a useful addition to our testing pipeline, automating the process of running benchmarks with each new version of the code.
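If I read the action's documentation correctly, it can ingest pytest-benchmark JSON output (its `pytest` tool option). Under that assumption, a benchmark could be an ordinary test like the sketch below; the function choice and data size are illustrative, not an existing Elephant test:

```python
# test_bench_statistics.py -- hedged sketch of a pytest-benchmark test.
# Running `pytest --benchmark-json=output.json` produces JSON that
# github-action-benchmark could then track across commits (assumed setup).
import numpy as np
import quantities as pq
from neo import SpikeTrain
from elephant.statistics import mean_firing_rate


def test_mean_firing_rate(benchmark):
    rng = np.random.default_rng(0)
    times = np.sort(rng.uniform(0, 10, size=100_000))
    st = SpikeTrain(times * pq.s, t_stop=10 * pq.s)
    # The `benchmark` fixture from pytest-benchmark calls the function
    # repeatedly and records timing statistics.
    result = benchmark(mean_firing_rate, st)
    assert result.magnitude > 0
```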

However, as of now, there hasn't been a specific focus on developing benchmarks for Elephant within the automated testing suite. Developing benchmarks requires a thoughtful approach; ideally, we would need tests that provide meaningful insight into the performance of Elephant's functionalities.

While there isn't a dedicated effort towards this yet, this is an open invitation to anyone. If you have the time and interest, contributing benchmarks for Elephant, or perhaps benchmarks derived from real-world use cases, could kick off this project.

Moreover, if any of you have encountered performance regressions in the past, please take a moment to report them. Having real-world cases as precedents will also give us a good starting point.

If you would like to develop a benchmark, please reach out to us; you are most welcome!