Closed: sfmig closed this pull request 6 months ago.
Codecov report: patch and project coverage have no change. Comparison is base (8665e74) 0.00% compared to head (c100671) 0.00%. Report is 3 commits behind head on main.
superseded by #94
This PR adds `asv` benchmarks to time the workflow defined in #15. The benchmarks time the full workflow and also its individual steps. They are grouped under classes: the `time_` methods of one class share setup and teardown functions and other benchmark attributes (a minimal sketch of this layout is included at the end of this description).

Some points where feedback is welcome:
- I am not sure we really need a teardown function after each benchmark... 🤔 (right now it just removes the cellfinder-benchmark temporary directory)
- I was looking for a way of defining `setup` functions for each benchmark while abstracting out the common bits to avoid too much repetition. I ended up with an approach that uses the `@classmethod` decorator, but I am not sure it is the best way (I'm not super familiar with OOP). I also considered `setup_cache`, but I think it serves a different purpose (mainly running computationally intensive setup methods only once). I basically wanted to do what is suggested in the docs, but I am not sure how. Any thoughts welcome! (One possible base-class alternative is sketched after this list.)
- Should we have the benchmarks in a different repo? I know we want to minimise the number of repos, so maybe they are fine here. I don't see strong arguments for or against apart from that one.
- Is it repetitive to benchmark the whole workflow and also its individual parts? Should we just benchmark the whole workflow, and rely on profiling when issues are found?
- The main advantage of `asv` seems to be its ability to identify performance regressions across commits... but here the commits we track against would be those from the workflows repo. I understand it would be more interesting for us to track performance regressions across cellfinder commits, right? I think it could be done, but we may need to rethink the structure (a sketch of the relevant `asv.conf.json` fields is included below).
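For reference, below is a minimal sketch of the class layout described above. It is illustrative only: the module name, class and method names, and the placeholder `run_workflow` entry point are assumptions, not the actual code in this PR.

```python
# benchmarks/benchmark_workflow.py -- hypothetical module name
import shutil
import tempfile
from pathlib import Path


def run_workflow(input_dir: Path) -> None:
    """Placeholder for the workflow entry point defined in #15."""
    ...


class TimeFullWorkflow:
    # benchmark attributes shared by every time_ method in this class
    timeout = 600  # seconds before asv aborts the benchmark

    def setup(self):
        # runs before each time_ method of this class
        self.tmp_dir = Path(tempfile.mkdtemp(prefix="cellfinder-benchmark"))
        # ...stage input data under self.tmp_dir here...

    def teardown(self):
        # runs after each time_ method; removes the temporary directory
        shutil.rmtree(self.tmp_dir, ignore_errors=True)

    def time_full_workflow(self):
        # asv times only the body of this method, not setup/teardown
        run_workflow(self.tmp_dir)
```

With a layout like this, `asv run` would collect `time_full_workflow`, call `setup` before it and `teardown` after it, and record the timings.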
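On abstracting the common setup bits: one pattern that avoids the `@classmethod` route is a plain base class whose `setup`/`teardown` hold the shared logic, with each benchmark class extending them. This is a sketch under that assumption, not the approach taken in this PR; since the base class defines no `time_` methods it should not be collected as a benchmark itself, while `setup_cache` (which runs once per environment rather than before every benchmark) stays reserved for genuinely expensive preparation.

```python
# sketch of a shared-setup base class; all names are illustrative
import shutil
import tempfile
from pathlib import Path


class _WorkflowBenchmarkBase:
    """Common setup/teardown shared by the benchmark classes below."""

    def setup(self):
        self.tmp_dir = Path(tempfile.mkdtemp(prefix="cellfinder-benchmark"))

    def teardown(self):
        shutil.rmtree(self.tmp_dir, ignore_errors=True)


class TimeDetectionStep(_WorkflowBenchmarkBase):
    def setup(self):
        super().setup()  # reuse the common staging
        # step-specific preparation would go here, e.g. loading input data
        self.data = list(range(1_000))  # stand-in for real input

    def time_detection_step(self):
        sum(self.data)  # stand-in for the real workflow step
```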
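On tracking regressions against cellfinder commits: `asv` allows the benchmark suite to live apart from the project it times, because `asv.conf.json` points at the project's repository through its `repo` field. A hedged sketch of the relevant fields follows; the URL and the other values are assumptions for illustration, not this PR's configuration.

```json
// asv.conf.json -- sketch only; asv accepts // comments in this file
{
    "version": 1,
    "project": "cellfinder",
    "repo": "https://github.com/brainglobe/cellfinder",
    "branches": ["main"],
    "benchmark_dir": "benchmarks",
    "environment_type": "virtualenv"
}
```

With `repo` pointing at the cellfinder repository, `asv run` would check out and install cellfinder at each commit it benchmarks, so regressions would be attributed to cellfinder commits rather than to commits in the workflows repo.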