Concisely describe the proposed feature
I'd like to make use of the pytest plugin pytest-benchmark instead of our own ad-hoc benchmarking framework, which I'd like to move away from.
Describe the solution you'd like (if any)
First, check out the `benchmarks/` directory; these are the benchmarks we already have.
Rename the functions so that the `benchmark_` prefix becomes `test_`, and make use of that plugin as described on its project page.
To run the benchmarks, simply use `pytest benchmarks/`. You may change the behavior of `ti benchmark` so that it follows the same logic as `ti test` does.
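A minimal sketch of what a converted benchmark might look like, assuming pytest-benchmark is installed; `fill_workload` and `test_fill` are made-up names standing in for whatever the existing `benchmark_` functions measure, and only the `benchmark` fixture usage reflects the plugin's documented API:

```python
# Hypothetical conversion of one benchmark to pytest-benchmark style.
# The workload below is a placeholder; a real benchmark would call the
# project's own kernels/functions instead.


def fill_workload(n):
    # Stand-in computation so there is something measurable.
    return sum(i * i for i in range(n))


def test_fill(benchmark):
    # Formerly something like benchmark_fill(); pytest-benchmark's `benchmark`
    # fixture calls the workload repeatedly, times it, and reports
    # min/max/mean/stddev in the pytest summary.
    result = benchmark(fill_workload, 100_000)
    assert result > 0
```

With the `test_` prefix, `pytest benchmarks/` picks these up through pytest's normal collection, so no custom runner is needed.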
Thanks for proposing this! pytest-benchmark looks cool, but we certainly need to investigate more before adopting this solution.
For example, how well does it support comparison against past results?
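For reference, a minimal sketch of the comparison workflow described in pytest-benchmark's docs, using its `--benchmark-autosave` and `--benchmark-compare` options; whether that comparison output is good enough for our needs is exactly what would need investigating:

```python
# Sketch only: save a baseline run, then compare a later run against it.
# From a shell, the equivalent is:
#   pytest benchmarks/ --benchmark-autosave   # store results under .benchmarks/
#   pytest benchmarks/ --benchmark-compare    # diff against the latest saved run
import pytest

# Programmatic equivalent of the first command above; swap the flag for
# --benchmark-compare on subsequent runs to see the comparison columns.
pytest.main(["benchmarks/", "--benchmark-autosave"])
```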