I asked this question in the pytest core repo: https://github.com/pytest-dev/pytest/issues/9433
They suggested coming here in case this would be a useful feature within pytest-benchmark.
What do you think?
As always, happy to implement.
What's the problem this feature will solve?
I want to do heavy benchmarking of my pytest tests -- running them multiple times and getting average/mean statistics for the runs.
Something like pytest tests/integration/ --durations=0 --count=100.
Currently the report outputs test durations for each test separately, e.g.:
It clutters the output and makes it harder to catch outliers.
Describe the solution you'd like
I'd like to have a flag (possibly owned by pytest-repeat) that would merge the stats for the repeated runs' durations and output them in a structured way, e.g.
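As a rough sketch of what "merged stats" could mean (the node ids, numbers, and output format below are hypothetical, not an existing pytest-repeat feature): collect the per-run call durations, group them by the base test id with the repeat suffix stripped, and report summary statistics per test:

```python
import statistics
from collections import defaultdict

# Hypothetical per-run durations, shaped like pytest-repeat's
# parametrized node ids (repeat index in brackets), duration in seconds.
runs = [
    ("tests/integration/test_api.py::test_login[1-3]", 0.31),
    ("tests/integration/test_api.py::test_login[2-3]", 0.29),
    ("tests/integration/test_api.py::test_login[3-3]", 0.95),  # outlier
]

def merge_durations(runs):
    """Group durations by base test id (repeat suffix stripped)
    and compute summary statistics for each test."""
    grouped = defaultdict(list)
    for nodeid, duration in runs:
        base = nodeid.split("[", 1)[0]
        grouped[base].append(duration)
    return {
        base: {
            "runs": len(ds),
            "mean": statistics.mean(ds),
            "min": min(ds),
            "max": max(ds),
            "stdev": statistics.stdev(ds) if len(ds) > 1 else 0.0,
        }
        for base, ds in grouped.items()
    }

for test, stats in merge_durations(runs).items():
    print(f"{test}: n={stats['runs']} mean={stats['mean']:.2f}s "
          f"min={stats['min']:.2f}s max={stats['max']:.2f}s")
```

A single max/mean line per test like this would make an outlier (the 0.95s run above) immediately visible instead of being buried in hundreds of per-run duration lines.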
Ultimately, this would help with more precise analysis of test durations.
Alternative Solutions
I looked at pytest-benchmark, but apparently it solves another problem: benchmarking a particular function/fixture. Or maybe my googling skills have deteriorated...
Any comments are appreciated!
If we decide this functionality is indeed useful, I'm happy to implement.