nmoskopp opened this issue 1 year ago
This would be great! Is there currently a way to omit failed tests from the timing statistics? If a benchmark is nondeterministic and a success rate is recorded, it might be desirable to account only for successful runs in the statistics.
I have a use case for tracking the performance and success rate of non-deterministic functions.
The following function serves to outline the scenario:
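(The exact function does not matter; a minimal sketch, where the name `unreliable_work` and the roughly 20% failure rate are chosen purely for illustration:)

```python
import random

def unreliable_work() -> int:
    """Non-deterministic work: usually succeeds, sometimes raises."""
    if random.random() < 0.2:  # ~20% of runs fail
        raise RuntimeError("transient failure")
    return sum(range(1000))  # some measurable amount of work
```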
I have played around and arrived at a result that records the success rate and reports it as a new column, `succ`.
To get the new column `succ` actually displayed, I had to also:

- add `succ` to `pytest_benchmark.utils.ALLOWED_COLUMNS`
- change `pytest_benchmark.table.display` so it shows `succ`

(How exactly to achieve those two things is left as an exercise for the reader.)
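(For concreteness, a rough sketch of the first step only, assuming `ALLOWED_COLUMNS` is the plain module-level list that `--benchmark-columns` values are checked against; mutating it in place keeps any from-imports of the name in sync. The corresponding change to `pytest_benchmark.table.display` is more invasive and not sketched here.)

```python
# conftest.py -- a rough sketch, not the actual patch from this issue.
# Assumes ALLOWED_COLUMNS is a plain list; depending on plugin load
# order this may need to run before command-line options are parsed.
import pytest_benchmark.utils

if "succ" not in pytest_benchmark.utils.ALLOWED_COLUMNS:
    pytest_benchmark.utils.ALLOWED_COLUMNS.append("succ")
```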
While this does work, I am unsure if my solution could be upstreamed easily. How should I do it if I want my solution to be merged into `pytest-benchmark`?

Alternate and related approaches:
- Add an option to `benchmark.pedantic` that makes it continue on exceptions, but gives back the list of exceptions caught, one entry per round (like `[None, None, RuntimeError, None, RuntimeError]`).
- Change `benchmark.pedantic` so that it returns a list of all results, then set up the benchmarked function so that it catches relevant exceptions and returns whatever I want (see the sketch after this list).
- Show `extra_info` keys in the terminal table.
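To make the second idea concrete, here is a sketch of the wrapper half, using the documented `benchmark.extra_info` dict to record the success rate. Everything except `benchmark.pedantic` and `benchmark.extra_info` (both existing pytest-benchmark API) is illustrative:

```python
import random

def unreliable_work() -> int:
    # Same illustrative non-deterministic function as above.
    if random.random() < 0.2:
        raise RuntimeError("transient failure")
    return sum(range(1000))

def test_unreliable_work(benchmark):
    outcomes = []

    def guarded():
        # Catch the relevant exception so the benchmark keeps running,
        # and record whether this particular run succeeded.
        try:
            result = unreliable_work()
        except RuntimeError:
            outcomes.append(False)
            return None
        outcomes.append(True)
        return result

    benchmark.pedantic(guarded, rounds=50, iterations=1)
    # extra_info currently lands in the JSON output, not the terminal table.
    benchmark.extra_info["succ"] = sum(outcomes) / len(outcomes)
```

Note that this wrapper also feeds the timings of failed runs (i.e. the fast failure path) into the statistics, which is exactly the concern raised in the comment at the top of the thread.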