Open · hop opened this issue 7 years ago
You could stick a contraption like this in your `conftest.py`:
```python
from functools import wraps

from pytest import Function


def pytest_collection_modifyitems(items):
    for item in items:
        if 'benchmark' not in getattr(item, 'fixturenames', ()) and isinstance(item, Function):
            item.fixturenames.append('benchmark')
            item._fixtureinfo.argnames += 'benchmark',

            def wrap(obj):
                @wraps(obj)
                def test_wrapper(benchmark, **kwargs):
                    benchmark.pedantic(obj, kwargs=kwargs, iterations=1, rounds=1)
                return test_wrapper

            item.obj = wrap(item.obj)
```
Note that there's no summing of totals, but you could compute one by reading `benchmark.stats['min']` (or `max`, `mean`, etc.) inside the `test_wrapper` function.
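One hypothetical way to do that summing: accumulate each test's minimum into a module-level dict from inside `test_wrapper`, then report the total in a `pytest_terminal_summary` hook. This is a sketch, not part of the original snippet — `_timings` and `total_runtime` are names I'm introducing, and it assumes `benchmark.stats` is subscriptable as described above:

```python
from functools import wraps

# Hypothetical module-level accumulator for per-test timings.
_timings = {}


def wrap(obj):
    @wraps(obj)
    def test_wrapper(benchmark, **kwargs):
        benchmark.pedantic(obj, kwargs=kwargs, iterations=1, rounds=1)
        # Record this test's best round for the suite total.
        _timings[obj.__name__] = benchmark.stats['min']
    return test_wrapper


def total_runtime(timings):
    """Sum the per-test minima into a suite-wide total."""
    return sum(timings.values())


def pytest_terminal_summary(terminalreporter):
    terminalreporter.write_line(
        "benchmark total: %.6fs" % total_runtime(_timings))
```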
Another problem is that the wrong `__module__` is set on `test_wrapper`, so your output will look like:
```
plugins: benchmark-3.1.0a2
collected 2 items

tests.py::test_foo <- conftest.py PASSED
tests.py::test_bar <- conftest.py PASSED
```
You can fix it by patching, but that's left as an exercise for the reader 😬
Almost forgot, this works with function tests, dunno (or care 😁) about TestCase-style tests.
Worked great, thank you! unittest2pytest took care of the TestCase detritus.
This should be included by default, or at least be documented!
I've noticed that it does not work for parametrized fixtures. I've not found a solution, but the difference there is that the item has a `callspec`. The problem is:

```
TypeError: test_parametrized() got an unexpected keyword argument 'benchmark'
```
```python
from functools import wraps

from pytest import Function


def pytest_collection_modifyitems(items):
    for item in items:
        if ('benchmark' not in getattr(item, 'fixturenames', ()) and
                isinstance(item, Function)):
            # Parametrized items carry a callspec; inject the fixture
            # into the underlying function item instead.
            if hasattr(item, 'callspec'):
                item = item._pyfuncitem
            item.fixturenames.append('benchmark')
            item._fixtureinfo.argnames += 'benchmark',

            def wrap(obj):
                @wraps(obj)
                def test_wrapper(benchmark, **kwargs):
                    benchmark.pedantic(obj, kwargs=kwargs, iterations=1,
                                       rounds=1)
                return test_wrapper

            item.obj = wrap(item.obj)
```
Oooof ... so should there be a cookbook section or something in the docs? What else should it have?
pytest already gives execution times for test suites and test cases. Run pytest with `--junitxml=output.xml` and inspect the output: an aggregate time is provided on `testsuite` nodes and individual times are given on `testcase` nodes. For example:
```xml
<?xml version="1.0" encoding="utf-8"?>
<testsuite errors="0" failures="0" name="pytest" skips="1" tests="7" time="8.512">
  <testcase classname="tests.test_import_times"
            file="tests/test_import_times.py"
            line="83"
            name="test_fn[site]"
            time="2.0289511680603027">
  </testcase>
  <testcase classname="tests.test_import_times"
            file="tests/test_import_times.py"
            line="83"
            name="test_fn[asyncio]"
            time="1.1191785335540771">
  </testcase>
  <!-- ... -->
</testsuite>
```
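If the JUnit XML route is enough, the per-test and aggregate times can be pulled out with a few lines of stdlib `xml.etree` — a sketch against a trimmed copy of output like the above (`suite_and_case_times` is a name I'm introducing, and it assumes the `testsuite` element is the document root, as in this sample):

```python
import xml.etree.ElementTree as ET

# A trimmed-down copy of JUnit XML output like the sample above.
SAMPLE = """\
<testsuite errors="0" failures="0" name="pytest" skips="1" tests="7" time="8.512">
  <testcase name="test_fn[site]" time="2.0289511680603027"/>
  <testcase name="test_fn[asyncio]" time="1.1191785335540771"/>
</testsuite>"""


def suite_and_case_times(xml_text):
    """Return (suite total, {test name: duration}) from JUnit XML."""
    root = ET.fromstring(xml_text)
    cases = {tc.get('name'): float(tc.get('time'))
             for tc in root.iter('testcase')}
    return float(root.get('time')), cases


suite_time, cases = suite_and_case_times(SAMPLE)
```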
Is this different than what was requested in the original issue?
@chrahunt that would be the same as `--durations`, I believe, so no, not what I wanted at all.
I would like some way to quickly (i.e. without having to modify the test suite) get timing data for test cases, as well as a sum total for the whole test suite.
Something like `--durations`, but with an optional (or automatically determined, like with `timeit`) number of repetitions. Would this be a fit for this plugin?
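For reference, the `timeit`-style auto-determination mentioned here is what the stdlib's `timeit.Timer.autorange` (Python 3.6+) does: it keeps increasing the loop count until a single run takes at least 0.2 seconds, then reports the count and elapsed time. A sketch of what "automatically determined repetitions" could look like:

```python
import timeit

# autorange() grows the repetition count until one timing run
# takes at least 0.2 seconds, then returns (count, elapsed).
timer = timeit.Timer('sum(range(100))')
number, elapsed = timer.autorange()
per_call = elapsed / number
```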