ionelmc / pytest-benchmark

py.test fixture for benchmarking code
BSD 2-Clause "Simplified" License

Store data in `extra_info` post-run #256

Open · danielsamuels opened this issue 4 months ago

danielsamuels commented 4 months ago

I'm trying to benchmark a set of API endpoints, and one of the key pieces of information I'd like to store is how many queries were made - and, in fact, which queries were made. Ideally, I'd implement this with a debug database cursor: wrap the benchmark call in a context manager and store the results in `extra_info`, like this:

with capture_queries() as queries:
    result = benchmark(get_fn, api_url)
benchmark.extra_info["query_count"] = queries.count()

But it seems that `extra_info` has to be defined before the run, not after. I could make the same API call twice (one plain, one with `benchmark`), but this will clearly make the test take longer, and could affect the benchmark results.
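The intent above can be sketched without pytest-benchmark at all. In this self-contained illustration, `capture_queries`, `get_fn`, and `fake_benchmark` are all hypothetical stand-ins (they are not pytest-benchmark or Django APIs): the context manager records queries issued during the benchmarked run, and the resulting count is collected afterwards into a dict shaped like `extra_info`. Note that because the benchmark harness calls the function multiple times, the captured count covers all rounds, not a single call - which is part of why capturing around the benchmark is awkward.

```python
from contextlib import contextmanager

# Global query log, standing in for what a debug database cursor would record.
QUERY_LOG = []

@contextmanager
def capture_queries():
    """Hypothetical stand-in: yield a list of queries issued inside the block."""
    start = len(QUERY_LOG)
    captured = []
    try:
        yield captured
    finally:
        captured.extend(QUERY_LOG[start:])

def get_fn(url):
    """Stand-in endpoint call that issues one query per invocation."""
    QUERY_LOG.append(f"SELECT * FROM t WHERE url = {url!r}")
    return {"status": 200}

def fake_benchmark(fn, *args):
    """Minimal stand-in for the benchmark fixture: run fn several times."""
    result = None
    for _ in range(3):
        result = fn(*args)
    return result

# Capture queries around the benchmarked run, then record the count afterwards,
# mirroring the post-run extra_info assignment the issue asks for.
with capture_queries() as queries:
    result = fake_benchmark(get_fn, "/api/items")

extra_info = {"query_count": len(queries)}
print(result, extra_info)  # count includes queries from every benchmark round
```

This is only a sketch of the data flow; the open question in the issue is precisely whether pytest-benchmark lets the final assignment into `extra_info` happen at this point, after the timed run.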

What's the expected process here? Should I make a plain call first, then call with benchmark?