I'm trying to benchmark a set of API endpoints, and one of the key pieces of information I'd like to store is how many queries were made - and, in fact, which queries were made. Ideally, I'd implement this with a debug database cursor, wrap the benchmark function in a context manager, and store the results in extra_info like this:
```python
with capture_queries() as queries:
    result = benchmark(get_fn, api_url)
    result.extra_info = {"query_count": queries.count()}
```
But it seems that extra_info has to be defined before the run, not after. I could make the same API call twice (one plain, one with benchmark), but this would clearly make the test take longer and could affect the benchmark results.
What's the expected process here? Should I make a plain call first, then call with benchmark?