Closed Korijn closed 2 years ago
It's obvious how to compare the performance of code over time (as changes are made) with pytest-benchmark, but how do I compare multiple configurations?

Hypothetical example: let's say I've implemented an algorithm with three different internal memory allocation strategies, and I want to compare them.

I guess I would have three different setup functions to configure the memory allocation strategies, and a single benchmark routine. Ideally, pytest-benchmark would run the three benchmarks and then present a performance report that includes a comparison of the three strategies. How do I accomplish this? If I can't, what other tool should I use?

Basically you need to group on something related to the current algorithm run in the test. The simplest way is to just parametrize your test function with the algorithm, e.g.: https://github.com/ionelmc/python-lazy-object-proxy/blob/03003b012feef472b4bb54b971a8f4782a41f93f/tests/test_lazy_object_proxy.py#L1770-L1775