Currently, `test_benchmarks.py` contains a single test that runs all criterion benchmarks. As the number of benchmarks grows, the duration of this test grows with it, and we need to keep adjusting its timeout. Instead, we want to parameterize the test by the list of criterion benchmarks in the repository. This means introducing a fixture that yields the benchmarks listed by `cargo bench --all -- --list` individually. The test itself would then only run the benchmark it is given. This should also make it easier to see which benchmarks are failing.
_Originally posted by @pb8o in https://github.com/firecracker-microvm/firecracker/pull/4830#discussion_r1784428511_
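A minimal sketch of how such a parameterized test could look, assuming criterion's libtest-style `--list` output (one `name: benchmark` line per benchmark); the helper and test names below are hypothetical and not the ones in `test_benchmarks.py`:

```python
# Sketch only: names are made up, and the real harness in test_benchmarks.py
# wraps cargo invocations in Firecracker's own test utilities.
import subprocess

import pytest


def list_criterion_benchmarks():
    """Collect benchmark names from `cargo bench --all -- --list`."""
    output = subprocess.check_output(
        ["cargo", "bench", "--all", "--", "--list"], text=True
    )
    # Criterion mimics libtest and prints one "<name>: benchmark" line
    # per benchmark (assumption about the output format).
    return [
        line.rsplit(":", 1)[0].strip()
        for line in output.splitlines()
        if line.strip().endswith(": benchmark")
    ]


@pytest.mark.parametrize("bench_name", list_criterion_benchmarks())
def test_criterion_benchmark(bench_name):
    """Run a single criterion benchmark so a failure points at that benchmark."""
    # --exact makes criterion treat the filter as an exact match rather
    # than a substring.
    subprocess.check_call(
        ["cargo", "bench", "--all", "--", "--exact", bench_name]
    )
```

With this, each benchmark shows up as its own test case in the pytest report, and a per-benchmark timeout can replace the single growing one.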