Currently, it is possible to specify which benchmarks to run using a filter (i.e. cargo bench -- <filter>). This selects which bench_function calls are actually measured.
However, I see no way to prevent setup code from running when the corresponding benchmark is not selected. Suppose I have a set of benchmarks with very long setup times, e.g. indexing a large dataset, which could take a minute or two. I'd like to avoid running that setup code for filtered-out benchmarks.
Example

$ cargo bench one
…
init one... done
one time: [0.0000 ps 0.0000 ps 0.0000 ps]
    change: [-51.749% -6.0500% +73.757%] (p = 0.86 > 0.05)
    No change in performance detected.
Found 12 outliers among 100 measurements (12.00%)
  4 (4.00%) high mild
  8 (8.00%) high severe
init two... done ← ⚠️ I'd like to avoid this ⚠️
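For illustration, here is a minimal sketch of a benchmark file that exhibits this pattern, assuming Criterion.rs (which matches the bench_function API and output above). The build_index and query helpers are hypothetical stand-ins for the expensive setup and the measured operation:

```rust
use std::hint::black_box;

use criterion::{criterion_group, criterion_main, Criterion};

// Hypothetical stand-in for an expensive setup step (e.g. indexing a large dataset).
fn build_index(name: &str) -> Vec<String> {
    (0..1_000_000).map(|i| format!("{name}-{i}")).collect()
}

// Hypothetical stand-in for the operation being measured.
fn query(index: &[String]) -> usize {
    index.iter().filter(|s| s.ends_with('7')).count()
}

fn bench_one(c: &mut Criterion) {
    // This setup runs whenever the bench target is executed, regardless of the filter.
    eprint!("init one... ");
    let index = build_index("one");
    eprintln!("done");

    c.bench_function("one", |b| b.iter(|| query(black_box(&index))));
}

fn bench_two(c: &mut Criterion) {
    // Also runs under `cargo bench one`, even though the "two" benchmark is filtered out
    // and never measured.
    eprint!("init two... ");
    let index = build_index("two");
    eprintln!("done");

    c.bench_function("two", |b| b.iter(|| query(black_box(&index))));
}

criterion_group!(benches, bench_one, bench_two);
criterion_main!(benches);
```

With this layout, the filter only decides which bench_function calls get measured; both bench_one and bench_two are still invoked, so both init messages (and both expensive setups) appear in the output above.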