It is useful to be able to benchmark many different variants of an algorithm when you're exploring its parameter space.
Some algorithm implementations (like QF) simply publish many different parameter variants. This clutters the algorithm lists and makes it hard to tell which variants are relevant. During my own algorithm development I wrote a set of scripts to automate benchmarking variants, and it proved very useful: I then picked a generally good set of default parameters for the main algorithms, and only publish different code for different values of Q (as that affects the hash algorithm to use).
So at some later stage, I propose it would be useful to build this capability into smart itself, or to provide a set of scripts that automate it, as I did before.
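To make the idea concrete, here is a minimal sketch of the kind of parameter-sweep driver such scripts could provide. It is not the smart tool's interface: the source file name, the compile-time defines (`Q`, `SHIFT`), and the benchmark binary are all hypothetical placeholders, assuming each variant can be built from the same source with different `-D` defines and timed as a standalone run.

```python
#!/usr/bin/env python3
"""Sketch of a parameter-sweep benchmark driver (all names are placeholders)."""
import itertools
import subprocess
import time

VARIANT_SOURCE = "qf.c"          # hypothetical source file for the algorithm
Q_VALUES = [2, 3, 4, 6, 8]       # example parameter space to sweep
SHIFT_VALUES = [1, 2, 3]


def build_variant(q: int, shift: int) -> str:
    """Compile one parameter variant into its own binary via compile-time defines."""
    binary = f"bench_q{q}_s{shift}"
    subprocess.run(
        ["gcc", "-O3", f"-DQ={q}", f"-DSHIFT={shift}", VARIANT_SOURCE, "-o", binary],
        check=True,
    )
    return binary


def run_variant(binary: str, runs: int = 5) -> float:
    """Return the best wall-clock time (seconds) over several runs of the binary."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run([f"./{binary}"], check=True, capture_output=True)
        best = min(best, time.perf_counter() - start)
    return best


def main() -> None:
    results = []
    for q, shift in itertools.product(Q_VALUES, SHIFT_VALUES):
        binary = build_variant(q, shift)
        results.append((q, shift, run_variant(binary)))
    # Report variants from fastest to slowest so good default parameters stand out.
    for q, shift, seconds in sorted(results, key=lambda r: r[2]):
        print(f"Q={q} SHIFT={shift}: {seconds * 1000:.2f} ms")


if __name__ == "__main__":
    main()
```

The same loop structure would work whether the capability lives inside smart or in standalone scripts; the only part that changes is how each variant is built and invoked.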