shawntabrizi opened this issue 2 years ago
One way of fixing this would be to add a new mechanism to the benchmarking machinery to introduce simple greater-than/less-than constraints between the components.
For example, the benchmarks use these constants:

```rust
const VOTERS: [u32; 2] = [1000, 2000];
const ACTIVE_VOTERS: [u32; 2] = [500, 800];
```
Obviously `ACTIVE_VOTERS` cannot be bigger than the total number of `VOTERS`.
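The reason these particular ranges are safe is that they don't overlap. A tiny standalone check (illustrative only, not part of the pallet) makes the invariant explicit:

```rust
// Illustrative only: the two ranges from the benchmarks above.
const VOTERS: [u32; 2] = [1000, 2000];
const ACTIVE_VOTERS: [u32; 2] = [500, 800];

fn main() {
    // Because the ranges don't overlap, any active-voter count drawn
    // from ACTIVE_VOTERS is <= any voter count drawn from VOTERS,
    // so `a <= v` holds for every combination the runner can produce.
    assert!(ACTIVE_VOTERS[1] <= VOTERS[0]);
}
```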
In one of the benchmarks we have this:

```rust
let a in (T::BenchmarkingConfig::ACTIVE_VOTERS[0]) .. T::BenchmarkingConfig::ACTIVE_VOTERS[1];
let v = T::BenchmarkingConfig::VOTERS[1];
```
So we could add something like this to constrain these (imaginary syntax):

```rust
let a in (T::BenchmarkingConfig::ACTIVE_VOTERS[0]) .. T::BenchmarkingConfig::ACTIVE_VOTERS[1] where a <= v;
```
And then we could extend the bounds of the benchmarks, and the benchmarking machinery would call the benchmark only in cases where `a <= v`. (It'd still loop through the full range, but at each combination of parameters it would check whether those fulfill the constraints and choose not to run them.)
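A minimal sketch of how that runner-side filtering could look, assuming a hypothetical predicate compiled from the `where` clause; the `run_filtered` helper and `run_bench` callback below are made up for illustration, not the actual benchmarking machinery:

```rust
use std::ops::RangeInclusive;

// Hypothetical helper: sweep the full cartesian range of two
// components, but only execute the benchmark for combinations that
// satisfy the constraint (e.g. `a <= v`).
fn run_filtered(
    a_range: RangeInclusive<u32>,
    v_range: RangeInclusive<u32>,
    step: usize,
    constraint: impl Fn(u32, u32) -> bool,
    mut run_bench: impl FnMut(u32, u32),
) {
    for a in a_range.step_by(step) {
        for v in v_range.clone().step_by(step) {
            if constraint(a, v) {
                run_bench(a, v);
            }
            // Combinations violating the constraint are skipped, not run.
        }
    }
}

fn main() {
    // With extended bounds, e.g. active voters from 0 up to the voter
    // maximum, every (a, v) pair with a > v is filtered out.
    run_filtered(0..=2000, 1000..=2000, 500, |a, v| a <= v, |a, v| {
        println!("benchmarking with a = {a}, v = {v}");
    });
}
```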
Not sure if we actually want to do this, but it is a possibility.
Somehow the FEPS (`election-provider-support`) weights were not updated in the last MR paritytech/substrate#12325. Did you use the script or do it manually?
@gpestana we can also now look into the cost of verifying an un-reduced solution.
There are still a few benches which were previously incorrectly generated with zero weights and/or now use the minimum times as the base weight and have one of their components start at a high number (mainly from `pallet_election_provider_multi_phase`), and I'd still like to do something about them (which is not trivial, because the components' ranges cannot overlap for those benchmarks to execute correctly, so you can't naively bring them down to zero), but looking at the numbers the minimums were bumped by only around 50%~70%, so it's not the end of the world.

Originally posted by @koute in https://github.com/paritytech/substrate/pull/12325#pullrequestreview-1171973945
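To illustrate the non-overlap problem koute describes, here is a rough sketch under the assumption that the runner pins the other components at their maxima while sweeping one; the lowered constants below are hypothetical, not the real benchmark config:

```rust
// Sketch of why the ranges can't naively be brought down to zero:
// once they overlap, the sweep produces combinations the benchmark
// setup cannot construct.
const VOTERS: [u32; 2] = [0, 2000];        // naively lowered bound
const ACTIVE_VOTERS: [u32; 2] = [0, 2000]; // naively lowered bound

fn main() {
    // While sweeping `v`, suppose `a` sits at its maximum:
    let a = ACTIVE_VOTERS[1];
    for v in (VOTERS[0]..=VOTERS[1]).step_by(500) {
        if a > v {
            // e.g. a = 2000, v = 0: more active voters than voters
            // exist, a state the setup code cannot construct.
            println!("impossible combination: a = {a}, v = {v}");
        }
    }
}
```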