Our configuration testing system is ad hoc. We keep a small list of option combinations we expect to work together, and an even smaller list of options marked as “bad”.
I wonder whether we could build a more systematic test system that can claim “all of these options play well with each other”. Testing every combination grows exponentially with the number of options, but there may be ways around this, such as only testing interactions between a few options at a time.
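One candidate for avoiding the exponential blow-up (my suggestion, not something the current system does) is pairwise testing: in practice most configuration bugs are triggered by the interaction of just two options, so a suite that covers every pair of option values catches a large share of them at a fraction of the exhaustive cost. A minimal greedy sketch, with hypothetical option names:

```python
from itertools import combinations, product

def pairwise_suite(options):
    """Greedily build an all-pairs covering suite.

    options: dict mapping option name -> list of possible values.
    Returns a list of full configurations (dicts) such that every
    pair of values from two different options co-occurs in at
    least one configuration.
    """
    names = sorted(options)
    # Every (option, value) pair that must appear together somewhere.
    uncovered = set()
    for a, b in combinations(names, 2):
        for va, vb in product(options[a], options[b]):
            uncovered.add(((a, va), (b, vb)))

    suite = []
    while uncovered:
        best_cfg, best_hits = None, -1
        # Greedy step: pick the configuration covering the most
        # still-uncovered pairs. Exhaustive over the full product,
        # which is fine for small option sets; a real tool would
        # use a smarter search at scale.
        for values in product(*(options[n] for n in names)):
            cfg = dict(zip(names, values))
            hits = sum(
                1 for a, b in combinations(names, 2)
                if ((a, cfg[a]), (b, cfg[b])) in uncovered
            )
            if hits > best_hits:
                best_cfg, best_hits = cfg, hits
        suite.append(best_cfg)
        for a, b in combinations(names, 2):
            uncovered.discard(((a, best_cfg[a]), (b, best_cfg[b])))
    return suite

# Hypothetical options purely for illustration.
opts = {"cache": [True, False], "backend": ["sqlite", "pg"], "tls": [True, False]}
suite = pairwise_suite(opts)
print(len(suite))  # fewer than the 8 exhaustive configurations
```

Each configuration in the suite would then be fed to the existing test harness; the “bad” list can stay as an explicit blacklist of pairs known not to work.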