Open zkurtz opened 5 years ago
This is indeed suboptimal (also because `mbo` never jumps to `method = "a"`), but it is also a very special case.
We have the setting `ctrl = setMBOControlInfill(ctrl, filter.proposed.points = TRUE)`, which you can activate, but apparently we have not implemented it for discrete parameters. It generates a random proposal whenever the proposed point equals an already evaluated point. I think we should consider extending this to discrete parameters as an easy remedy.
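The filtering remedy described above could look roughly like this for a single discrete parameter. This is a hedged sketch in plain R; `filter_proposed` and its arguments are hypothetical names for illustration, not mlrMBO's actual internals:

```r
# Sketch of the duplicate-proposal filter for one discrete parameter:
# if the surrogate proposes an already evaluated value, fall back to a
# random value that has not been evaluated yet.
filter_proposed = function(proposed, evaluated, all.values) {
  if (!(proposed %in% evaluated)) {
    return(proposed)
  }
  remaining = setdiff(all.values, evaluated)
  if (length(remaining) == 0) {
    return(proposed)  # all values evaluated; nothing better to propose
  }
  sample(remaining, 1)
}

# Toy usage: "b" was already evaluated, so the one remaining value is returned
filter_proposed("b", evaluated = c("a", "b"), all.values = c("a", "b", "c"))  # -> "c"
```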
Otherwise you could tweak the surrogate and the infill criterion settings. For a low number of discrete values, dummy encoding combined with kriging is often a good choice:
```r
# Kriging surrogate with standard-error prediction; factors are
# dummy-encoded, missing values imputed, constant features removed.
lrn = makeLearner("regr.km", covtype = "matern3_2", optim.method = "gen",
  control = list(trace = FALSE), predict.type = "se")
lrn = makeDummyFeaturesWrapper(lrn)
lrn = makeImputeWrapper(lrn, classes = list(numeric = imputeMax(10),
  factor = imputeConstant("<missing>")))
lrn = makeRemoveConstantFeaturesWrapper(lrn)
```
Note that this will have problems with your given function because of the constant outcome for `method = "b"`.
Working on it in #444
In a simple example of mixed-space optimization with mostly-default parameters and a deterministic objective, `mbo` repeatedly evaluates the same point. The result shows that one point got evaluated repeatedly, even though I set `noisy = FALSE` in the objective. Such repeated evaluation is costly in other settings -- is there any reason to allow it?