zkurtz opened 5 years ago
Hi @zkurtz, thanks for your input. I'd like to add:
Regarding your suggestions:
Do you have any reference that says this is a good idea? I stumbled upon the epsilon value here, but I have not found the reference yet.
Do you mean the `nugget` setting? You can already set that when you define the learner manually: `lrn = makeLearner("regr.km", nugget = 0.5)`
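As a quick sketch of that suggestion (assuming `mlr` and `DiceKriging` are installed; the `predict.type` and `covtype` settings here are just illustrative defaults, not part of the original comment):

```r
library(mlr)

# Kriging learner with a fixed nugget. A larger nugget inflates the estimated
# noise, smoothing the surrogate and tending to encourage exploration.
lrn = makeLearner(
  "regr.km",
  predict.type = "se",       # mlrMBO needs standard errors from the surrogate
  nugget = 0.5,              # the setting discussed above
  covtype = "matern5_2"      # the kernel can likewise be set directly via mlr
)
```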
You can also configure the kernel directly using mlr (see above)
Again, this should be all learner settings.
+1 for the adaptive CB feature.
(1) I don't have a reference.
(2) Yes, `nugget` looks like the thing to start with.
More generally, regarding (2)-(4), I'm not surprised to hear that these are all learner settings. Adding a vignette that highlights how to use these settings to influence the exploration-exploitation trade-off for the two default learners would be going above and beyond, but I imagine it would be very useful.
Here are the ways I see mlrMBO currently offering control over exploration vs. exploitation for single-objective tuning:

- The `cb.lambda` parameter offers fairly direct control for the lower confidence bound criterion, as in equation (2).
- `setMBOControlInfill(..., interleave.random.points = ?)` offers a way to inject any approach with some amount of "pure exploration".

What other controls exist? Here are some I'd like:

- `makeMBOInfillCritEI` to accept the `cb.lambda` parameter too, as a coefficient on the variance term (why not?)
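For reference, the two existing controls above can be combined in one control object. A minimal sketch, assuming the standard mlrMBO control API (the specific values are illustrative):

```r
library(mlrMBO)

ctrl = makeMBOControl()
ctrl = setMBOControlInfill(
  ctrl,
  # Lower confidence bound criterion; larger cb.lambda weights the
  # uncertainty term more heavily, i.e. more exploration.
  crit = makeMBOInfillCritCB(cb.lambda = 2),
  # Additionally propose one purely random point per iteration,
  # injecting "pure exploration" regardless of the infill criterion.
  interleave.random.points = 1
)
```

Combining the two gives a coarse two-knob scheme: `cb.lambda` tilts the model-guided proposals, while `interleave.random.points` adds model-free exploration on top.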