mlr-org / mlrMBO

Toolbox for Bayesian Optimization and Model-Based Optimization in R
https://mlrmbo.mlr-org.com

document/develop more ways to control exploration-exploitation tradeoff #450

Open zkurtz opened 5 years ago

zkurtz commented 5 years ago

mlrMBO currently offers some control over the exploration vs. exploitation trade-off for single-objective tuning, most directly through the choice and parameters of the infill criterion.

What other controls exist? Here are some I'd like:

  1. Extend the definition of makeMBOInfillCritEI to also accept the cb.lambda parameter, as a coefficient on the variance term (why not?); see the sketch after this list.
  2. Offer control over the learner's Gaussian process prior, so that a high prior variance can be set.
  3. Offer control over the bandwidth of the Gaussian process covariance kernel, to be more or less permissive of a wiggly loss surface.
  4. When the learner is a random forest, offer controls analogous to (2) and (3).
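
For reference, a minimal sketch of how the existing CB control looks and where the EI extension from point 1 would slot in (the parameter values are arbitrary):

```r
library(mlrMBO)

# Confidence bound: a larger cb.lambda puts more weight on the posterior
# standard error, i.e. more exploration.
ctrl = makeMBOControl()
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritCB(cb.lambda = 2))

# Expected improvement has no such coefficient today; point 1 asks for an
# analogous knob on makeMBOInfillCritEI.
ctrl.ei = setMBOControlInfill(makeMBOControl(), crit = makeMBOInfillCritEI())
```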
jakob-r commented 5 years ago

Hi @zkurtz, thanks for your input. I'd like to add: an adaptive CB criterion, where cb.lambda is adapted over the course of the optimization.

Regarding your suggestions:

  1. Do you have a reference showing that this is a good idea? I stumbled upon the epsilon value here, but I have not found the reference yet.

  2. Do you mean the nugget setting? You can already set that when you define the learner manually: `lrn = makeLearner("regr.km", nugget = 0.5)` (see the sketch after this list).

  3. You can also configure the kernel directly via mlr (see above).

  4. Again, these should all be learner settings.
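
A sketch of what such manually defined surrogates could look like (untested; the parameter values are only illustrative, not recommendations):

```r
library(mlr)

# (2)/(3) Kriging surrogate: the nugget inflates the posterior variance
# (more exploration), and covtype picks the covariance kernel, which
# controls how wiggly the surrogate is allowed to be.
lrn.km = makeLearner("regr.km", predict.type = "se",
  covtype = "matern3_2", nugget = 0.5)

# (4) Random forest surrogate: the uncertainty estimate, and hence the
# exploration behaviour, depends on the se.method setting.
lrn.rf = makeLearner("regr.randomForest", predict.type = "se",
  se.method = "jackknife")

# Either learner is then passed to mbo() via its learner argument, e.g.
# res = mbo(fun, learner = lrn.km, control = ctrl)
```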

zkurtz commented 5 years ago

+1 for the adaptive CB feature.

(1) I don't have a reference. (2) Yes, nugget looks like the thing to start with.

More generally, regarding (2)-(4), I'm not surprised to hear that these are learner settings. A vignette highlighting how to use these settings to influence the exploration-exploitation trade-off for the two default learners would be going above and beyond, but I imagine it would be very useful.