mlr-org / mlrMBO

Toolbox for Bayesian Optimization and Model-Based Optimization in R
https://mlrmbo.mlr-org.com

mbo estimate with kriging is constant at some iterations #258

Open verenamayer opened 7 years ago

verenamayer commented 7 years ago

Sometimes it happens that the kriging estimate is constant over the parameter space. As a consequence, the infill criterion and the SE estimate are also constant. That's probably not good. Here is a small example:

library(mlrMBO)
library(smoof)

alpine = makeAlpine01Function(2)

lrn = makeLearner("regr.km", predict.type = "se")

ctrl = makeMBOControl()
ctrl = setMBOControlTermination(ctrl, iters = 5)
ctrl = setMBOControlInfill(ctrl, crit = "ei", 
                           opt = "focussearch",
                           opt.focussearch.maxit = 2, 
                           opt.focussearch.points = 10)

set.seed(11)
initdes = generateDesign(par.set = getParamSet(alpine), n = 10)

run = exampleRun(fun = alpine, 
                 design = initdes, 
                 learner = lrn, 
                 control = ctrl, 
                 points.per.dim = 50L, 
                 show.info = TRUE)

plotExampleRun(run)
berndbischl commented 7 years ago

Some comments from my side.

jakob-r commented 7 years ago

First order of business should be to detect this in mlrMBO. This is simple. Either do this model-agnostically (one possible check is sketched below).

This is already implemented in branch:smart_scheduling
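
For illustration, a model-agnostic check could be as simple as predicting on a random sample of the parameter space and testing whether the predictions vary at all. A minimal sketch (the helper name and tolerance are made up here; this is not the smart_scheduling code):

library(mlr)
library(ParamHelpers)

# Hypothetical helper: flag a surrogate model whose predictions barely
# vary over a random sample of the parameter space.
isConstantModel = function(model, par.set, n = 1000L, tol = 1e-8) {
  des = generateRandomDesign(n, par.set)
  preds = getPredictionResponse(predict(model, newdata = des))
  diff(range(preds)) < tol
}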

berndbischl commented 7 years ago

This is already implemented in branch:smart_scheduling

That's good. Can we please try to extract these very useful things from the branch and merge them into master a bit sooner? This would also reduce the horrible problem of reviewing a very "rich" branch at the end.

ja-thomas commented 7 years ago

So it does not even seem to be a problem specific to DiceKriging; GPfit also creates constant predictions (with or without a nugget effect).
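
To reproduce this check, a comparison can be set up roughly like this (a sketch; X is assumed to be a design matrix scaled to [0, 1]^2 and y the responses on which km() produced a constant fit):

library(GPfit)

# Fit GPfit on the same data that gave a constant km() model; GPfit
# expects the design scaled to the unit hypercube.
fit.gp = GP_fit(X, y)
grid = as.matrix(expand.grid(x1 = seq(0, 1, length.out = 20),
                             x2 = seq(0, 1, length.out = 20)))
pred = predict(fit.gp, xnew = grid)
diff(range(pred$Y_hat))  # close to 0 means the prediction is constant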

jakob-r commented 7 years ago

Good to know! Probably they run into the same numerical problems?

ja-thomas commented 7 years ago

[Figure: fraction of runs in which the fitted model is constant, with and without nugget effect]

Here is a small simulation I ran. It seems really strange that when we add a nugget effect (10^-3), the model becomes constant in more situations...
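
For reference, the nugget variant of the fit differs only in one argument (a sketch, assuming the same design X and responses y as in the simulation):

library(DiceKriging)

# Same fit with and without a small nugget; per the simulation above,
# the nugget version surprisingly goes constant more often.
fit.plain  = km(design = X, response = y, control = list(trace = FALSE))
fit.nugget = km(design = X, response = y, nugget = 1e-3,
                control = list(trace = FALSE))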

ja-thomas commented 7 years ago

[Figure: objective values for the simulated settings]

And here are the objective values.

ja-thomas commented 7 years ago

OK, some more insights:

jakob-r commented 7 years ago

Can you put the script in a gist?

ja-thomas commented 7 years ago

https://gist.github.com/ja-thomas/6e12b4d58ddefddaa9626631e1e8cebd

jakobbossek commented 7 years ago

Thx @ja-thomas

danielhorn commented 7 years ago

I also had a look at this problem some months ago, and my conclusion was: DiceKriging estimates the parameters of the Kriging model via numerical optimization. This numerical optimization can fail, i.e. it only finds a local optimum and not the globally best parameters. Since the internal optimization of DiceKriging is stochastic, simply fitting the model again can return the "optimal", non-constant model.

However, I only looked at some small examples (something like initial designs of size 4) and I don't know whether this explanation extends to other cases.
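
If that explanation holds, a cheap workaround is to refit the model a few times and keep the fit with the best log-likelihood; since km()'s internal optimizer starts from random points, the refits can land in different optima. A minimal sketch (the helper is hypothetical, not mlrMBO code):

library(DiceKriging)

# Hypothetical helper: refit km() several times and keep the fit with
# the highest concentrated log-likelihood, to dodge local optima in
# the hyperparameter search.
fitKmBest = function(X, y, n.fits = 5L) {
  fits = lapply(seq_len(n.fits), function(i)
    km(design = X, response = y, covtype = "matern5_2",
       control = list(trace = FALSE)))
  lls = vapply(fits, function(f) f@logLik, numeric(1))
  fits[[which.max(lls)]]
}

This is essentially the "km with restarts" setting benchmarked in the next comment.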

ja-thomas commented 7 years ago

I tested three more settings: km with BFGS, km with three restarts of BFGS, and km with rgenoud.

[Figure: fraction of constant models per setting]

# A tibble: 3 × 2
    algorithm mean_runtime
       <fctr>        <dbl>
1          km     11.36650
2 km_restarts     13.78398
3  km_rgenoud     12.27593

The restarts do not seem to help, but rgenoud reduces the number of times the model is constant, while the objective values look similar for all methods. rgenoud is also slightly slower.

We could think about switching to rgenoud as the default optimizer, but since reducing the number of times the model becomes constant does not really seem to improve performance, I'm not sure we really need to do that.
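
For anyone who wants to try this in the meantime: switching the surrogate's internal optimizer to rgenoud should just be a matter of passing km's optim.method through the mlr learner (a sketch, assuming regr.km forwards the argument to DiceKriging::km()):

library(mlr)

# Sketch: use rgenoud ("gen") instead of the default BFGS for km's
# internal hyperparameter optimization.
lrn.gen = makeLearner("regr.km", predict.type = "se", optim.method = "gen")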