mlr-org / mlr3tuning

Hyperparameter optimization package of the mlr3 ecosystem
https://mlr3tuning.mlr-org.com/
GNU Lesser General Public License v3.0

TuningInstance design issue #169

Closed. berndbischl closed this issue 3 years ago.

berndbischl commented 5 years ago

The TuningInstance itself seems pretty good, IMHO.

It nicely encodes the "problem" the tuner has to solve and defines its objective function.

What I dislike a bit / wonder about is the resampling.

Isn't this something the tuner should decide for itself?

E.g., how many resampling iterations it performs for an individual configuration? See racing and Hyperband.

In mlr we always solved this by setting the resampling to holdout and saying the tuner can then decide to do multiple evals on a point if it wants to.

Writing this down, what we have does not seem so bad?
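(For illustration, a minimal sketch of that old workaround, written with today's mlr3 primitives rather than the old mlr API; the task, learner, hyperparameter value, and number of repetitions are arbitrary placeholders.)

```r
library(mlr3)

# Sketch of the "fix resampling to holdout, let the tuner re-evaluate"
# workaround described above. Everything concrete here is a placeholder.
task = tsk("sonar")
learner = lrn("classif.rpart", cp = 0.01)  # one candidate configuration

# The tuner decides itself how often to evaluate this point,
# each time on a fresh random holdout split.
scores = replicate(3, {
  rr = resample(task, learner, rsmp("holdout"))
  rr$aggregate(msr("classif.ce"))
})
mean(scores)
```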

@mllg ?

berndbischl commented 5 years ago

Maybe we can add the resampling as an argument to the eval function? A similar issue arises if a tuner wants to decide by itself to try out other random splits, or wants to run a learning curve.

be-marc commented 3 years ago

The resampling is now a parameter of ObjectiveTuning$eval_many(), so the Tuner can change the resampling itself. This is implemented through the constants field in bbotk. See TunerIrace and ObjectiveTuning for a working implementation.
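(For readers landing here: below is a minimal, hypothetical sketch of the bbotk constants mechanism, not the actual ObjectiveTuning code. A constant is an extra named argument of the private .eval_many() method whose value a Tuner can overwrite between batches; ObjectiveTuning receives the resampling the same way. The names ObjectiveSketch and n_evals are made up for this example.)

```r
library(bbotk)
library(paradox)
library(R6)
library(data.table)

# Hypothetical toy objective illustrating the constants mechanism.
ObjectiveSketch = R6Class("ObjectiveSketch",
  inherit = Objective,
  public = list(
    initialize = function() {
      super$initialize(
        id = "sketch",
        domain = ps(x = p_dbl(lower = 0, upper = 1)),
        codomain = ps(y = p_dbl(tags = "minimize")),
        constants = ps(n_evals = p_int(lower = 1))
      )
      self$constants$values$n_evals = 1
    }
  ),
  private = list(
    # Set constant values are passed as additional named arguments,
    # next to the list of configurations to evaluate.
    .eval_many = function(xss, n_evals) {
      y = vapply(xss, function(xs) {
        # noisy objective, averaged over n_evals repetitions
        mean(replicate(n_evals, (xs$x - 0.5)^2 + rnorm(1, sd = 0.01)))
      }, numeric(1))
      data.table(y = y)
    }
  )
)

obj = ObjectiveSketch$new()
# A racing-style Tuner could raise the budget per configuration:
obj$constants$values$n_evals = 5
obj$eval_many(list(list(x = 0.3), list(x = 0.7)))
```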