mlr-org / mlr3tuning

Hyperparameter optimization package of the mlr3 ecosystem
https://mlr3tuning.mlr-org.com/
GNU Lesser General Public License v3.0

disable timeout for final autotuner model train #284

Closed mb706 closed 3 years ago

mb706 commented 3 years ago

Problem: The AutoTuner tunes a model using resampling, e.g. holdout resampling. In this resampling, the $train call always sees datasets that are smaller than the full dataset, e.g. by a third with a 2/3 holdout split. Now if the optimization algorithm finds a configuration that trains on the holdout training set within the given time limit, the final train on the whole dataset may exceed the time limit (because there is now more training data).

We really don't need a time limit for the last full-dataset $train call. If the user wants the AutoTuner itself to have bounded runtime, they can set the AutoTuner's timeout to some finite value.
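For illustration, a minimal sketch of the two timeout levels involved, assuming the standard mlr3/mlr3tuning API (the learner, task, parameter range and budget values here are illustrative choices, not taken from this issue; exact constructor arguments may differ between package versions):

```r
library(mlr3)
library(mlr3tuning)
library(paradox)

# Learner-level timeout: applies to every $train() call during resampling,
# and currently also to the final refit on the full dataset.
learner = lrn("classif.rpart")
learner$timeout = c(train = 60, predict = 60)

# Illustrative search space for the tuner.
search_space = ps(cp = p_dbl(lower = 1e-4, upper = 0.1))

# AutoTuner-level budget: to bound the overall runtime, use a terminator
# on the tuning loop rather than relying on the per-train learner timeout.
at = AutoTuner$new(
  learner = learner,
  resampling = rsmp("holdout", ratio = 2 / 3),
  measure = msr("classif.ce"),
  search_space = search_space,
  terminator = trm("run_time", secs = 300),
  tuner = tnr("random_search")
)

# During tuning, each $train() sees ~2/3 of the data; the final refit below
# trains the best configuration on the full task, which is where a per-train
# timeout tuned against the smaller holdout set can be exceeded.
at$train(tsk("sonar"))
```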

berndbischl commented 3 years ago

Please never merge this if such a change is uncommented in the code and undocumented in the API.

berndbischl commented 3 years ago

Martin, what you said makes sense to me. If you add comments / docs, I think we should merge this.

a-hanf commented 3 years ago

Friendly ping on this. I'd like to use this in mlr3automl, but it is no longer compatible with the current version of mlr3hyperband.

be-marc commented 3 years ago

Documentation added.