Moving to MBO rather than random search is straightforward and done in #8 already, but it also requires reconsidering the tuning budget.
The current budget is 50 * n_hyperparams, which scales from 50 to 400 (for XGBAFT), or more likely 350 because `nrounds` is now tuned internally via early stopping.
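As a quick sanity check on that rule, here is a minimal sketch of the budget arithmetic (the helper name `tuning_budget` is mine, and the hyperparameter counts are only those implied by the numbers above):

```python
def tuning_budget(n_hyperparams: int, factor: int = 50) -> int:
    """Current budget rule: 50 evaluations per tuned hyperparameter."""
    return factor * n_hyperparams

# Endpoints mentioned above:
print(tuning_budget(1))  # smallest search space -> 50
print(tuning_budget(8))  # XGBAFT with nrounds tuned -> 400
print(tuning_budget(7))  # XGBAFT with nrounds via early stopping -> 350
```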
For the inner resampling strategy:
3-fold CV -> 2 repeats of 3-fold CV?
For reasonably sized tasks and fast-ish learners this should only help, but for the large/slow cases this is going to cause us to run into timeouts.
For the outer resampling:
5-fold CV -> 2 repeats of 3-fold CV?
Scaling the outer resampling has the largest effect on runtime as tuning of course scales with that, and it affects the number of compute jobs on the cluster (one per outer iteration).
I'll need to do some reasonable runtime testing to get a grip here, but I'd like to avoid massively over- or undershooting what we could/should have done.
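To make the scaling concrete, here is a hedged back-of-the-envelope for the total number of model fits under nested resampling (this is my own approximation, not project code; it assumes one final fit per outer iteration on top of the tuning evaluations, and ignores early-stopping effects on per-fit cost):

```python
def total_fits(outer_iters: int, inner_iters: int, budget: int) -> int:
    """Approximate model fits in nested resampling:
    each outer iteration runs `budget` tuning evaluations, each costing
    `inner_iters` fits, plus one final fit on the outer training set."""
    return outer_iters * (budget * inner_iters + 1)

budget = 350  # XGBAFT with early-stopped nrounds, per the 50 * n_hyperparams rule

current = total_fits(outer_iters=5, inner_iters=3, budget=budget)   # 5-fold / 3-fold
proposed = total_fits(outer_iters=6, inner_iters=6, budget=budget)  # 2x3-fold / 2x3-fold

print(current, proposed)  # 5255 12606, i.e. roughly a 2.4x increase
```

This is why the outer resampling dominates: going from 5 to 6 outer iterations also multiplies the entire inner tuning cost, not just the final fits.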