slds-lmu / paper_2023_survival_benchmark

Benchmark for Burk et al. (2024)
https://projects.lukasburk.de/survival_benchmark/
GNU General Public License v3.0

Readjusting tuning and evaluation procedure #11

Open jemus42 opened 4 days ago

jemus42 commented 4 days ago

Moving to MBO rather than random search is straightforward and already done in #8, but it also calls for a reconsideration of the tuning budget. The current budget is 50 * n_hyperparams, which scales from 50 up to 400 evaluations (for XGBAFT), or more likely 350, since nrounds is now tuned internally via early stopping.
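
For reference, a minimal sketch of how that budget rule could be expressed with mlr3tuning/mlr3mbo; the `search_space` object is a placeholder here, not the benchmark's actual configuration:

```r
library(mlr3tuning)
library(mlr3mbo)

# Budget rule: 50 evaluations per tuned hyperparameter.
# `search_space` stands in for the ParamSet tuned for a given learner.
n_hyperparams <- length(search_space$ids())

tuner      <- tnr("mbo")
terminator <- trm("evals", n_evals = 50 * n_hyperparams)
```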

For the inner resampling strategy:

For reasonably sized tasks and fast-ish learners this should only help, but for the large/slow cases this is going to cause us to run into timeouts.
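
As a rough illustration (not the repo's actual setup), the inner strategy is what gets wrapped into each AutoTuner; a holdout keeps the cost per tuning evaluation low, while a k-fold CV multiplies the number of model fits per evaluation. Learner, measure, and search space below are assumptions:

```r
library(mlr3)
library(mlr3proba)   # for the survival measure
library(mlr3tuning)
library(mlr3mbo)

# Sketch only: `learner` and `search_space` are placeholders.
inner_holdout <- rsmp("holdout")        # 1 fit per tuning evaluation
inner_cv      <- rsmp("cv", folds = 3)  # 3 fits per tuning evaluation

at <- auto_tuner(
  tuner        = tnr("mbo"),
  learner      = learner,
  resampling   = inner_cv,
  measure      = msr("surv.cindex"),
  terminator   = trm("evals", n_evals = 50 * length(search_space$ids())),
  search_space = search_space
)
```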

For the outer resampling:

Scaling the outer resampling has the largest effect on runtime, since tuning of course scales with it, and it also determines the number of compute jobs on the cluster (one per outer iteration). I'll need to do some reasonable runtime testing to get a grip here, but I'd like to avoid massively over- or undershooting what we could/should have done.
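
A quick back-of-the-envelope for the job count, with made-up task/learner counts purely to show the scaling (one cluster job per outer iteration and task/learner combination):

```r
library(mlr3)

# Outer resampling candidates; folds/repeats here are illustrative, not decided.
outer_cv  <- rsmp("cv", folds = 5)
outer_rep <- rsmp("repeated_cv", folds = 5, repeats = 3)

# Task and learner counts are placeholders, not the benchmark's actual numbers.
n_tasks    <- 30
n_learners <- 18

n_jobs_cv  <- n_tasks * n_learners * outer_cv$iters   # 30 * 18 *  5 = 2700
n_jobs_rep <- n_tasks * n_learners * outer_rep$iters  # 30 * 18 * 15 = 8100
```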