automl / HPOBenchExperimentUtils

Experiment code to run large-scale experiments with HPOBench
Apache License 2.0

Unify fabolas #16

Closed. NeoChaos12 closed this pull request 3 years ago.

NeoChaos12 commented 3 years ago

This update contains a number of changes, as per my discussion with @KEggensperger on Friday. Most notably, while working on them I realized that the older implementation was susceptible to an inconsistent sampling strategy for the fidelity values: Fabolas samples continuous fidelity parameters in log space, whereas MUMBO placed no such restriction. The revamped code uses nearly identical custom boilerplate for Fabolas in both cases and exposes the choice of acquisition function as either mtbo (the default used by Fabolas) or mumbo. This also means that, except for acquisition-specific hyperparameters, all of Fabolas's hyperparameters can now be tweaked consistently across experiments with either acquisition function.
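
For illustration, here is a minimal sketch of the sampling mismatch described above. The bounds and function names are hypothetical and not taken from the PR; the point is only the difference between uniform and log-uniform draws of a continuous dataset-size fidelity:

```python
import numpy as np

rng = np.random.default_rng(0)
s_min, s_max = 64, 50_000  # hypothetical bounds on a dataset-size fidelity

def sample_fidelity_uniform(n):
    # Uniform in the raw space: on a log scale, most draws land near
    # s_max, i.e. large dataset sizes dominate the samples.
    return rng.uniform(s_min, s_max, size=n)

def sample_fidelity_log(n):
    # Uniform in log space, as the original Fabolas code does for
    # continuous fidelities: small and large dataset sizes are
    # represented equally often.
    return np.exp(rng.uniform(np.log(s_min), np.log(s_max), size=n))
```

Mixing the two in one benchmark would bias which fidelities each method sees during optimization, which is exactly the inconsistency the unified code avoids.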

The only open question is: how do we match the initial sampling of fidelities and configurations? The original Fabolas code distributed the sampled initial configurations uniformly amongst the available dataset sizes, whereas the MUMBO paper adopted different approaches depending on the experiment: for the MTBO vs. MUMBO comparisons (B.1 in the paper), which used only synthetic objectives, every sampled configuration was evaluated at every fidelity value, whereas in B.2, vs. Fabolas, a specific subset of fidelities was chosen and every sampled configuration was evaluated only at those fidelity values. For now, the implementation uses the same strategy as the original Fabolas code; a sketch of the two strategies follows.
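
To make the contrast concrete, here is a minimal, hypothetical sketch of the two initial-design strategies described above, assuming fidelities are dataset fractions and using round-robin assignment as one way to distribute configurations uniformly; none of these names come from the actual code:

```python
import numpy as np

def initial_design_fabolas(configurations, dataset_fractions):
    # Original Fabolas strategy: spread the initial configurations
    # uniformly (here, round-robin) over the available dataset sizes,
    # so each configuration is evaluated at exactly one fidelity.
    return [(config, dataset_fractions[i % len(dataset_fractions)])
            for i, config in enumerate(configurations)]

def initial_design_mumbo_b2(configurations, fidelity_subset):
    # MUMBO (Appendix B.2) strategy: evaluate every initial
    # configuration at each fidelity in a chosen subset.
    return [(config, fidelity)
            for config in configurations
            for fidelity in fidelity_subset]

# Example: 8 random 3-d configurations over 4 dataset fractions.
rng = np.random.default_rng(42)
configs = [rng.uniform(size=3) for _ in range(8)]
fractions = [1 / 64, 1 / 16, 1 / 4, 1.0]
print(len(initial_design_fabolas(configs, fractions)))   # 8 evaluations
print(len(initial_design_mumbo_b2(configs, fractions)))  # 32 evaluations
```

The trade-off is visible in the evaluation counts: the Fabolas-style design keeps the initial budget proportional to the number of configurations, while the B.2-style cross product multiplies it by the number of chosen fidelities.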