PKU-DAIR / open-box

Generalized and Efficient Blackbox Optimization System
https://open-box.readthedocs.io

Docs & Examples for Multi-Fidelity / Early Stopping #51

Closed bbudescu closed 4 months ago

bbudescu commented 1 year ago

E.g., is the resource dimension (e.g., the number of epochs or the number of samples to train) treated as just another hyperparameter? How can one do cross-task transfer learning with multi-fidelity, e.g., how does one report xval accuracy at every epoch for previous trials?

bbudescu commented 1 year ago

Sorry for so many questions in such a short time. I'm trying to decide which optimization package to use for a project, and, for that, I'm checking the support for the features I need or would like.

jhj0411jhj commented 1 year ago

In principle, OpenBox is designed as a black-box optimization service. For multi-fidelity optimization, which treats the problem as a "grey box", we provide algorithms built on OpenBox's basic components, including Hyperband, BOHB, MFES-HB, ASHA, etc. Please refer to openbox/apps/multi_fidelity/ for more details. For a more advanced multi-fidelity algorithm aimed at large-scale parallel optimization, which combines ASHA, MFES, and other strategies, you can try HyperTune: https://github.com/PKU-DAIR/HyperTune .
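To make the family of methods above concrete, here is a minimal, self-contained sketch of one bracket of successive halving, the core mechanism behind Hyperband and ASHA. This is an illustration, not OpenBox's implementation; the `sample_config` and `evaluate` callables are hypothetical placeholders for the user's search space and training routine.

```python
import random

def successive_halving(sample_config, evaluate, n_configs=27, min_budget=1, eta=3):
    """One bracket of successive halving: evaluate many configs at a small
    budget, keep the best 1/eta, and multiply the budget by eta each round."""
    configs = [sample_config() for _ in range(n_configs)]
    budget = min_budget
    while len(configs) > 1:
        # evaluate(config, budget) returns a loss; budget could be e.g. epochs
        ranked = sorted(configs, key=lambda c: evaluate(c, budget))
        configs = ranked[:max(1, len(configs) // eta)]  # keep the top 1/eta
        budget *= eta
    return configs[0]

# Toy usage: minimize (x - 0.3)^2, where a larger budget means less noise.
random.seed(0)
best = successive_halving(
    sample_config=lambda: random.uniform(0, 1),
    evaluate=lambda x, b: (x - 0.3) ** 2 + random.gauss(0, 1.0 / b),
)
```

With `n_configs=27` and `eta=3`, the bracket runs 27 configs at budget 1, then 9 at budget 3, then 3 at budget 9, and returns the single survivor.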

The resource dimension is not simply treated as another hyperparameter in our design. In MFES-HB and HyperTune, an ensemble model is fitted to observations (evaluation results) at different resource levels.
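As an illustration of that idea (not OpenBox's actual MFES-HB code), the sketch below keeps one toy surrogate per resource level and blends their predictions, weighting higher fidelities more heavily. The inverse-distance surrogate is a stand-in for the real per-fidelity models, and the fidelity-proportional weighting is an assumed simplification of the ensemble weights used in practice.

```python
import numpy as np

class MultiFidelityEnsemble:
    """Illustrative sketch: one surrogate per resource level, with
    predictions combined using weights that favor higher fidelities."""

    def __init__(self, fidelities):
        self.data = {f: ([], []) for f in fidelities}  # fidelity -> (X, y)

    def observe(self, fidelity, x, y):
        self.data[fidelity][0].append(x)
        self.data[fidelity][1].append(y)

    def predict(self, x):
        preds, weights = [], []
        for fidelity, (X, y) in self.data.items():
            if not X:
                continue
            # Toy surrogate: inverse-distance weighted average of observed y.
            d = np.linalg.norm(np.asarray(X) - x, axis=1) + 1e-8
            preds.append(np.average(y, weights=1.0 / d))
            weights.append(fidelity)  # trust high-fidelity data more
        return np.average(preds, weights=weights)

# Usage: observations at fidelity 1 and 9 both influence the prediction,
# but the high-fidelity one dominates.
ens = MultiFidelityEnsemble([1, 3, 9])
ens.observe(1, np.array([0.0, 0.0]), 1.0)
ens.observe(9, np.array([1.0, 1.0]), 3.0)
pred = ens.predict(np.array([0.5, 0.5]))
```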

Cross-task transfer learning with multi-fidelity is not currently provided in the open-source part of OpenBox. To support this feature, you may need to make some modifications to the existing code.

jhj0411jhj commented 1 year ago

For early stopping, users may implement the strategy during optimization via the ask-and-tell interface. We will soon update the API of Optimizer to support user-defined early stopping functions.
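Here is a minimal sketch of early stopping via an ask-and-tell loop. The `RandomAdvisor` is a self-contained stand-in for OpenBox's advisor (assumed to expose a similar suggest/update pair, so the sketch stays runnable without the library), and the stopping rule is entirely user-defined.

```python
import random

class RandomAdvisor:
    """Stand-in advisor: suggests uniform-random configs and records results.
    A real advisor would use the recorded history to propose better configs."""

    def __init__(self, low, high):
        self.low, self.high = low, high
        self.history = []  # list of (config, loss) pairs

    def get_suggestion(self):
        return random.uniform(self.low, self.high)

    def update_observation(self, config, loss):
        self.history.append((config, loss))

def objective(x):
    return (x - 0.5) ** 2

random.seed(0)
advisor = RandomAdvisor(0.0, 1.0)
for _ in range(100):
    x = advisor.get_suggestion()          # ask
    loss = objective(x)
    advisor.update_observation(x, loss)   # tell
    if loss < 1e-4:                       # user-defined early stopping rule
        break

best_x, best_loss = min(advisor.history, key=lambda t: t[1])
```

Because the loop body is plain user code, any stopping criterion (a loss threshold, a patience counter, a wall-clock budget) can be checked between the tell and the next ask.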