Closed pavelbatyr closed 2 years ago
Hi!
Thanks for your interest in DEHB. DEHB was originally designed to interface with black-box functions that are evaluated at a specific fidelity value; for a NN, this would be the maximum number of epochs. One way to approach it would be to implement model checkpointing within the objective_function, so that when a model is queried at a higher fidelity it resumes training from a state saved to disk. However, DEHB currently doesn't support such a feature natively.
That said, there is currently an unmerged PR #6 that might be of interest to you! (Unfortunately, I cannot commit to a time when this PR will be merged, but user feedback on the PR might expedite the merge ;) )
Thanks a lot for the detailed answer!
Hello! I'm interested in using DEHB for HPO of neural networks, but I couldn't find any code related to model checkpointing. Does training for every budget start from scratch?