experimental-design / bofire

Experimental design and (multi-objective) Bayesian optimization.
https://experimental-design.github.io/bofire/
BSD 3-Clause "New" or "Revised" License

Implementing checkpointing and recovery #398

Open aangelos28 opened 1 month ago

aangelos28 commented 1 month ago

Is there a built-in way to do checkpointing on the Bayesian optimization using the GP surrogate and later recover its state, if say the application unexpectedly terminates?

One possible approach would be to checkpoint the inputs/outputs, feed all of this data back into the strategy, and retrain the model when the application restarts. However, this incurs the cost of retraining and would require statically seeding the RNG. Are there any other drawbacks to this?

Alternatively, what else needs to be checkpointed? The BoTorch model?

Thanks!

bertiqwerty commented 1 month ago

Hi there. Currently, you can serialize your strategy to JSON, including your data, and restart from there, with the drawbacks you mentioned. Anything more efficient than that is currently up to the user. Note that ENTMOOT, for instance, does not use BoTorch models.
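The restart-from-data approach discussed above can be sketched generically. This is a library-agnostic illustration, not BoFire's API: the file layout, the `save_checkpoint`/`load_checkpoint` helpers, and the checkpoint path are all placeholders. The idea is to persist every observed input/output pair plus the RNG seed after each iteration, so the surrogate can be retrained deterministically after an unexpected termination:

```python
import json
import os

import numpy as np

CHECKPOINT = "bo_checkpoint.json"  # hypothetical path


def save_checkpoint(X, y, seed, path=CHECKPOINT):
    """Persist all observed inputs/outputs and the RNG seed.

    Written after every BO iteration so a crash loses at most the
    in-flight evaluation, never the accumulated data.
    """
    state = {
        "X": np.asarray(X).tolist(),
        "y": np.asarray(y).tolist(),
        "seed": seed,
    }
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    # Atomic rename: an unexpected termination mid-write can never
    # leave a truncated checkpoint behind.
    os.replace(tmp, path)


def load_checkpoint(path=CHECKPOINT):
    """Restore the observed data and seed for retraining on restart."""
    with open(path) as f:
        state = json.load(f)
    return np.array(state["X"]), np.array(state["y"]), state["seed"]
```

On restart, the recovered `X`/`y` would be fed back into the strategy (e.g. via its tell/fit path) with the saved seed, accepting the retraining cost the thread describes. The atomic-rename step matters specifically for the "application unexpectedly terminates" scenario in the original question.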

aangelos28 commented 1 month ago

I see. Thanks for the clarification!