The library currently saves only the collected points and scores, so the model has to be retrained on resume. That is time-consuming, and hyperparameters can shift after resume until the Markov chain has settled back into the typical set.
Pickling the model using dill works and looks like the way to go.
A little care is needed with changed parameters: if the user changes the parameter ranges, a fast resume is not possible. If the acquisition function changes, this needs to be detected and updated on the loaded optimizer.
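A minimal sketch of the resume logic described above. The `Optimizer` class here is a hypothetical stand-in for the real optimizer, and the check-then-patch behavior is an assumption about how the library might handle it; the example uses the stdlib `pickle` for portability, while the note prefers `dill` (which additionally handles lambdas and closures that plain `pickle` cannot serialize):

```python
import pickle  # the note suggests dill; pickle used here so the sketch is stdlib-only


class Optimizer:
    """Hypothetical stand-in for the real optimizer object."""

    def __init__(self, ranges, acq="ei"):
        self.ranges = ranges      # parameter ranges the model was built for
        self.acq = acq            # name of the acquisition function
        self.Xi, self.yi = [], [] # collected points and scores


def save(opt, path):
    """Serialize the whole optimizer, including the fitted model state."""
    with open(path, "wb") as f:
        pickle.dump(opt, f)


def resume(path, ranges, acq):
    """Load a saved optimizer; return (optimizer, fast_resume_possible)."""
    with open(path, "rb") as f:
        opt = pickle.load(f)
    if opt.ranges != ranges:
        # Changed parameter ranges invalidate the fitted model:
        # fall back to the slow path and retrain from points/scores.
        fresh = Optimizer(ranges, acq)
        fresh.Xi, fresh.yi = opt.Xi, opt.yi
        return fresh, False
    if opt.acq != acq:
        # Acquisition function changed: detect it and patch the
        # loaded optimizer instead of discarding the model.
        opt.acq = acq
    return opt, True
```

With unchanged ranges the loaded model is reused directly (fast resume); only a changed acquisition function is swapped in place, while changed ranges force a retrain from the stored points and scores.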