Closed: samedii closed this issue 4 years ago
An alternative seems to be to use model.save_weights and model.load_weights. I have not confirmed that this works correctly, but at least some optimizer weights are saved.
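A minimal sketch of that approach with a plain tf.keras model (the architecture, checkpoint path, and random training data here are illustrative placeholders, not from the thread):

```python
import os
import tempfile

import numpy as np
import tensorflow as tf

def build_model():
    # The same architecture and compile options must be recreated
    # before the saved weights can be loaded back in.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(4, activation="relu", input_shape=(3,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

x, y = np.random.rand(8, 3), np.random.rand(8, 1)

model = build_model()
model.fit(x, y, epochs=1, verbose=0)

# TF-checkpoint format (no .h5 extension) tracks the compiled model's
# trackable object graph, which is why some optimizer state comes along.
ckpt = os.path.join(tempfile.mkdtemp(), "ckpt")
model.save_weights(ckpt)

restored = build_model()
restored.load_weights(ckpt)
```

Note that restoration of optimizer slot variables is deferred until the optimizer actually creates them, so the optimizer state only fully lines up once training resumes.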
@samedii, would you mind sharing your goal of saving and restoring AdversarialRegularization models? Given that AdversarialRegularization is mostly stateless (except for the base model and compile options), a workaround could be to save the base model, create a new AdversarialRegularization from the restored base model, and compile it with the same optimizer & loss.
Thanks for your answer! The only "need" is that it is normally a simple way to load the model and continue training; it was an artifact from my code before adding AdversarialRegularization. Is your alternative better than the option I outlined above?
Sidenote: As you say, AdversarialRegularization should be stateless, so I will try to reproduce my memory-usage issues with AdversarialRegularization in a simple project, since I find it strange that it more than doubles memory usage compared to when I'm not using it.
You can close this if you want, unless you want to use it to track documenting how to save/load an AdversarialRegularization model or something.
After another look, I think save_weights + load_weights might be a better fit for continuing training, while saving the base model is an easier path for exporting an inference model.
Closing this now.
Is there another intended way of saving the model and optimizer?