AntreasAntoniou / HowToTrainYourMAMLPytorch

The original code for the paper "How to train your MAML" along with a replication of the original "Model Agnostic Meta Learning" (MAML) paper in Pytorch.
https://arxiv.org/abs/1810.09502

While saving model, save optimizer state dict as well #31

Closed pandeydeep9 closed 4 years ago

pandeydeep9 commented 4 years ago

Hi there,

Thanks for the great code.

I was experimenting with checkpointing/resuming experiments from a specified epoch for the MAML experiments and noticed that the optimizer state dict is not saved.

Since the outer loop uses the Adam optimizer, I think we should save the optimizer state dict along with the model state dict, i.e. in few_shot_learning_system.py, in save_model, add state['optimizer'] = self.optimizer.state_dict(), and in load_model, add self.optimizer.load_state_dict(state['optimizer']).

Let me know if this does not make sense.
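For illustration, the suggested change could be sketched roughly like this (a minimal stand-in for the system in few_shot_learning_system.py; the class name, network, and checkpoint layout here are hypothetical, not the repo's actual code):

```python
import torch
import torch.nn as nn


class FewShotSystem(nn.Module):
    """Hypothetical minimal stand-in for the MAML learning system."""

    def __init__(self):
        super().__init__()
        self.net = nn.Linear(4, 2)  # placeholder network
        # Outer-loop Adam optimizer, whose state we also want to checkpoint
        self.optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)

    def save_model(self, model_save_path, state):
        # Save the optimizer state dict alongside the model state dict,
        # so that resuming restores Adam's moment estimates as well.
        state['network'] = self.state_dict()
        state['optimizer'] = self.optimizer.state_dict()
        torch.save(state, model_save_path)

    def load_model(self, model_save_path):
        state = torch.load(model_save_path)
        self.load_state_dict(state['network'])
        # Restore the optimizer state so training continues seamlessly
        self.optimizer.load_state_dict(state['optimizer'])
        return state
```

Without restoring the optimizer state, Adam restarts with zeroed first/second moment estimates after a resume, which can perturb training right after the checkpoint.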

AntreasAntoniou commented 4 years ago

Makes perfect sense and yes you are right. I noticed it some time ago and forgot to update the code. Feel free to push a PR. I'll review it.
