AntreasAntoniou / HowToTrainYourMAMLPytorch

The original code for the paper "How to train your MAML", along with a replication of the original "Model Agnostic Meta Learning" (MAML) paper, in PyTorch.
https://arxiv.org/abs/1810.09502

About zero_grad() #37

Open whsun21 opened 3 years ago

whsun21 commented 3 years ago

I found the call of self.optimizer.zero_grad() and self.zero_grad() after self.meta_update(loss=losses['loss']), what is the purpose of them? It seems like self.optimizer.zero_grad() was already called in self.meta_update(loss=losses['loss']). As for self.zero_grad(), I couldn't get the aim of it. Could you please explain them? Thanks a lot!