dragen1860 / MAML-Pytorch

Elegant PyTorch implementation of paper Model-Agnostic Meta-Learning (MAML)
MIT License

2nd Order or 1st Order Approximation? #32

Open Vampire-Vx opened 5 years ago

Vampire-Vx commented 5 years ago

Is this implementation a first-order approximation version of MAML? In meta.py, when you call autograd.grad, you do not specify create_graph=True, which means the gradient operation is not included in the computation graph.

Thus, although the design here tries to compute the second-order derivatives, the gradient graph is not retained, so this is only a first-order approximation.
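A minimal standalone check of this point (not taken from meta.py; a toy scalar example): whether the tensor returned by torch.autograd.grad can itself be differentiated depends entirely on create_graph.

```python
import torch

w = torch.tensor([1.0], requires_grad=True)
x = torch.tensor([2.0])

# Without create_graph, the returned gradient is a detached tensor:
# no second derivative can flow through it, so a meta-update built on
# it is only a first-order approximation.
loss = (w * x).pow(2).sum()          # loss = w^2 * x^2
g_first = torch.autograd.grad(loss, w)[0]
print(g_first.requires_grad)         # False

# With create_graph=True, the gradient stays in the graph, so
# second-order derivatives are available.
loss = (w * x).pow(2).sum()
g_second = torch.autograd.grad(loss, w, create_graph=True)[0]
print(g_second.requires_grad)        # True
hess = torch.autograd.grad(g_second.sum(), w)[0]
print(hess)                          # d^2(loss)/dw^2 = 2*x^2 = 8
```

Note that create_graph=True already implies retain_graph=True, so passing both is harmless but redundant.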

yinxiaojian commented 5 years ago

I think this implementation is only the first-order version of MAML. For the second-order version, you need to set retain_graph=True, create_graph=True when calling torch.autograd.grad.
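To make the fix concrete, here is a sketch of one second-order MAML step on a hypothetical one-layer model (the variable names and losses are illustrative, not the repo's code). The key line is the inner-loop grad call with create_graph=True, which lets the outer gradient flow back through the adaptation step.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical tiny model: a single linear weight kept as a plain tensor
# so the adapted copy stays differentiable w.r.t. the meta-parameters.
w = torch.randn(1, 3, requires_grad=True)
inner_lr = 0.01

x_support, y_support = torch.randn(4, 3), torch.randn(4, 1)
x_query, y_query = torch.randn(4, 3), torch.randn(4, 1)

# Inner step: create_graph=True keeps the gradient in the graph,
# so the outer gradient below contains second-order terms.
support_loss = F.mse_loss(x_support @ w.t(), y_support)
grad = torch.autograd.grad(support_loss, w, create_graph=True)[0]
w_adapted = w - inner_lr * grad      # still differentiable w.r.t. w

# Outer step: differentiate the query loss w.r.t. the ORIGINAL w,
# backpropagating through the inner update itself.
query_loss = F.mse_loss(x_query @ w_adapted.t(), y_query)
meta_grad = torch.autograd.grad(query_loss, w)[0]
```

Replacing create_graph=True with the default would make w_adapted a constant offset of w and silently reduce meta_grad to the first-order approximation.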

MrDavidG commented 4 years ago

@Vampire-Vx @yinxiaojian
I have also tried setting retain_graph=True, create_graph=True, but on mini-ImageNet the performance is weaker than before. Besides, the hidden dimensions I used for mini-ImageNet are [32, 32, 32, 32] rather than 64, which matches the setting in the original MAML paper.