The original code for the paper "How to train your MAML", along with a replication of the original "Model-Agnostic Meta-Learning" (MAML) paper, in PyTorch.
Thanks for your excellent work and code. I have a question about Line 314 in meta_neural_network_architectures.py: why is `self.weight` always used, even when `params` is not None?
Good catch. That block is legacy code from before I added the mechanism for explicitly selecting which parameters are optimized in the inner loop. It should really behave identically to how MetaBatchNorm works.
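For context, a minimal sketch of the intended pattern: when an adapted `params` dict is passed in (e.g. fast weights from an inner-loop step), the layer should read from it instead of always falling back to `self.weight`. This is not the repository's actual code; the class name `MetaLayerNormLayer` and the use of layer norm here are illustrative assumptions — the layer at line 314 may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaLayerNormLayer(nn.Module):
    """Illustrative meta-layer (assumed name, not the repo's exact class).

    Mirrors the MetaBatchNorm convention: if `params` is given, use the
    inner-loop-adapted weights; otherwise use the module's own parameters.
    """
    def __init__(self, num_features):
        super().__init__()
        self.normalized_shape = (num_features,)
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))

    def forward(self, x, params=None):
        if params is not None:
            # Use the adapted fast weights, rather than always
            # reading self.weight / self.bias (the legacy bug).
            weight, bias = params["weight"], params["bias"]
        else:
            weight, bias = self.weight, self.bias
        return F.layer_norm(x, self.normalized_shape, weight, bias)
```

With this pattern, passing `params=None` reproduces the module's own behavior, while passing a dict of fast weights lets the inner loop adapt the layer without mutating `self.weight` in place.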