dragen1860 / MAML-Pytorch

Elegant PyTorch implementation of paper Model-Agnostic Meta-Learning (MAML)

why use custom grad clip function? #57

Open QasimWani opened 3 years ago

QasimWani commented 3 years ago

https://github.com/dragen1860/MAML-Pytorch/blob/98a00d41724c133bd29619a2fb2cc46dd128a368/meta.py#L41

Hey, I'm confused: why didn't you use the standard gradient clipping function torch.nn.utils.clip_grad_norm_(parameters, max_norm) instead of implementing clip_grad_by_norm_?
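
For reference, a minimal sketch of the built-in clipping the question refers to; the model, loss, and max_norm value below are illustrative placeholders, not taken from this repo.

```python
import torch
import torch.nn as nn

# Illustrative model and data (not from MAML-Pytorch).
model = nn.Linear(10, 2)
criterion = nn.MSELoss()

x = torch.randn(4, 10)
y = torch.randn(4, 2)

loss = criterion(model(x), y)
loss.backward()  # populates p.grad for each parameter

# The built-in utility rescales each parameter's .grad in place so that
# the total gradient norm does not exceed max_norm.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)
```

Note that clip_grad_norm_ works on the .grad attributes of the parameters it is given, so it assumes the gradients have already been written there (e.g. by loss.backward()).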