orobix / Prototypical-Networks-for-Few-shot-Learning-PyTorch

Implementation of Prototypical Networks for Few Shot Learning (https://arxiv.org/abs/1703.05175) in Pytorch
MIT License

Loss Backpropagation #20

Closed vatsalsaglani closed 4 years ago

vatsalsaglani commented 5 years ago

During the learning stage, the loss isn't backpropagated to the model, and I obtain the same accuracy and loss even after training for a large number of epochs.

dnlcrl commented 4 years ago

Hi @vatsalsaglani, it looks like the gradients for the loss aren't being accumulated. Unfortunately we cannot reproduce this behavior; I would suggest checking that the model's parameter tensors require gradients, and making sure that backward() and optim.step() are actually called on each iteration.
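The checks above can be sketched as a minimal, self-contained PyTorch loop (a toy linear model standing in for the prototypical network, not the repo's actual training code): verify that parameters require gradients, call zero_grad(), backward(), and step() each iteration, and confirm the loss actually decreases.

```python
import torch

# Toy stand-in for the model under training (assumption: any nn.Module works the same way).
model = torch.nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()

x = torch.randn(32, 1)
y = 2.0 * x  # synthetic target: y = 2x

# Check 1: the parameter tensors must require gradients,
# otherwise backward() has nothing to populate.
assert all(p.requires_grad for p in model.parameters())

losses = []
for _ in range(50):
    optimizer.zero_grad()   # clear gradients left over from the previous step
    loss = loss_fn(model(x), y)
    loss.backward()         # populate p.grad for every parameter
    optimizer.step()        # apply the update (note: step(), not setp())
    losses.append(loss.item())

# Check 2: if backprop is wired up correctly, the loss should fall.
print(losses[0], "->", losses[-1])
```

If the loss stays constant here but not in your own loop, the usual culprits are a missing `loss.backward()`/`optimizer.step()`, an optimizer constructed over the wrong parameters, or a loss computed on detached tensors.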