torralba-lab / im2recipe-Pytorch

im2recipe Pytorch implementation
MIT License

Training strategy doesn't lead to a decreasing loss #24

Closed. avijit2verma closed this issue 5 years ago.

avijit2verma commented 5 years ago

The training strategy, specifically the random creation of negative samples at each epoch, doesn't seem to work for me. The loss doesn't decrease even after 100+ epochs, and performance on the validation set remains poor after that much training. It would be a huge help if you could post a guideline on how the loss should behave, or what the loss curves should look like, for this implementation.
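For context, below is a minimal sketch of the kind of per-epoch random negative sampling with a cosine embedding loss that this training strategy refers to. The function name, embedding sizes, and the `mismatch_prob` value are illustrative assumptions, not the repo's exact code:

```python
import random
import torch
import torch.nn as nn

# Assumed fraction of pairs that are re-drawn as random negatives each epoch
# (illustrative value, not necessarily what the repo uses).
mismatch_prob = 0.8

criterion = nn.CosineEmbeddingLoss(margin=0.1)

def make_pairs(image_embs, recipe_embs):
    """Pair each image with its own recipe (target +1) or a randomly
    chosen non-matching recipe (target -1); negatives are re-drawn
    every time this is called, i.e. every epoch."""
    recipes, targets = [], []
    n = image_embs.size(0)
    for i in range(n):
        if random.random() < mismatch_prob:
            j = random.choice([k for k in range(n) if k != i])  # random negative
            recipes.append(recipe_embs[j])
            targets.append(-1.0)
        else:
            recipes.append(recipe_embs[i])  # matching (positive) pair
            targets.append(1.0)
    return torch.stack(recipes), torch.tensor(targets)

# Usage inside a training step (dummy embeddings for illustration):
img = torch.randn(32, 1024)   # image-branch embeddings
rec = torch.randn(32, 1024)   # recipe-branch embeddings
rec_paired, y = make_pairs(img, rec)
loss = criterion(img, rec_paired, y)  # expected to decrease over epochs
```

With a setup like this, the cosine embedding loss should trend downward as matching pairs are pulled together and random negatives are pushed beyond the margin; a flat loss after 100+ epochs usually points to a data-pairing or optimizer problem rather than the sampling itself.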

Thanks, Avijit

jmarintur commented 5 years ago

Hi @avijit2verma, did you try to train it again with the latest version of the code? Are you using the default arguments?

avijit2verma commented 5 years ago

Hi @jmarintur, I trained it with the latest version of the code and am now seeing good training performance. I'm still not sure what was going wrong earlier, though.