abhimanyudubey / confusion

Code for the ECCV 2018 paper "Pairwise Confusion for Fine-Grained Visual Classification"
https://arxiv.org/abs/1705.08016

about train problems #11


Blue-Clean commented 5 years ago

Hi @abhimanyudubey, thanks for sharing your code. I ran into some problems when adding pairwise confusion to my network. I set the parameter --confusion '{"fc8_cub200" : 20}', where fc8_cub200 is the output of an InnerProduct layer. However, my training loss oscillates. I am also confused about the parameters --normalize, --agnostic, and --entropic. In short: (1) If I want to apply PC to my own net, should these be set to True or False? (2) In the add_simplex function of train.py, which part of the code computes the Euclidean Confusion — maybe simplex7? Looking forward to your reply.

abhimanyudubey commented 5 years ago

Hey, to apply PC, simply use --confusion with the loss layer. I'd suggest working with a smaller value for the regularization weight. If it's still ineffective, please send me your solver and model prototxts.

Blue-Clean commented 5 years ago

Thanks for your reply. I had made a mistake in my net.prototxt, and the loss no longer oscillates. However, PC still doesn't seem to work for my net: accuracy didn't improve, and it actually dropped by about 1.2%. I add pairwise confusion to my network with --confusion '{"fc8_cub200" : 20}', where fc8_cub200 is the output of an InnerProduct layer. From reading the source code you released, my understanding is that the top of fc8_cub200 is forwarded to a softmax, and then after some operations the pairwise confusion term is generated and added to the original loss — is that right? You suggested working with a smaller value of the regularization, but you previously advised that the range of lambda should be 0.01N to 0.15N. Can you give me any advice? Thank you so much.
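For reference, here is a minimal numpy sketch of the Euclidean Confusion term as described in the paper: the batch is split into two halves, and the regularizer is the mean squared Euclidean distance between the softmax outputs of paired samples, scaled by the weight lambda. The function names (`softmax`, `euclidean_confusion`) and the pairing convention (first half vs. second half of the batch) are illustrative assumptions — the actual Caffe implementation in train.py wires this up with prototxt layers and may pair samples differently.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax over a batch of logits (numerically stable)."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def euclidean_confusion(logits, lam):
    """Euclidean Confusion regularizer (illustrative sketch).

    Pairs sample i in the first half of the batch with sample i in the
    second half, and returns lam times the mean squared Euclidean
    distance between their softmax probability vectors.
    """
    p = softmax(logits)
    n = p.shape[0] // 2                  # number of pairs
    diff = p[:n] - p[n:2 * n]            # per-pair probability difference
    return lam * np.mean(np.sum(diff ** 2, axis=1))

# Total training objective would then be:
#   loss = cross_entropy(logits, labels) + euclidean_confusion(logits, lam)
```

With identical paired predictions the term is zero, and it grows as paired softmax outputs diverge — which is why a lambda that is too large can dominate the cross-entropy loss and hurt accuracy.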