Open Blue-Clean opened 5 years ago
Hey, to apply PC, simply use --confusion with the loss layer. I'd suggest working with a smaller value of the regularization. If it is still ineffective, please share your solver and model prototxts.
Thanks for your reply. I made a mistake in my net.prototxt, and the loss is no longer oscillating. However, PC still doesn't seem to work for my net: the accuracy didn't improve at all and actually decreased by about 1.2%. I added pairwise confusion to my network with --confusion '{"fc8_cub200" : 20}', where fc8_cub200 is the output of the InnerProduct layer. From reading the source code you released, my understanding is that the top of fc8_cub200 is forwarded through a softmax, and then after some operations the pairwise confusion term is generated and added to the original loss — is that right? You suggested working with a smaller value of the regularization, but you also advised that the range of lambda is 0.01N to 0.15N. Can you give me any advice? Thank you so much.
Hi @abhimanyudubey, thanks for sharing your work. I ran into some problems when adding pairwise confusion to my network. I set --confusion '{"fc8_cub200" : 20}', where fc8_cub200 is the output of the InnerProduct layer, but my training loss oscillates. I am also confused about the parameters --normalize, --agnostic, and --entropic. In summary: (1) if I want to apply PC to my own net, should these be set to True or False? (2) In the add_simplex function of train.py, which part of the code computes the Euclidean Confusion? Maybe simplex7? Looking forward to your reply.
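For anyone following this thread: below is a rough sketch (not the repository's actual code) of how the Euclidean Confusion term from the Pairwise Confusion paper is typically computed — the batch is split into two halves, and the penalty is the mean squared Euclidean distance between the softmax outputs of corresponding samples, scaled by lambda. The function names and the batch-splitting convention here are assumptions for illustration only; the real implementation lives in the prototxt layers generated by train.py.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis (axis=1).
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def euclidean_confusion(logits, lam):
    """Sketch of the Euclidean Confusion penalty (hypothetical helper).

    Splits the batch into two halves and penalizes the squared
    Euclidean distance between the softmax probability vectors of
    corresponding samples, averaged over pairs and scaled by lambda.
    """
    probs = softmax(logits)          # (batch, classes) probabilities
    n = probs.shape[0] // 2          # number of sample pairs
    diff = probs[:n] - probs[n:2 * n]
    return lam * np.sum(diff ** 2) / n

# The total training objective would then be the original
# cross-entropy loss plus this confusion term:
#   total_loss = cross_entropy + euclidean_confusion(logits, lam)
```

Note that if the two halves of the batch produce identical softmax outputs, the penalty is zero; the term only pushes *different* samples toward similar (more "confused") predictions, which is why a lambda that is too large can dominate the classification loss and cause oscillation.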