Open unluckydan opened 7 years ago
Thank you for sharing your results. I also observed similar results.
One of the purposes of this repository is to compare the performance of the various methods under the same conditions, rather than to replicate each paper's individual results as closely as possible. So there are some differences in configuration and results between the papers and this repository.
In particular, for all methods the experimental configuration roughly follows the N-pair loss paper: the pre-trained network is BVLC's GoogLeNet, and its input size is 224.
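For reference, a 224-input preprocessing step can be sketched as below. This is a minimal sketch, not this repository's actual code; the `center_crop` helper and the 256×256 source size are assumptions for illustration only.

```python
import numpy as np

def center_crop(image, size=224):
    """Center-crop an HWC image array to size x size (assumes H, W >= size)."""
    h, w = image.shape[:2]
    top = (h - size) // 2
    left = (w - size) // 2
    return image[top:top + size, left:left + size]

# Hypothetical 256x256 input, as images are commonly resized before cropping.
img = np.zeros((256, 256, 3), dtype=np.float32)
crop = center_crop(img, 224)  # matches BVLC GoogLeNet's 224 input size
assert crop.shape == (224, 224, 3)
```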
I'm wondering why the overall results reported in the Clustering loss paper are significantly lower than those in other papers. I suspect one cause is that they used GoogLeNet with batch normalization (whereas BVLC's model does not use BN).
I will try to figure it out; if there is any news, I will let you know.
I'm grateful. Thank you!
I tried some modifications, such as dropout and using 227-cropped images, but nothing improved. That is very weird.
Hello, can you share your results? Thanks
So did you get good performance on the three datasets? I got these [best] soft results with main_n_pair_mc.py.
Do you have any idea about this? Also, I noticed one thing: your crop size is 224, but these four papers use 227. So I have no idea what causes the difference on CUB200.
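A quick way to see how small the 224 vs. 227 difference actually is, assuming the common protocol of resizing to 256 before a center crop (the 256 resize is an assumption; the actual pipelines may differ):

```python
def trimmed_per_side(resized=256, crop=224):
    """Pixels removed from each side of the image by a center crop (floor division)."""
    return (resized - crop) // 2

# This repo's 224 crop vs. the papers' 227 crop, from a hypothetical 256 resize:
print(trimmed_per_side(256, 224))  # 16 px trimmed per side
print(trimmed_per_side(256, 227))  # 14 px trimmed per side
```

So a 227 crop keeps only a couple of extra pixels of context on each side, which by itself seems unlikely to explain a large gap on CUB200.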