mop / bier

Cleaned up reference implementation of BIER: Boosting Independent Embeddings Robustly.
GNU General Public License v3.0

CUB200 recall@1(0.575) #1

Open kebinC opened 6 years ago

kebinC commented 6 years ago

Hi, I ran your code on the CUB200 dataset, but I can't reproduce the recall@1 result of 0.575. Can you give me more detail about the parameter configuration, such as the training batch size and labels-per-batch? And how do you train, in one stage or two stages? Thanks.

chenbinghui1 commented 5 years ago

> Hi, I ran your code on the CUB200 dataset, but I can't reproduce the recall@1 result of 0.575. Can you give me more detail about the parameter configuration, such as the training batch size and labels-per-batch? And how do you train, in one stage or two stages? Thanks.

57.5% is achievable.

asanakoy commented 5 years ago

@kebinC did you solve the issue? What is the highest result you get?

kebinC commented 5 years ago

> @kebinC did you solve the issue? What is the highest result you get?

No, the highest recall@1 I got is about 0.51.

LeeRock commented 5 years ago

> > @kebinC did you solve the issue? What is the highest result you get?
>
> No, the highest recall@1 I got is about 0.51.

Hi, I only get 0.21 recall@1. How did you adjust the hyperparameters in the original code?

LeeRock commented 5 years ago

> > Hi, I ran your code on the CUB200 dataset, but I can't reproduce the recall@1 result of 0.575. Can you give me more detail about the parameter configuration, such as the training batch size and labels-per-batch? And how do you train, in one stage or two stages? Thanks.
>
> 57.5% is achievable.

By running the original repo, I only get 0.21 recall@1. Would you please give me some suggestions? Thank you.

chenbinghui1 commented 5 years ago

@kebinC First, in line 424 of train_bier.py, the axis should be 1, not 0. Have you reproduced the baseline results? If you achieve similar baseline results, then you can try `activation`; this will achieve R@1 ≈ 56.5%.
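To see why the axis argument matters here, consider a hypothetical sketch (not the actual code from train_bier.py) of L2-normalizing a batch of embeddings with NumPy. Reducing along `axis=1` normalizes each sample's embedding vector, which is what metric-learning recall evaluation expects; reducing along `axis=0` instead normalizes each dimension across the batch, silently producing non-unit embeddings:

```python
import numpy as np

# Toy batch of embeddings: 4 samples, each 8-dimensional.
np.random.seed(0)
embeddings = np.random.randn(4, 8)

def l2_normalize(x, axis):
    # Divide by the L2 norm computed along the given axis.
    norm = np.sqrt(np.sum(x ** 2, axis=axis, keepdims=True))
    return x / norm

correct = l2_normalize(embeddings, axis=1)  # per-sample unit norm
wrong = l2_normalize(embeddings, axis=0)    # per-dimension norm across the batch

# With axis=1, every row (sample) has unit length.
print(np.allclose(np.linalg.norm(correct, axis=1), 1.0))
# With axis=0, the rows are generally not unit-length.
print(np.allclose(np.linalg.norm(wrong, axis=1), 1.0))
```

With random embeddings the first check passes and the second fails, which would distort cosine-similarity-based retrieval and could plausibly explain recall dropping from ~0.57 to the ~0.21–0.51 range reported above.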