songdejia / DFL-CNN

This is a pytorch re-implementation of Learning a Discriminative Filter Bank Within a CNN for Fine-Grained Recognition
MIT License

If you get the best result, I need your help☺️ #8

Open Ien001 opened 5 years ago

Ien001 commented 5 years ago

Thanks for clicking into this issue!

If you get 85.14% as the final accuracy, as reported on the homepage, would you please share the loss value when the program reached the best result, the initial learning rate, and the data transforms?😊😊

I only get 78.6% at best. Every time I run the program I seem to get a random result, ranging from 74.0% to 78.6%.

The updates I have made: a ResNet-50-based model & non-random initialization.

wangkangnian commented 5 years ago

The accuracy seems to be limited to under 79% no matter what I do... And I find my non-random initialization implementation doesn't work well; its accuracy is lower than without it. Could you post your implementation of non-random initialization? Also, which dataset are you using? When I use CUB2010+2011, accuracy increases a lot.

guyibang commented 5 years ago

I look forward to the non-random initialization part too

Ien001 commented 5 years ago

@wangkangnian I used the CUB-200-2011 dataset. I will post my non-random initialization part, in case it helps.

Ien001 commented 5 years ago

@wangkangnian @guyibang I have posted my non-random initialization part. I hope it helps you guys a bit.😊
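For readers who cannot find the referenced code: this is not Ien001's implementation, only a minimal sketch of what "non-random initialization" usually means for the DFL-CNN filter bank — seeding the 1x1 discriminative filters from real high-energy patch features of a pretrained backbone instead of random noise. The function name and the top-k energy criterion are assumptions:

```python
import torch
import torch.nn.functional as F

def init_filters_from_patches(feature_maps, num_filters):
    """Sketch: initialize 1x1 conv filters from backbone patch features.

    feature_maps: (N, C, H, W) activations from a pretrained backbone.
    Returns a (num_filters, C, 1, 1) weight tensor whose rows are the
    L2-normalized feature vectors of the highest-energy spatial patches.
    """
    n, c, h, w = feature_maps.shape
    patches = feature_maps.permute(0, 2, 3, 1).reshape(-1, c)  # (N*H*W, C)
    energy = patches.norm(dim=1)                   # per-patch L2 energy
    top = energy.topk(num_filters).indices         # strongest patches
    weights = F.normalize(patches[top], dim=1)     # unit-norm filters
    return weights.view(num_filters, c, 1, 1)

# Example: seed a bank of 600 filters from (dummy) conv4 features.
fmap = torch.randn(4, 512, 28, 28)
w = init_filters_from_patches(fmap, num_filters=600)
conv = torch.nn.Conv2d(512, 600, kernel_size=1, bias=False)
with torch.no_grad():
    conv.weight.copy_(w)
```

In the paper the patches would come from actual training images (grouped per class), not random tensors; the dummy input here only demonstrates the shapes.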

guyibang commented 5 years ago

@Ien001 thanks for sharing, I'll try it

chaerlo commented 5 years ago

Would you share the non-random initialization part? Thank you! @wangkangnian

wjtan99 commented 5 years ago

@Ien001 thanks for sharing. I tried it but still could not get top-1 acc > 0.85. What tricks did you use in terms of training parameters and LR strategy?

limengyang9368 commented 4 years ago

@Ien001 could you please share your initialization part? It has confused me for a couple of days...

aparnaambarapu commented 4 years ago

> The accuracy seems to be limited to under 79% no matter what I do... And I find my non-random initialization implementation doesn't work well; its accuracy is lower than without it. Could you post your implementation of non-random initialization? Also, which dataset are you using? When I use CUB2010+2011, accuracy increases a lot.

Hi @wangkangnian Can you share the weights with which you got 79% accuracy? I could only reach 52% at best. If you cannot share the weights, could you let me know the changes you made to the code w.r.t. the repo? Thanks!!