songdejia / DFL-CNN

This is a pytorch re-implementation of Learning a Discriminative Filter Bank Within a CNN for Fine-Grained Recognition

What are the training hyperparameters to get top-1 acc > 0.87? #14

Open wjtan99 opened 5 years ago

wjtan99 commented 5 years ago

What are the training hyperparameters needed to get top-1 acc > 0.87, e.g. batch size and learning-rate schedule? And how many epochs do you run?
I can only get 0.83 on CUB_200_2011. I split the dataset into 70% training and 30% validation. Please do not close the issue so quickly.
Thanks.
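
For reference, here is a minimal PyTorch sketch of what such a setup typically looks like. The batch size, base learning rate, step schedule, and epoch count are illustrative assumptions only, not settings confirmed to reproduce top-1 > 0.87, and the VGG16 classifier swap below is just a stand-in for the actual DFL-CNN network in this repo.

```python
import torch
import torchvision

# Illustrative values only -- NOT confirmed to reproduce top-1 > 0.87 on CUB-200-2011.
BATCH_SIZE = 32
BASE_LR = 0.01
EPOCHS = 100

# Stand-in for the DFL-CNN model; the real repo builds its own network on a VGG16 backbone.
model = torchvision.models.vgg16(pretrained=True)
model.classifier[6] = torch.nn.Linear(4096, 200)  # 200 CUB-200-2011 classes

# SGD with momentum plus a step learning-rate schedule is the common fine-grained setup.
optimizer = torch.optim.SGD(model.parameters(), lr=BASE_LR,
                            momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(EPOCHS):
    # The training loop over a DataLoader for the CUB-200-2011 train split is omitted
    # to keep the sketch short; the key question in this issue is the values above.
    ...
    scheduler.step()
```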

doublemanyu commented 5 years ago

> What are the training hyperparameters needed to get top-1 acc > 0.87, e.g. batch size and learning-rate schedule? And how many epochs do you run? I can only get 0.83 on CUB_200_2011. I split the dataset into 70% training and 30% validation. Please do not close the issue so quickly. Thanks.

Hi, did you implement the non-random initialization?

wjtan99 commented 5 years ago

I used https://github.com/Ien001/non-random-initialization-for-DFL-CNN. What accuracy can you get? If you can get >0.85, can you share your training parameters?
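
Roughly, the non-random initialization described in the DFL-CNN paper seeds each 1x1 filter in the discriminative filter bank from a peak conv4_3 feature vector of training images of the corresponding class, instead of random weights. The sketch below only illustrates that idea; the helper name, the number of filters per class, and the exact feature slice are assumptions, not the code of the linked repo.

```python
import torch
import torch.nn.functional as F
import torchvision

NUM_CLASSES = 200   # CUB-200-2011
K = 10              # assumed number of filters per class

# conv4_3 features from a VGG16 backbone (torchvision vgg16.features up to relu4_3).
backbone = torchvision.models.vgg16(pretrained=True).features[:23]
backbone.eval()

@torch.no_grad()
def init_filter_bank(images_per_class):
    """images_per_class: dict {class_idx: tensor of shape (N, 3, 448, 448)}.
    Returns a (NUM_CLASSES * K, 512, 1, 1) weight tensor for the 1x1 filter bank."""
    weights = torch.zeros(NUM_CLASSES * K, 512, 1, 1)
    for c, imgs in images_per_class.items():
        feats = backbone(imgs)                              # (N, 512, H, W)
        n, ch, h, w = feats.shape
        flat = feats.permute(0, 2, 3, 1).reshape(-1, ch)    # all spatial positions
        # keep the K positions with the strongest responses (L2 norm of the feature vector)
        idx = flat.norm(dim=1).topk(K).indices
        init = F.normalize(flat[idx], dim=1)                # unit-norm 1x1 filters
        weights[c * K:(c + 1) * K] = init.view(K, ch, 1, 1)
    return weights

# usage sketch: filter_bank_conv.weight.data.copy_(init_filter_bank(samples_by_class))
```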

wjtan99 commented 5 years ago

@songdejia Running run.sh, I cannot even get acc ~0.85 with the train/test split you posted. What strategy did you use for the learning rate and the other parameters?

lemonrr commented 4 years ago

> What are the training hyperparameters needed to get top-1 acc > 0.87, e.g. batch size and learning-rate schedule? And how many epochs do you run? I can only get 0.83 on CUB_200_2011. I split the dataset into 70% training and 30% validation. Please do not close the issue so quickly. Thanks.

Hi, what are your hyperparameters? I can only get ~79%.

aparnaambarapu commented 4 years ago

@lemonrr Hi, can you share the weights with which you got 79% accuracy? It would be really helpful!

aparnaambarapu commented 4 years ago

> What are the training hyperparameters needed to get top-1 acc > 0.87, e.g. batch size and learning-rate schedule? And how many epochs do you run? I can only get 0.83 on CUB_200_2011. I split the dataset into 70% training and 30% validation. Please do not close the issue so quickly. Thanks.

@wjtan99 Can you share the weights and code with which you got 83% accuracy? Thanks!