nayeemrizve / invariance-equivariance

"Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning" by Mamshad Nayeem Rizve, Salman Khan, Fahad Shahbaz Khan, Mubarak Shah (CVPR 2021)
MIT License

Could you share the pretrained model on miniImageNet? #4

Closed. yinguoweiOvO closed this issue 3 years ago.

yinguoweiOvO commented 3 years ago

Hello! Thank you for sharing the code.

I've reproduced the results for CIFAR-FS and FC100, but I failed to reproduce the paper accuracies on miniImageNet. Could you share the pretrained models for miniImageNet and tieredImageNet, or give some tips? Thanks.

Best, YGW

nayeemrizve commented 3 years ago

Hello YGW,

Thank you for expressing interest in our work, and I am extremely sorry for the late response; I have been busy with work recently and forgot to respond to this earlier. For now, could you please try a batch size of 56 for the miniImageNet and tieredImageNet experiments? I will try to share the pretrained weights that I used for the submission, or retrain and share the weights, by the end of next week.


yinguoweiOvO commented 3 years ago


Thanks for your reply, and I'm sorry for the late response. I think the batch size has an impact on the results because of the contrastive learning part. I tried setting the batch size to 32 and reproduced the result on miniImageNet, but a larger batch size cannot be used because of hardware limitations. Do you have any suggestions on how to change the batch size without affecting the experimental results?

Best wishes to you, YGW
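For context, the batch-size sensitivity discussed here comes from the contrastive loss treating the other samples in the batch as negatives, so a smaller batch means fewer negatives per anchor. A minimal InfoNCE-style sketch of that dependence (not the repository's exact loss) looks like this:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Toy InfoNCE loss over a batch of paired embeddings.

    z1, z2: (B, D) embeddings of two augmented views of the same images.
    The softmax denominator runs over all B items in the batch, so each
    anchor sees B - 1 negatives: shrinking the batch weakens the loss.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature        # (B, B) cosine-similarity matrix
    labels = torch.arange(z1.size(0))         # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# batch size 32 -> 31 negatives per anchor; batch size 64 -> 63
loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))
```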

nayeemrizve commented 3 years ago

I haven't tweaked the batch-size parameter much in my experiments. Because of memory constraints, I mainly used a batch size of 52/56 for the miniImageNet and tieredImageNet experiments. I now see that the batch size has some impact on the results because of the contrastive loss. To make the performance less sensitive to the batch size, non-contrastive instance-discrimination losses (BYOL, SimSiam, etc.) could probably be used instead, but since I haven't experimented with these losses, I am not sure what the overall performance would be.
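For reference, a minimal sketch of the SimSiam-style alternative mentioned above: the loss only compares the two augmented views of the same image through a predictor and a stop-gradient, so no in-batch negatives are involved and the loss does not directly depend on the batch size. The head dimensions here are illustrative, not the repository's:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimSiamHead(nn.Module):
    """Illustrative SimSiam-style predictor head; dimensions are placeholders."""

    def __init__(self, dim=128, hidden=64):
        super().__init__()
        self.predictor = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(inplace=True), nn.Linear(hidden, dim)
        )

    def forward(self, z1, z2):
        p1, p2 = self.predictor(z1), self.predictor(z2)

        def neg_cos(p, z):
            # Negative cosine similarity against a stop-gradient target;
            # no other batch elements enter the loss.
            return -F.cosine_similarity(p, z.detach(), dim=1).mean()

        return 0.5 * (neg_cos(p1, z2) + neg_cos(p2, z1))

head = SimSiamHead()
loss = head(torch.randn(32, 128), torch.randn(32, 128))
```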

zhouchunpong commented 2 years ago


Hello, I used the hyperparameters in the README.md but could not reproduce the results for CIFAR-FS. Did you change any hyperparameters in your reproduced results?

Best and Thanks

yinguoweiOvO commented 2 years ago


I didn't change the hyperparameters. Maybe you should set the batch size to 64 or evaluate the model multiple times.
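Here, "evaluate the model multiple times" means averaging over repeated few-shot evaluation runs, since episode sampling is random. A sketch of reporting the mean accuracy with a 95% confidence interval, where `evaluate_once` stands in for the repository's evaluation routine:

```python
import numpy as np

def evaluate_many(evaluate_once, n_runs=5):
    """Average accuracy over repeated evaluation runs.

    evaluate_once: placeholder for the repo's evaluation routine; it should
    return one accuracy estimate over a fresh sample of few-shot episodes.
    """
    accs = np.array([evaluate_once() for _ in range(n_runs)])
    ci95 = 1.96 * accs.std(ddof=1) / np.sqrt(n_runs)   # 95% confidence interval
    return accs.mean(), ci95

# mean_acc, ci95 = evaluate_many(evaluate_once)
# print(f"accuracy: {mean_acc:.2f} +/- {ci95:.2f}")
```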

zhouchunpong commented 2 years ago

Got it, thank you very much! Best wishes!