Closed peiswang closed 1 year ago
Hello,
This is because ViT backbones work better with 224x224 inputs. With this change, TBH it becomes tricky to make an apples-to-apples comparison, so it would be better to redo the ResNet mini-ImageNet experiments with 224x224 as well. We recommend treating our PMF setting (i.e., introducing self-supervised pre-training into few-shot learning) as a new setting and only comparing methods within it.
Got it. Thank you very much for your reply.
Hi, thanks for your great work! It seems that the default input image size for mini-ImageNet in the released code is 224x224. Do all the results in the paper use 224x224 as the input size for mini-ImageNet? Thanks!