nupurkmr9 / S2M2_fewshot


About the rotation pretraining #23

Open corwinliu9669 opened 2 years ago

corwinliu9669 commented 2 years ago

I am trying to reproduce your results on miniImageNet and tieredImageNet. I can reproduce the S2M2 result on miniImageNet with the given rotation weights, but when I train the rotation task myself, the results cannot match the performance of the given rotation weights. I wonder whether you trained the rotation task with multiple GPUs, or whether there are other tricks. I also noticed that the fc dimension is 200 for miniImageNet, which seems odd; I think it should be 64, since miniImageNet has 64 base classes. Furthermore, I cannot find the rotation weights for tieredImageNet; could you kindly release them?
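
For context, here is a minimal sketch of what rotation-based self-supervised pretraining typically looks like in PyTorch: each image is rotated by 0/90/180/270 degrees, and an auxiliary 4-way head predicts the rotation jointly with the usual classification loss. This illustrates the general technique, not the repo's exact training code; `RotationPretrainModel`, `rotate_batch`, and `pretrain_step` are hypothetical names.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rotate_batch(x):
    """Make 4 copies of each image, rotated by 0/90/180/270 degrees,
    plus the matching 4-way rotation labels."""
    rotated = torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)])
    rot_labels = torch.arange(4, device=x.device).repeat_interleave(x.size(0))
    return rotated, rot_labels

class RotationPretrainModel(nn.Module):
    """Backbone with a class head plus an auxiliary 4-way rotation head.
    feat_dim and num_classes depend on the backbone and dataset."""
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone
        self.class_head = nn.Linear(feat_dim, num_classes)
        self.rot_head = nn.Linear(feat_dim, 4)

    def forward(self, x):
        feats = self.backbone(x)
        return self.class_head(feats), self.rot_head(feats)

def pretrain_step(model, optimizer, images, class_labels):
    x_rot, y_rot = rotate_batch(images)
    y_cls = class_labels.repeat(4)  # rotation does not change the class
    logits_cls, logits_rot = model(x_rot)
    # joint loss: standard classification + rotation self-supervision
    loss = F.cross_entropy(logits_cls, y_cls) + F.cross_entropy(logits_rot, y_rot)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```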

doris797 commented 2 years ago

Can you tell me how you fine-tune on the novel classes?

nupurkmr9 commented 2 years ago

Hi, for miniImageNet and tieredImageNet with rotation self-supervision, we train for 400 and 100 epochs respectively. The batch size is kept at 64, and the train_aug flag is enabled during backbone training. When evaluating on novel classes, only a linear network is trained on the backbone features. The "Few-shot evaluation" section of the README lists the commands for this: save_features.py saves the features, and test.py trains a linear network over these features. Hope this resolves the doubts regarding training and novel-class evaluation.
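
For reference, a minimal sketch of the novel-class evaluation step described above, assuming backbone features for a few-shot episode have already been extracted and saved (as save_features.py does): a small linear classifier is fit on the support-set features. The `fit_linear_classifier` name and the hyperparameters are illustrative, not the exact settings in test.py.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fit_linear_classifier(support_feats, support_labels, n_way,
                          epochs=100, lr=0.01):
    """Fit a linear classifier on frozen backbone features of the
    support set of one few-shot episode."""
    clf = nn.Linear(support_feats.size(1), n_way)
    opt = torch.optim.SGD(clf.parameters(), lr=lr,
                          momentum=0.9, weight_decay=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(clf(support_feats), support_labels)
        loss.backward()
        opt.step()
    return clf

# Episode accuracy is then measured on the query features:
# preds = fit_linear_classifier(s_feats, s_labels, n_way)(q_feats).argmax(1)
# acc = (preds == q_labels).float().mean()
```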

doris797 commented 2 years ago

Thank you very much! So is it correct to say that the novel-class classifier is trained with only a few samples, while the feature extractor is not?