Closed by xmengli 5 years ago
Thank you for your interest in our code base. I think that is a reasonable variance: each meta-training run can yield different results. I have experienced similar issues with many other few-shot learning algorithms. Also, different versions of packages may cause different behavior (e.g., Python 3 might use a different random seed than Python 2).
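For reference, if you want to reduce the run-to-run variance, a minimal sketch of pinning all the random seeds in a PyTorch training script might look like this (the function name and seed value are illustrative, not part of this repo):

```python
import random

import numpy as np
import torch

def set_seed(seed: int = 0) -> None:
    # Pin every RNG so repeated meta-training runs start from the same state.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Deterministic cuDNN kernels trade a little speed for reproducibility.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```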
I understand. Great work!
It sounds like you have solved your problem. Did you try the SVM head on miniImageNet without label smoothing? I get 60.57 ± 0.44 (1-shot) and 77.44 ± 0.33 (5-shot), both with and without label smoothing. The 1-shot result of 60.57 ± 0.44 seems too far from the one reported in the paper. Have you tried this? I am finding it really hard to locate the problem.
If you can help me, I would be very grateful!
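For context, by "label smoothing" I mean the usual label-smoothed cross-entropy applied to the classification head's logits; here is a minimal sketch of the variant I am using (the function name and the smoothing value of 0.1 are illustrative, not this repo's actual code):

```python
import torch
import torch.nn.functional as F

def smooth_cross_entropy(logits, targets, eps: float = 0.1):
    # Label-smoothed cross-entropy: the true class gets probability 1 - eps
    # and the remaining eps is spread uniformly over the other classes.
    num_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    smoothed = torch.full_like(log_probs, eps / (num_classes - 1))
    smoothed.scatter_(-1, targets.unsqueeze(-1), 1.0 - eps)
    return -(smoothed * log_probs).sum(dim=-1).mean()
```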
Hi, many thanks for sharing the code. I reproduced the results on miniImageNet under Python 3.7.2 with the following packages.
I trained on the train split only, selected the best model on the validation set, and tested it on the test set. The test accuracies are: 1-shot 61.34 ± 0.65 (your paper: 62.64 ± 0.61) and 5-shot 77.95 ± 0.47 (your paper: 78.63 ± 0.46).
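For reference, I report the ± margin as the usual 95% confidence interval over the test episodes; a minimal sketch of that computation (the function and variable names are mine, not the repo's evaluation code):

```python
import numpy as np

def mean_and_ci95(episode_accuracies):
    # Mean accuracy and 95% confidence interval over test episodes,
    # i.e. the "acc ± margin" numbers quoted above.
    accs = np.asarray(episode_accuracies, dtype=np.float64)
    mean = accs.mean()
    ci95 = 1.96 * accs.std(ddof=1) / np.sqrt(len(accs))
    return mean, ci95
```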
I am wondering whether this difference is reasonable for this task, or whether it is caused by my runtime environment.
Many thanks!!