Hi, thanks for the amazing and well-documented code!

I directly ran your code to try to reproduce the results on mini-ImageNet. However, compared to the accuracies reported in the paper, I observed higher performance for classifier-baseline but slightly lower performance for meta-baseline. Here are my results for two runs with random seeds 1 and 2. I wonder if you have observed similar variance across different runs? Thank you so much for your help!
Hi, thanks for your interest in our work!
If I remember correctly:
Did you run test_few_shot.py for 10 epochs (10*200 iterations) with the setting above?
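For reference, the reported number is the mean accuracy over all sampled test episodes with a 95% confidence interval, so the episode count (10*200 = 2000) matters for how much the final number fluctuates. A minimal sketch of that aggregation, using synthetic per-episode accuracies in place of real outputs from test_few_shot.py:

```python
import numpy as np

# Placeholder: per-episode accuracies for 10 * 200 = 2000 test episodes.
# Real values would come from test_few_shot.py; these are synthetic.
rng = np.random.default_rng(0)
ep_acc = rng.normal(loc=0.63, scale=0.10, size=2000).clip(0, 1)

mean = ep_acc.mean()
# 95% confidence interval of the mean (normal approximation)
ci95 = 1.96 * ep_acc.std(ddof=1) / np.sqrt(len(ep_acc))
print(f"acc: {mean * 100:.2f} +- {ci95 * 100:.2f} (n={len(ep_acc)})")
```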
Hi, thanks for your reply!
I evaluated classifier-baseline and started meta-baseline from max-va.pth. I ran test_few_shot.py with the default setting. I guess some variance is reasonable. I will try to compare the different methods across multiple runs (a rough sketch of that comparison is below).
Thanks again for the code!
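A minimal sketch of such a multi-run comparison, with the per-seed accuracies as placeholder numbers rather than real outputs:

```python
import numpy as np

# Placeholder per-seed test accuracies (%); real values would come from
# repeated test_few_shot.py runs with different random seeds.
runs = {
    "classifier-baseline": [63.1, 62.7, 63.4],
    "meta-baseline": [62.9, 63.5, 63.0],
}

for method, accs in runs.items():
    accs = np.asarray(accs)
    print(f"{method}: {accs.mean():.2f} +- {accs.std(ddof=1):.2f} "
          f"over {len(accs)} seeds")
```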
You're welcome.
By default, meta-baseline starts from epoch-last.pth of classifier-baseline (this is the setting in train_meta_mini.yaml); you may also try that.
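If it helps, here is a minimal sketch of switching that initialization to max-va.pth by rewriting a copy of the config. The key name 'load_encoder' is an assumption from memory, not verified; check train_meta_mini.yaml for the actual one.

```python
import yaml

# Sketch only: rewrite the encoder-init path in a copy of the meta config.
# The 'load_encoder' key and the checkpoint paths are assumptions.
with open("configs/train_meta_mini.yaml") as f:
    cfg = yaml.safe_load(f)

cfg["load_encoder"] = cfg["load_encoder"].replace("epoch-last.pth", "max-va.pth")

with open("configs/train_meta_mini_maxva.yaml", "w") as f:
    yaml.safe_dump(cfg, f)
```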
Hello, great work! May I ask why you don't use max-va.pth to evaluate classifier-baseline?
Thanks for the question. If I remember correctly, it is because meta-baseline starts from epoch-last.pth of classifier-baseline (which works slightly better than starting from max-va.pth), so we report epoch-last.pth of classifier-baseline as well, to keep the comparison clean and isolate the effect of meta-learning (our main method is meta-baseline). In any case, the performance gap between max-va.pth and epoch-last.pth of classifier-baseline is not very significant.
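One way to check whether such a checkpoint gap is significant is to evaluate both checkpoints on the same sampled episodes and look at the paired difference. A minimal sketch with synthetic placeholder numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # test episodes

# Placeholder per-episode accuracies for the two checkpoints, evaluated on
# the same episodes (synthetic; real values would come from the test script).
acc_last = rng.normal(0.632, 0.10, n).clip(0, 1)
acc_maxva = acc_last + rng.normal(0.001, 0.02, n)  # nearly identical checkpoint

diff = acc_maxva - acc_last
ci95 = 1.96 * diff.std(ddof=1) / np.sqrt(n)
print(f"gap: {diff.mean() * 100:+.2f} +- {ci95 * 100:.2f} (95% CI)")
```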
Thank you very much!
You're welcome.