Open BoyuanJiang opened 7 years ago
Thanks for your review. You did nothing wrong. I am still looking for the reason for this behaviour. I have updated the code to support 5-shot learning with miniImagenet, but I still get low accuracy with 1-shot and 5-shot on miniImagenet; with the Omniglot dataset it works fine. I will look into it as soon as possible. If you find any possible fix for the code, just let me know.
You can change n_samplesNShot to 15, and change selected_class_meta_test in def create_episodes to selected_classes, to stay consistent with the hyperparameters in the meta-learning LSTM code. You will then get val and test accuracy of about 55% with no FCE, 5-way, 5-shot on miniImagenet.
Hi @ZUNJI
Thanks for the tip. :)
But I have a question: in the create_episodes routine, one class is used as the 'target class'. If we replace selected_class_meta_test with selected_classes, that would remove the use of one class as the target class. I am a bit confused. Can you please elaborate? Thanks. :)
Yes, you are right. This procedure means the 'target class' is the same as a support class; you can read the paper for details.
Certainly you should ensure that the target images are not among the support images. So I changed n_samplesNShot to 15, making n_samples 20: 5 samples per class are appended to the support set and the rest are appended to the target set.
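A minimal sketch of that disjoint support/target split, assuming each class has 20 available samples (the function and argument names here are illustrative, not taken from the repo):

```python
import numpy as np

def split_support_query(class_indices, n_shot=5, n_query=15, rng=None):
    """Split one class's sample indices into disjoint support and query sets.

    class_indices: 1-D array of the 20 sample indices available for the class.
    """
    rng = rng or np.random.default_rng()
    perm = rng.permutation(class_indices)
    support = perm[:n_shot]                 # first 5 go to the support set
    query = perm[n_shot:n_shot + n_query]   # remaining 15 go to the query/target set
    # target images must never appear among the support images
    assert len(set(support) & set(query)) == 0
    return support, query
```

Shuffling once and slicing guarantees the two sets cannot overlap, which is the property being discussed above.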
Sorry, it is not the param n_samples but the param number_of_samples in def create_episodes:

```python
for c in selected_classes:
    number_of_samples = self.samples_per_class
    number_of_samples += self.n_samplesNShot
```
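Putting the pieces of this discussion together, a hedged sketch of what the episode builder would look like with that change (a standalone rewrite under my own names, not the repo's exact code):

```python
import numpy as np

def create_episode(data_by_class, n_way=5, n_shot=5, n_query=15, rng=None):
    """Build one N-way episode with disjoint support and query sets.

    data_by_class: dict mapping class id -> array of samples for that class.
    n_shot corresponds to samples_per_class, n_query to n_samplesNShot.
    """
    rng = rng or np.random.default_rng()
    selected_classes = rng.choice(list(data_by_class), size=n_way, replace=False)
    support_x, support_y, query_x, query_y = [], [], [], []
    for label, c in enumerate(selected_classes):
        number_of_samples = n_shot + n_query          # samples_per_class + n_samplesNShot
        samples = rng.permutation(data_by_class[c])[:number_of_samples]
        support_x.extend(samples[:n_shot])            # 5 support samples per class
        support_y.extend([label] * n_shot)
        query_x.extend(samples[n_shot:])              # 15 query samples per class
        query_y.extend([label] * n_query)
    return (np.stack(support_x), np.array(support_y),
            np.stack(query_x), np.array(query_y))
```

With n_way=5, n_shot=5, n_query=15 this yields a 25-sample support set and a 75-sample query set per episode, every class contributing query targets rather than a single held-out class.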
The code randomly uses one class of the support set as the query label. So when we set n_samplesNShot to 15, should the number of classes in selected_class_meta_test be changed? And another question about having more samples in the query set: it is reasonable that each class still has 1 or 5 images in the support set for few-shot learning, but more query samples mean we can compute more loss terms. Does that violate the standard few-shot setting? In other words, how should the size of the query set be determined? Thank you very much!
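On the second question, my understanding is that the query-set size does not change the N-shot setting: only the support examples condition the classifier, and the loss is averaged over the query samples, so more queries just give a lower-variance estimate. A toy illustration with made-up embeddings (not the repo's model):

```python
import numpy as np

n_way, n_query, dim = 5, 15, 64
rng = np.random.default_rng(0)
support_emb = rng.standard_normal((n_way, dim))      # one embedding per support class
query_emb = rng.standard_normal((n_way * n_query, dim))
query_y = np.repeat(np.arange(n_way), n_query)       # 15 query labels per class

logits = query_emb @ support_emb.T                   # similarity scores, shape (75, 5)
logits -= logits.max(axis=1, keepdims=True)          # stabilise the softmax
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
# cross-entropy averaged over all 75 query samples: adding queries changes the
# number of terms in the mean, not the amount of labelled support data
loss = -np.log(probs[np.arange(len(query_y)), query_y]).mean()
```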
First, many thanks for your PyTorch implementation of Matching Networks. I followed your setup to run the miniImagenet example; the training accuracy reaches about 100%, but the val and test accuracy is about 40%. In the original paper it is about 57%. So I wonder whether I ran your code incorrectly, or could you tell me your result on miniImagenet?
These are my logs: