I used to believe that in k-way n-shot few-shot learning, k and n (the number of classes and the number of samples per class, respectively) must be the same in the train and test phases. But you use different numbers in the train and test phases (60-way for training and 5-way for testing):
Episode composition A straightforward way to construct episodes, used in Vinyals et al. [29] and
Ravi and Larochelle [22], is to choose Nc classes and NS support points per class in order to match
the expected situation at test-time. That is, if we expect at test-time to perform 5-way classification
and 1-shot learning, then training episodes could be comprised of Nc = 5, NS = 1. We have found,
however, that it can be extremely beneficial to train with a higher Nc, or “way”, than will be used
at test-time. In our experiments, we tune the training Nc on a held-out validation set. Another
consideration is whether to match NS, or “shot”, at train and test-time. For prototypical networks,
we found that it is usually best to train and test with the same “shot” number.
Are we allowed to do so?
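For context, here is a rough sketch of how I understand the episode sampling would look with a higher training "way" than test "way". The names (`sample_episode`, `examples_by_class`, the 60/5/1 numbers) are my own illustration, not code from the paper:

```python
import random

def sample_episode(examples_by_class, n_way, n_support, n_query):
    """Sample one episode: n_way classes, n_support + n_query examples per class.

    examples_by_class: dict mapping class label -> list of examples.
    """
    classes = random.sample(list(examples_by_class), n_way)
    support, query = [], []
    for label in classes:
        examples = random.sample(examples_by_class[label], n_support + n_query)
        support += [(x, label) for x in examples[:n_support]]
        query += [(x, label) for x in examples[n_support:]]
    return support, query

# Training episodes: higher "way" (e.g. 60-way), same "shot" (1)
# train_support, train_query = sample_episode(train_data, n_way=60, n_support=1, n_query=5)

# Test episodes: match the expected evaluation setting (5-way 1-shot)
# test_support, test_query = sample_episode(test_data, n_way=5, n_support=1, n_query=5)
```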