Closed: won-bae closed this issue 3 years ago
Hello won-bae,
As we discussed in our journal paper, we did not split the fully-labeled samples in the conference version. That means the FSL models used 100% of the train-fullsup split for model training. In the journal version, on the other hand, we held out 20% of the train-fullsup split as a validation set to tune the hyperparameters of the FSL models.
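The per-class 80/20 split described above could be sketched as follows. This is a minimal illustration under my own assumptions; the function name, the dictionary layout, and the seeding are hypothetical and are not the repository's actual code:

```python
import random

def split_fullsup(samples_per_class, val_ratio=0.2, seed=0):
    """Split the fully-labeled train-fullsup samples into train/val per class.

    `samples_per_class` maps a class name to a list of sample ids.
    `val_ratio=0.2` follows the 20% validation portion mentioned above;
    everything else here is an illustrative sketch.
    """
    rng = random.Random(seed)
    train, val = {}, {}
    for cls, ids in samples_per_class.items():
        ids = list(ids)
        rng.shuffle(ids)  # randomize before taking the validation slice
        n_val = max(1, int(round(len(ids) * val_ratio)))
        val[cls] = ids[:n_val]
        train[cls] = ids[n_val:]
    return train, val

# Example: 10 fully-supervised samples per class (the ImageNet setting)
train, val = split_fullsup({"cls_a": [f"a{i}" for i in range(10)]})
```

With 10 samples per class this yields 8 training and 2 validation samples per class; splitting per class (rather than globally) keeps every class represented in both portions.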
I hope this answered your question.
Oh, I am sorry. I missed the journal paper. Thank you for the clarification! Closing the issue.
Dear authors,
While trying to reproduce the reported results of the few-shot learning (FSL) baseline, I came up with a question. According to the paper, FSL used (10, 5, 5) fully-labeled samples per class for ImageNet, CUB, and OpenImages, respectively, and the same amount of supervision was applied to the CAM methods. For FSL, I believe the number of samples per class, e.g., 10 for ImageNet, is the sum of the training and validation samples, since FSL also needs some amount of validation data. So my question is: for FSL, how did you split the training and validation sets within the specified number of samples per class (10, 5, 5)?
Hope my question is clear to you. Looking forward to hearing from you. Thank you!