clovaai / wsolevaluation

Evaluating Weakly Supervised Object Localization Methods Right (CVPR 2020)
MIT License

Data split for FSL #41

Closed won-bae closed 3 years ago

won-bae commented 3 years ago

Dear authors,

While trying to reproduce the reported results of the few-shot learning baseline (FSL), I came up with a question. According to the paper, FSL used (10, 5, 5) fully supervised samples per class for ImageNet, CUB, and OpenImages, respectively, and the same amount of supervision was applied to the CAM methods. For FSL, the number of samples per class (e.g., 10 for ImageNet) is, I believe, the sum of the train and val samples, since FSL also needs some validation data. So my question is: for FSL, how did you split the specified samples per class (10, 5, 5) into training and validation sets?

Hope my question is clear to you. Looking forward to hearing from you. Thank you!

junsukchoe commented 3 years ago

Hello won-bae,

As we discussed in our journal paper, we did not split the fully-labeled samples in the conference version. That means the FSL models used 100% of the train-fullsup split for model training. In the journal version, on the other hand, we performed validation with 20% of the train-fullsup split to tune the hyperparameters of the FSL models.
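For anyone reproducing this, a minimal sketch of such an 80/20 per-class split might look like the following. This is not the authors' code; the function name, the per-class stratification, and the fixed seed are all assumptions for illustration.

```python
import random
from collections import defaultdict

def split_fullsup(samples, val_frac=0.2, seed=0):
    """Split (image_path, class_id) pairs into train/val lists,
    holding out val_frac of the samples within each class.

    Hypothetical helper; the actual split used in the journal
    version may differ in ordering and seeding.
    """
    by_class = defaultdict(list)
    for path, cls in samples:
        by_class[cls].append(path)

    rng = random.Random(seed)
    train, val = [], []
    for cls, paths in sorted(by_class.items()):
        rng.shuffle(paths)
        # Hold out at least one sample per class for validation.
        n_val = max(1, round(len(paths) * val_frac))
        val += [(p, cls) for p in paths[:n_val]]
        train += [(p, cls) for p in paths[n_val:]]
    return train, val
```

With 5 samples per class (as for CUB and OpenImages), this would hold out 1 sample per class for validation and train on the remaining 4.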

I hope this answered your question.

won-bae commented 3 years ago

Oh, I am sorry, I missed the journal paper. Thank you for the clarification! Closing the issue.