floodsung / LearningToCompare_FSL

PyTorch code for CVPR 2018 paper: Learning to Compare: Relation Network for Few-Shot Learning (Few-Shot Learning part)
MIT License

Select the model based on testing accuracy? #15

Open PatrickZH opened 5 years ago

PatrickZH commented 5 years ago

Thank you for providing the code! I have a concern about the model selection in your miniimagenet_train_few_shot.py, line 260: it seems the best model is selected as the one with the best *test* accuracy (not validation accuracy)?

ehsanmok commented 5 years ago

It's based on the meta-validation set: see 1, 2, 3

flexibility2 commented 5 years ago

@ehsanmok Hi, however, in https://github.com/floodsung/LearningToCompare_FSL/blob/master/miniimagenet/miniimagenet_train_few_shot.py#L15 you only use "task_generator_test", not "task_generator"…

ehsanmok commented 5 years ago

Right! That's not good code. It's the third mistake, along with not calling model.eval() and using the same normalization for Omniglot and mini-ImageNet!
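For context on those other two issues, here is a hedged sketch of the fixes: call model.eval() before evaluation, and give mini-ImageNet its own normalization stats rather than sharing Omniglot's. The stats below are the common ImageNet values, an assumption on my part, not necessarily what the paper used, and the tiny network is only a stand-in for the embedding/relation modules:

```python
import torch
import torch.nn as nn

# Dataset-specific normalization stats (assumed ImageNet values for
# mini-ImageNet; grayscale Omniglot should use its own, separate stats).
MINI_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
MINI_STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def normalize(batch):
    """Normalize an NCHW RGB batch with mini-ImageNet-specific stats."""
    return (batch - MINI_MEAN) / MINI_STD

# Toy conv block with BatchNorm, standing in for the real model.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU())

model.eval()              # BatchNorm uses running stats, Dropout is disabled
with torch.no_grad():     # no gradient tracking during evaluation
    out = model(normalize(torch.rand(2, 3, 84, 84)))
```

Without model.eval(), BatchNorm keeps using per-batch statistics at test time, which inflates few-shot accuracy because each episode's query batch leaks information into the normalization.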

ehsanmok commented 5 years ago

However, it's done correctly for one-shot here. Given the copy-paste style of the code, maybe it was changed at training time and the released version wasn't carefully cleaned up!

xyxxmb commented 5 years ago

@ehsanmok Hello! I think that although the code uses "task_generator_test" instead of "task_generator" in miniimagenet_train_few_shot.py, it doesn't influence the result of model training, because "metatest_folders" is only used for monitoring generalization performance; it doesn't participate in the training process itself.

hyyuan123 commented 5 years ago

I would like to ask: is there the same model-selection problem for Omniglot? Should the model be chosen based on training accuracy instead?

hyyuan123 commented 5 years ago

When selecting a model, the test data is supposed to be unknown, so test accuracy cannot be used to select the model.
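The point made throughout this thread can be sketched in a few lines. This is a minimal, hypothetical illustration (the names `evaluate` and `select_best` are not from the repo): checkpoint selection touches only the meta-validation split, and the meta-test split is evaluated exactly once, after selection.

```python
# Hypothetical sketch: select on meta-validation accuracy only;
# consult the meta-test split exactly once, after selection.

def evaluate(model, tasks):
    """Mean accuracy of `model` (a callable) over (input, label) pairs."""
    correct = sum(1 for x, y in tasks if model(x) == y)
    return correct / len(tasks)

def select_best(checkpoints, val_tasks):
    """Pick the checkpoint with the highest *validation* accuracy."""
    return max(checkpoints, key=lambda m: evaluate(m, val_tasks))

if __name__ == "__main__":
    # Toy "checkpoints": callables standing in for saved model states.
    always_one = lambda x: 1
    identity = lambda x: x
    val_tasks = [(0, 0), (1, 1), (2, 2)]
    test_tasks = [(3, 3), (4, 4)]
    best = select_best([always_one, identity], val_tasks)
    print(evaluate(best, test_tasks))  # test accuracy is reported once, at the end
```

Selecting the checkpoint that maximizes meta-test accuracy instead, as line 260 of miniimagenet_train_few_shot.py appears to do, leaks the test split into model selection and optimistically biases the reported accuracy.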