facebookresearch / fewshotDatasetDesign

The paper studies the problem of learning to recognize a new class of objects from a very small number of labeled images, known as few-shot learning. Previous work in the literature focused on designing new algorithms that allow models to generalize to new, unseen classes. In this work, we consider the impact of the dataset we train on, and experiment with dataset manipulations to see which trade-offs matter in the design of a dataset aimed at few-shot learning.

Some problems about datasets! Thanks. #1

Open indussky8 opened 4 years ago

indussky8 commented 4 years ago

Hello, Othman! I am reproducing your code, but I met some problems. First, are lines 19-20 of datasets.py correct? I have not found the miniINTools folder. Second, I have not found the datasets01_101 folder referenced in create_IN6k_dataset.ipynb. I am also confused about the creation of the IN6k dataset. Could you explain it again? Looking forward to your reply. Thanks!

sbaio commented 3 years ago

Hello Xingxing,

For your first question, I have updated the instructions for downloading the miniImagenet dataset.

For your second question, I provided the notebook create_IN6k_dataset.ipynb, which builds IN6k from the IN22k classes. You can read there about the steps I followed to create IN6k. Alternatively, you can directly use the filenames listed in IN6k.json and IN6k_cub.json.
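If you go the second route, a minimal sketch of consuming such a filename list might look like the following. Note the exact structure of IN6k.json is an assumption here (a mapping from class name to a list of image filenames); the toy file written below only stands in for it:

```python
import json
from pathlib import Path

# Toy stand-in for IN6k.json -- the real file's structure is an
# ASSUMPTION here: {class_name: [image filenames]}.
toy = {
    "n01440764": ["n01440764_18.JPEG", "n01440764_36.JPEG"],
    "n01443537": ["n01443537_2.JPEG"],
}
Path("IN6k_toy.json").write_text(json.dumps(toy))

def load_filename_list(path):
    """Flatten a {class: [filenames]} mapping into (filename, class) pairs."""
    mapping = json.loads(Path(path).read_text())
    return [(fn, cls) for cls, files in mapping.items() for fn in files]

samples = load_filename_list("IN6k_toy.json")
print(len(samples))  # 3 (filename, class) pairs from the toy file
```

From such a list of pairs you could then resolve each filename against your local ImageNet copy and build a standard image/label dataset.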

Thank you for your efforts

indussky8 commented 3 years ago

Thank you for your kind reply. I have another high-level question about the problem solved in this paper.

Your selection depends on the test data, so we would need to perform selection again whenever the test task changes. Is that realistic or valuable in practice? I noticed that there have been several works on transductive FSL. Could I say that the method in your work also belongs to transductive FSL?

Thanks!

sbaio commented 3 years ago

I would not say that the paper deals with transductive few-shot learning. In fact, we explore patterns of how training-data selection affects test performance, and we check how those patterns generalize across different benchmarks. Of course, if you tuned your training set specifically for a given benchmark, that would be transductive, but that is not our aim!

indussky8 commented 3 years ago

So the test data are only used during training-data selection. When performing few-shot classification with ProtoNet or MatchingNet, the test data are not used to finetune the parameters, so it is not transductive. Is that right? If so, when the test data change, do we need to design the training set again? Thanks!
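The "no finetuning at test time" point can be made concrete with a sketch of standard prototypical-network inference (not this repo's code): class prototypes are means of support embeddings, and queries are assigned to the nearest prototype, with no parameter updates. The 2-D embeddings below are toy illustrations standing in for a frozen backbone's output:

```python
import numpy as np

def proto_classify(support, support_labels, query):
    """Prototypical-network inference: no parameter updates at test time.

    support: (n, d) embedded support examples; query: (m, d) embedded queries.
    Embeddings would normally come from a frozen, pre-trained backbone.
    """
    classes = np.unique(support_labels)
    # One prototype per class: the mean of its support embeddings.
    protos = np.stack([support[support_labels == c].mean(axis=0) for c in classes])
    # Assign each query to the nearest prototype (squared Euclidean distance).
    d2 = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[d2.argmin(axis=1)]

# Toy 2-way 2-shot episode in a 2-D embedding space (illustrative only).
sup = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]])
lab = np.array([0, 0, 1, 1])
q = np.array([[0.05, 0.05], [4.9, 5.0]])
print(proto_classify(sup, lab, q))  # [0 1]
```

Since the only test-time computation is this nearest-prototype assignment, using the test benchmark earlier to guide training-set selection is a separate design step, not transduction inside the classifier.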