hirokatsukataoka16 / FractalDB-Pretrained-ResNet-PyTorch

Pre-training without Natural Images (ACCV 2020 Best Paper Honorable Mention Award)

Finetuning on PASCAL and Omniglot #2

Open greeneggsandyaml opened 3 years ago

greeneggsandyaml commented 3 years ago

Hello @hirokatsukataoka16,

I left this question on #1, but since that issue is closed I realized you may have missed it. My question is about finetuning -- how did you prepare the Pascal VOC and Omniglot datasets? My understanding is that Pascal VOC classification is usually a multi-label setup and Omniglot is usually a few-shot learning setup. Did you convert them into single-label classification tasks, and if so, how exactly did you go about it? I don't believe there is any information about this in the paper.

Thank you so much again for the paper and repo! It's a really nice idea.

hirokatsukataoka16 commented 3 years ago

Hi @greeneggsandyaml

Sorry for the late reply. I will do my best to answer your question as follows.

- Pascal VOC: The classification setting is derived from the detection annotations. Each ground-truth bounding box is simply cropped out as an input image and paired with its object label.
- Omniglot: I implemented a train/val setting for classification over all 1,623 characters. Each category contains around 20 images.
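For readers who want to reproduce this preparation, below is a minimal sketch of the Pascal VOC cropping step described above. It is not the authors' exact script; the paths `VOC_ROOT` and `OUT_ROOT` and the ImageFolder-style output layout are assumptions for illustration. The script parses each detection annotation, crops every ground-truth box, and saves the crop under a folder named after its class label.

```python
# Sketch (not the authors' script): turn Pascal VOC detection annotations into a
# single-label classification dataset by cropping each bounding box into its own
# image, stored in an ImageFolder-style directory tree.
import os
import xml.etree.ElementTree as ET
from PIL import Image

VOC_ROOT = "VOCdevkit/VOC2007"   # assumed dataset location
OUT_ROOT = "voc_cls_crops"       # assumed output location (one sub-folder per class)

ann_dir = os.path.join(VOC_ROOT, "Annotations")
img_dir = os.path.join(VOC_ROOT, "JPEGImages")

for xml_name in sorted(os.listdir(ann_dir)):
    tree = ET.parse(os.path.join(ann_dir, xml_name))
    root = tree.getroot()
    image = Image.open(os.path.join(img_dir, root.find("filename").text)).convert("RGB")

    for i, obj in enumerate(root.iter("object")):
        label = obj.find("name").text          # object class, e.g. "dog"
        box = obj.find("bndbox")
        xmin = int(float(box.find("xmin").text))
        ymin = int(float(box.find("ymin").text))
        xmax = int(float(box.find("xmax").text))
        ymax = int(float(box.find("ymax").text))

        # Crop the box and save it under its class folder.
        crop = image.crop((xmin, ymin, xmax, ymax))
        out_dir = os.path.join(OUT_ROOT, label)
        os.makedirs(out_dir, exist_ok=True)
        crop.save(os.path.join(out_dir, f"{os.path.splitext(xml_name)[0]}_{i}.jpg"))
```

And a similar sketch for Omniglot, assuming the flat 1,623-way classification described above can be built from torchvision's two Omniglot splits (the 964-character "background" set plus the 659-character "evaluation" set) followed by a random train/val split. The 80/20 ratio and the 224x224 resize are assumptions for illustration, not values from the paper.

```python
# Sketch (an assumption, not the authors' script): treat Omniglot as a flat
# 1,623-class problem by concatenating torchvision's background and evaluation
# splits with offset labels, then splitting randomly into train/val subsets.
import torch
from torch.utils.data import ConcatDataset, random_split
from torchvision import datasets, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),               # assumed input size for the ResNet
    transforms.Grayscale(num_output_channels=3), # replicate to 3 channels
    transforms.ToTensor(),
])

background = datasets.Omniglot(root="data", background=True,
                               download=True, transform=tfm)
evaluation = datasets.Omniglot(root="data", background=False,
                               download=True, transform=tfm,
                               target_transform=lambda t: t + 964)  # offset past the 964 background classes

full = ConcatDataset([background, evaluation])   # 964 + 659 = 1,623 classes
n_val = int(0.2 * len(full))                     # assumed 80/20 train/val split
train_set, val_set = random_split(full, [len(full) - n_val, n_val],
                                  generator=torch.Generator().manual_seed(0))
```

Both outputs can then be fed to a standard PyTorch DataLoader and fine-tuned with the FractalDB-pretrained ResNet weights from this repository.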