chullhwan-song / Reading-Paper


What makes ImageNet good for transfer learning? #104

Open chullhwan-song opened 5 years ago

chullhwan-song commented 5 years ago

https://arxiv.org/abs/1608.08614 https://github.com/minyoungg/wmigftl

chullhwan-song commented 5 years ago

What? The aim of the paper is to find out what determines how much transfer learning (fine-tuning) actually improves performance.
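The transfer-learning setup the paper studies can be sketched in miniature: freeze a "pretrained" feature extractor and train only a new classification head on the target task. This is a dependency-free toy, not the paper's actual pipeline; the random-projection "pretrained" weights and toy data are stand-ins for a real ImageNet-pretrained network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an ImageNet-pretrained network: a fixed (frozen)
# feature extractor. In real fine-tuning this would be e.g. a CNN with
# its convolutional layers frozen; a random projection keeps the sketch
# self-contained. All names/sizes here are illustrative.
D_IN, D_FEAT, N_CLASSES = 20, 8, 3
W_frozen = rng.normal(size=(D_IN, D_FEAT)) / np.sqrt(D_IN)  # never updated

def features(x):
    return np.maximum(x @ W_frozen, 0.0)  # frozen ReLU features

# Toy target-task data.
X = rng.normal(size=(90, D_IN))
y = rng.integers(0, N_CLASSES, size=90)

# Trainable head: softmax regression on top of the frozen features.
W_head = np.zeros((D_FEAT, N_CLASSES))

def loss_and_grad(W):
    F = features(X)
    logits = F @ W
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    loss = -np.log(p[np.arange(len(y)), y]).mean()   # cross-entropy
    p[np.arange(len(y)), y] -= 1.0                   # dL/dlogits
    return loss, F.T @ p / len(y)

loss0, _ = loss_and_grad(W_head)                     # = ln(3): uniform predictions
for _ in range(300):                                 # fine-tune only the head
    _, g = loss_and_grad(W_head)
    W_head -= 0.5 * g
loss1, _ = loss_and_grad(W_head)
```

Only `W_head` moves during training; `W_frozen` plays the role of the transferred representation, which is exactly the quantity whose quality the paper's questions below probe.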

3. How important is fine-grained recognition for learning good features for transfer learning?

4. Does pre-training on coarse classes produce features capable of fine-grained recognition (and vice versa) on ImageNet itself?

5. Given the same budget of pre-training images, should we have more classes or more images per class? > Is having more classes better?


6. Is more data always helpful?

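Question 5's fixed-budget trade-off can be made concrete by subsampling one pool two ways: keep many classes with few images each, or few classes with many images each. This is a hypothetical pool just for illustration (the class/image counts are made up, not the paper's actual splits).

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical pretraining pool: 1000 classes x 100 images,
# each image represented as a (class_id, image_id) pair.
pool = [(c, i) for c in range(1000) for i in range(100)]

by_class = defaultdict(list)
for c, i in pool:
    by_class[c].append((c, i))

def subsample(by_class, n_classes, per_class):
    """Keep `n_classes` random classes and `per_class` random images from each."""
    classes = random.sample(sorted(by_class), n_classes)
    return [img for c in classes for img in random.sample(by_class[c], per_class)]

BUDGET = 10_000  # same total number of pretraining images in both splits

many_classes = subsample(by_class, n_classes=1000, per_class=10)   # breadth
many_images  = subsample(by_class, n_classes=100,  per_class=100)  # depth

assert len(many_classes) == len(many_images) == BUDGET
```

Both subsets cost the same labeling budget; the paper's experiment is to pretrain on each split and compare transfer performance.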

Conclusions

- We know that we should use at least 500k images and at least 127 classes.
- It will probably work well to skip unrelated classes.
- We also know that labeled pretraining seems to outperform other methods.

Summary