Closed philohistoria closed 4 years ago
Hi @philohistoria, thanks for your interest in this work. I've never managed to get really good results when training deepcluster on small datasets :/. My two cents: (i) training is too unstable on small datasets (the last layer gets re-set too often). We fixed this with DeepCluster-v2: https://github.com/facebookresearch/swav/blob/master/main_deepclusterv2.py. (ii) the data augmentation is not strong enough. Overall, performance relies heavily on data augmentation and random resized crops, so if the resolution of your images is too small, that may degrade the augmentation quality.
Hope that helps
Hi, thanks for the great work!
I have a smaller dataset of 50K images. I used your VGG pretrained on ImageNet to extract features for my 50K images and then clustered the extracted features. It works reasonably well.
My question is how I can continue refining the learned representations on my own 50K images. I replaced the training data with my own 50K images and resumed training from your pre-trained VGG model, but the performance quickly deteriorates after several epochs. I guess this is because I need to freeze the first several layers and perhaps only train the later layers, the same way fine-tuning is done in supervised learning tasks?
Please let me know whether my intuition is correct; any suggestion on the best way to adapt your pre-trained models to a smaller dataset is welcome.