Closed priyamdey closed 3 years ago
Hi @priyamdey, models are fine-tuned with src/snn_fine_tune.py and evaluated with snn_eval.py
Ah ok. Thanks for the clarification!
Sorry, maybe I got a bit confused here. In the appendix of the paper, under the "Fine-tuning details" section, the following is mentioned: "Following [1], we fine-tune a linear classifier from the first layer of the projection head in the pretrained encoder fθ, and initialize the weights of the linear classifier to zero. Specifically, we simultaneously fine-tune the encoder/classifier weights by optimizing a supervised cross-entropy loss on the small set of available labeled samples."
As you mentioned, the src/snn_fine_tune.py script is used for fine-tuning. However, I see the suncet loss and a 3-layer MLP projection head being used in that script. src/fine_tune.py has the right loss (CE) and MLP (one layer followed by a linear classifier) based on what is mentioned in the appendix.
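For reference, the recipe quoted from the appendix (a zero-initialized linear classifier on encoder features, trained with supervised cross-entropy) can be sketched roughly as below. This is a minimal numpy illustration, not the repo's actual code: the encoder is assumed frozen for brevity, and the shapes, learning rate, and step count are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fine_tune_linear(feats, labels, num_classes, lr=0.1, steps=200):
    """Fit a zero-initialized linear classifier with cross-entropy.

    feats: (n, d) encoder features; labels: (n,) integer class labels.
    """
    n, d = feats.shape
    W = np.zeros((d, num_classes))  # zero init, as described in the appendix
    b = np.zeros(num_classes)
    onehot = np.eye(num_classes)[labels]
    for _ in range(steps):
        probs = softmax(feats @ W + b)
        grad = (probs - onehot) / n      # dL/dlogits for the mean CE loss
        W -= lr * (feats.T @ grad)       # gradient step on weights
        b -= lr * grad.sum(axis=0)       # gradient step on bias
    return W, b

# Toy usage on two synthetic, well-separated feature clusters
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 0.1, (20, 4)) + e for e in np.eye(4)[:2]])
labels = np.array([0] * 20 + [1] * 20)
W, b = fine_tune_linear(feats, labels, num_classes=2)
preds = softmax(feats @ W + b).argmax(axis=1)
```

In the paper the encoder weights are updated jointly with the classifier; the frozen-encoder variant here just keeps the sketch short.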
Yes, that's true for ImageNet, but for CIFAR10 we just do nearest-neighbours classification/fine-tuning (see Appendix C).
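For anyone else landing here, the soft nearest-neighbour classification idea can be sketched as follows. This is a hedged illustration, not the repo's snn_eval.py: a query embedding is classified by a similarity-weighted vote over labeled support embeddings, and the temperature `tau` is an assumed hyperparameter.

```python
import numpy as np

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def snn_classify(query, support, support_labels, num_classes, tau=0.1):
    """Soft nearest-neighbour prediction over a labeled support set."""
    q = l2_normalize(query)
    s = l2_normalize(support)
    sims = q @ s.T / tau                        # temperature-scaled cosine similarities
    sims = sims - sims.max(axis=1, keepdims=True)  # stability before exponentiating
    w = np.exp(sims)
    w /= w.sum(axis=1, keepdims=True)           # softmax over support samples
    onehot = np.eye(num_classes)[support_labels]
    probs = w @ onehot                          # aggregate weights per class
    return probs.argmax(axis=1)

# Toy usage: two labeled support points per class, two queries
support = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
support_labels = np.array([0, 0, 1, 1])
query = np.array([[0.95, 0.05], [0.05, 0.95]])
preds = snn_classify(query, support, support_labels, num_classes=2)
# preds → [0, 1]
```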
Oh I see. I missed that. Now it's clear. Thank you for pointing that out!
Hi! I was trying to fine-tune a pretrained model on CIFAR10 subsets. Regarding this, I have two questions:
Thanks!