facebookresearch / msn

Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141)

About the 1% In1k semi-sup evaluation #9

Closed merlinarer closed 2 years ago

merlinarer commented 2 years ago

Hello, thanks for sharing. I'm a little confused about your 1% IN1k semi-supervised evaluation. You say in the paper that the results come from logistic regression on the extracted representations. However, with the same ViT, I found that iBOT's version of this evaluation comes from end-to-end full fine-tuning (see here), and SwAV et al. fine-tuned the entire ResNet-50 encoder.

MidoAssran commented 2 years ago

Hi @merlinarer,

Thanks for your message. Yes, one common evaluation is end-to-end fine-tuning with 100% labels. However, with 1% labels, iBOT achieves the best performance with logistic regression on the extracted (frozen) representations.

See Table 12 in their paper comparing fine-tuning to linear probing on 1% labels.
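For concreteness, here is a minimal sketch of the frozen-feature protocol being described: extract representations from a frozen pretrained encoder, then fit a logistic regression classifier on the 1% labeled split. This is an illustrative sketch, not the repo's exact evaluation script; `encoder`, `train_loader_1pct`, and `val_loader` are hypothetical handles, and scikit-learn stands in for whatever solver the actual evaluation uses.

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression

@torch.no_grad()
def extract_features(encoder, loader, device="cuda"):
    # The encoder stays frozen: eval mode, no gradients, no updates.
    # Assumed to return one [B, D] feature vector per image.
    encoder.eval()
    feats, labels = [], []
    for images, targets in loader:
        z = encoder(images.to(device))
        feats.append(z.cpu().numpy())
        labels.append(targets.numpy())
    return np.concatenate(feats), np.concatenate(labels)

# 1% labeled split for training the classifier; full val set for scoring.
X_train, y_train = extract_features(encoder, train_loader_1pct)
X_val, y_val = extract_features(encoder, val_loader)

# L2-regularized logistic regression on the frozen representations;
# only this linear classifier is trained.
clf = LogisticRegression(max_iter=1000, C=1.0)
clf.fit(X_train, y_train)
print("top-1 acc:", clf.score(X_val, y_val))
```

The contrast with end-to-end fine-tuning is just whether the encoder's weights are updated: here they never are, which is what makes this a linear probe of the frozen representations.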