keyu-tian / SparK

[ICLR'23 Spotlight🔥] The first successful BERT/MAE-style pretraining on any convolutional network; PyTorch impl. of "Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling"
https://arxiv.org/abs/2301.03580
MIT License

Contrastive learning methods performance in paper #47

Closed Vickeyhw closed 11 months ago

Vickeyhw commented 1 year ago

Thanks for your great work! Do these contrastive learning methods' scores on ImageNet refer to fine-tuning results? If so, since those papers only report linear-evaluation results, where did the fine-tuning scores come from?

keyu-tian commented 1 year ago

Yes, we used the codebase of "ResNet Strikes Back" (RSB) to fine-tune the contrastive-learning pretrained weights, following the RSB A2 configuration.