google-research / simclr

SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners
https://arxiv.org/abs/2006.10029
Apache License 2.0

Getting 84% top-1 at depth 18 with SimCLRv2 when fine-tuning #47

Closed feifaxiaoming closed 4 years ago

feifaxiaoming commented 4 years ago

I used CIFAR-10 to train a SimCLRv2 model with the depth set to 18. After pretraining for 1000 epochs I fine-tuned the model and got 84% top-1, whereas SimCLRv1 gives 91% top-1. Is that correct?
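
For context, the CIFAR-10 pretraining recipe in the repo README looks like the following (the run above is assumed to follow it with --resnet_depth=18; the --model_dir path is a placeholder):

```
python run.py --train_mode=pretrain \
  --train_batch_size=512 --train_epochs=1000 \
  --learning_rate=1.0 --weight_decay=1e-4 --temperature=0.5 \
  --dataset=cifar10 --image_size=32 --eval_split=test --resnet_depth=18 \
  --use_blur=False --color_jitter_strength=0.5 \
  --model_dir=/tmp/simclr_test --use_tpu=False
```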

chentingpc commented 4 years ago

We made some changes to the SimCLRv1 codebase in 244e7128004c5fd3c7805cf3135c79baa6c3bb96, which have not been tested on CIFAR-10 and may explain this discrepancy. Could you check whether the same run (same configuration, etc.) with the codebase from before that change produces the same result?
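
One way to get the pre-change codebase is to check out the parent of that commit; a generic git sketch:

```
# clone the repo, then check out the state just before the change
# (the ~1 suffix selects the named commit's parent)
git clone https://github.com/google-research/simclr.git
cd simclr
git checkout 244e7128004c5fd3c7805cf3135c79baa6c3bb96~1
```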

feifaxiaoming commented 4 years ago

Yes. With SimCLRv1 my test matches your reported result of 91%, but with SimCLRv2 (your updated code) the test result is lower than SimCLRv1.

chentingpc commented 4 years ago

Did you set --ft_proj_selector=0 when you fine-tuned the linear head using SimCLRv2? (This has since been made the default in the newest code.) I don't think there should be a big difference between SimCLRv1 and SimCLRv2 if you're using the default architecture, etc.
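
For anyone finding this later: per the flag's help text, --ft_proj_selector=0 fine-tunes from the input of the projection head (the encoder features) rather than from one of its layers. A minimal sketch of passing it explicitly; only --ft_proj_selector is the flag under discussion here, the remaining flags and paths are illustrative placeholders:

```
# hypothetical fine-tuning invocation; paths and most flags are placeholders
python run.py --mode=train_then_eval --train_mode=finetune \
  --ft_proj_selector=0 \
  --dataset=cifar10 --image_size=32 --eval_split=test --resnet_depth=18 \
  --checkpoint=/tmp/simclr_pretrain --model_dir=/tmp/simclr_finetune \
  --use_tpu=False
```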

feifaxiaoming commented 4 years ago

I used your updated code without changing anything, and tested with SimCLRv2.

chentingpc commented 4 years ago

I think the issue has been fixed now (ft_proj_selector is set to 0 by default). Let me know if you still have the same issue.

chentingpc commented 4 years ago

Closing for now. Please reopen if the issue persists.