google-research / simclr

SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners
https://arxiv.org/abs/2006.10029
Apache License 2.0

Are data augmentation operations required at the finetuning level? #170

Open ines321 opened 3 years ago

ines321 commented 3 years ago

I pretrained my network with SimCLR and saved it. For fine-tuning, I would like to know whether it is obligatory to apply data augmentation operations before fine-tuning my saved network, or whether it is possible to fine-tune on data without any augmentation. Thanks

chentingpc commented 3 years ago

We use random crop / flip during fine-tuning, but didn't find color augmentation very useful at that stage.
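For concreteness, here is a minimal TensorFlow sketch of such a fine-tuning preprocessing step: an Inception-style random crop followed by a random horizontal flip, with color distortion deliberately left out. The function name and defaults here are hypothetical illustrations, not the repo's actual data_util.py API.

```python
import tensorflow as tf

def finetune_augment(image, height=224, width=224):
  """Random crop + horizontal flip for fine-tuning; no color jitter.

  A minimal sketch under assumptions: `image` is a float32 [H, W, 3]
  tensor in [0, 1]; the name and defaults are hypothetical.
  """
  # Inception-style random crop: sample a box covering 8-100% of the
  # image area, take that crop, and resize it back to (height, width).
  bbox = tf.constant([0.0, 0.0, 1.0, 1.0], shape=[1, 1, 4])
  begin, size, _ = tf.image.sample_distorted_bounding_box(
      tf.shape(image),
      bounding_boxes=bbox,
      min_object_covered=0.1,
      aspect_ratio_range=(3.0 / 4.0, 4.0 / 3.0),
      area_range=(0.08, 1.0),
      max_attempts=100)
  image = tf.slice(image, begin, size)
  image = tf.image.resize(image, [height, width])
  # Random left-right flip; color distortion is intentionally omitted,
  # per the comment above that it did not help during fine-tuning.
  return tf.image.random_flip_left_right(image)
```

At evaluation time, the random crop would typically be replaced by a deterministic center crop or plain resize.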

chentingpc commented 3 years ago

It's an empirical question, so you could try both and select whichever works better for you. Sometimes, even when all images are the same size, random crops can still help.

On Fri, Oct 8, 2021 at 5:37 AM, ines321 wrote:

@chentingpc https://github.com/chentingpc, thanks for the response. In the official SimCLR paper, you say that crop-and-resize is applied because ImageNet images come in different sizes. In my case, all of my dataset's images are the same size, so can I apply only random flip?

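Following the "try both" suggestion above, the flip-only alternative for a same-size dataset is just the last line of the earlier sketch; a hypothetical helper (again, not part of the repo) might look like this:

```python
import tensorflow as tf

def finetune_augment_flip_only(image):
  # Flip-only variant for datasets whose images already share one size.
  # Whether this beats random crop + flip is the empirical question above:
  # fine-tune once with each pipeline and keep whichever validates better.
  return tf.image.random_flip_left_right(image)
```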