sooonwoo opened this issue 3 years ago
Hi @sooonwoo,
Are you comparing the two augmentations both using the codebase here? Or are you comparing results using RandAug with this codebase against the original SimCLR results reported in their paper?
Thanks for the quick reply.
The former. I want to compare the two augmentations using your codebase on ResNet-18 & CIFAR-10. Below are the results. (RandAug(N=2, M=8) is used for the experiments; I have not experimented with SupCon so far.)
SimCLR:

| # | Contrastive augmentation | Linear classifier augmentation | Acc. |
|---|--------------------------|--------------------------------|-------|
| 1 | SimAug                   | SimAug                         | 90.3% |
| 2 | RandAug                  | RandAug                        | 78.5% |
| 3 | RandAug                  | SimAug                         | 87.7% |
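For context, by "SimAug" I mean the usual SimCLR-style recipe for 32x32 CIFAR images, roughly like the sketch below (the crop scale, jitter strengths, and normalization stats are common defaults I'm assuming, not values read out of this codebase), together with the two-view wrapper the contrastive loss needs:

```python
import torchvision.transforms as T

# SimCLR-style augmentation ("SimAug") for 32x32 CIFAR-10 images.
# Parameter values are common defaults, not necessarily this repo's exact ones.
simclr_aug = T.Compose([
    T.RandomResizedCrop(size=32, scale=(0.2, 1.0)),
    T.RandomHorizontalFlip(),
    T.RandomApply([T.ColorJitter(0.4, 0.4, 0.4, 0.4)], p=0.8),
    T.RandomGrayscale(p=0.2),
    T.ToTensor(),
    T.Normalize(mean=(0.4914, 0.4822, 0.4465),
                std=(0.2470, 0.2435, 0.2616)),
])

class TwoCropTransform:
    """Return two independently augmented views of the same image,
    which is what the contrastive (SimCLR/SupCon) loss consumes."""
    def __init__(self, transform):
        self.transform = transform

    def __call__(self, x):
        return [self.transform(x), self.transform(x)]

# "Contrastive augmentation" in the table above = the transform in this wrapper;
# "Linear classifier augmentation" = the transform used to train the linear probe.
contrastive_transform = TwoCropTransform(simclr_aug)
```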
This result contradicts Table 7 (SupCon) in the paper.
Have you tried this experiment for SimCLR or SupCon in a small setting (model, dataset) as well?
@sooonwoo, CIFAR is a bit different from ImageNet. On CIFAR, simpler augmentation is better, but on ImageNet RandAugment should be better.
Got it. Thanks for your help :)
Hi, thanks for your impressive work!
I found the ablation study on augmentation in your paper (Appendix Table 7).
I just wonder whether SimCLR (not SupCon) also works well with other augmentations (e.g., RandAug). As far as I've experimented, the result with RandAug is much worse than with the original SimCLR augmentation. I am not sure my hyperparameters for RandAug are proper, though. (I used N=2, M=8/15 for RandAug(N, M).)
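If it helps to reproduce this, RandAug(N, M) can be instantiated via torchvision's RandAugment, where N maps to num_ops and M to magnitude. A minimal sketch, assuming a standard crop/flip base pipeline (those parts are my assumption, not a quote of any particular codebase):

```python
import torchvision.transforms as T

# RandAug(N=2, M=8) via torchvision's RandAugment:
# N -> num_ops (ops applied per image), M -> magnitude (on a 0-30 scale).
randaug_train = T.Compose([
    T.RandomResizedCrop(size=32, scale=(0.2, 1.0)),  # assumed base crop
    T.RandomHorizontalFlip(),
    T.RandAugment(num_ops=2, magnitude=8),
    T.ToTensor(),
    T.Normalize(mean=(0.4914, 0.4822, 0.4465),
                std=(0.2470, 0.2435, 0.2616)),
])
```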