HobbitLong / SupContrast

PyTorch implementation of "Supervised Contrastive Learning" (and SimCLR incidentally)
BSD 2-Clause "Simplified" License

Results with other augmentations (e.g. RandAug) #59

Open sooonwoo opened 3 years ago

sooonwoo commented 3 years ago

Hi, thanks for your impressive work!

I found the ablation study on augmentations in your paper (Appendix, Table 7).

I'm wondering whether SimCLR (not SupCon) also works well with other augmentations (e.g. RandAug). As far as I've experimented, the result with RandAug is much worse than with the original SimCLR augmentation, though I'm not sure my RandAug hyperparameters are appropriate (I used N=2, M=8/15 for RandAug(N, M)).
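For reference, my contrastive-stage pipeline looked roughly like this (a minimal sketch, not my exact script; it assumes torchvision >= 0.11, which ships `transforms.RandAugment`, and uses `TwoCropTransform` from this repo's `util.py`):

```python
from torchvision import transforms
from util import TwoCropTransform  # from this repo

# CIFAR-10 normalization stats used elsewhere in this codebase
normalize = transforms.Normalize(mean=(0.4914, 0.4822, 0.4465),
                                 std=(0.2023, 0.1994, 0.2010))

# RandAug(N=2, M=8): two random ops per image at magnitude 8.
# Crop (and, here, flip -- my assumption) are kept so the two views differ.
rand_aug = transforms.Compose([
    transforms.RandomResizedCrop(size=32, scale=(0.2, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.RandAugment(num_ops=2, magnitude=8),
    transforms.ToTensor(),
    normalize,
])

# Two independent views per image, as the contrastive loss expects
train_transform = TwoCropTransform(rand_aug)
```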

HobbitLong commented 3 years ago

Hi, @sooonwoo ,

Are you comparing the two augmentations within this codebase? Or are you comparing RandAug results from this codebase against the original SimCLR numbers reported in their paper?

sooonwoo commented 3 years ago

Thanks for the quick reply.

The former. I want to compare the two augmentations using your codebase, with ResNet-18 on CIFAR-10. Below are my results. (RandAug(N=2, M=8) is used throughout; I have not experimented with SupCon so far.)

SimCLR (ResNet-18, CIFAR-10):

| # | Contrastive augmentation | Linear-classifier augmentation | Acc. |
|---|--------------------------|--------------------------------|-------|
| 1 | SimAug                   | SimAug                         | 90.3% |
| 2 | RandAug                  | RandAug                        | 78.5% |
| 3 | RandAug                  | SimAug                         | 87.7% |
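To make rows 2 and 3 concrete: both use the same frozen encoder from the RandAug pretraining run, and only the transform used to train the linear classifier head changes. Roughly (a sketch under the same assumptions as above, not my exact script):

```python
from torchvision import transforms

normalize = transforms.Normalize(mean=(0.4914, 0.4822, 0.4465),
                                 std=(0.2023, 0.1994, 0.2010))

# Row 3: "SimAug" for the linear stage -- the default crop + flip recipe
linear_simaug = transforms.Compose([
    transforms.RandomResizedCrop(size=32, scale=(0.2, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    normalize,
])

# Row 2: "RandAug" for the linear stage as well
linear_randaug = transforms.Compose([
    transforms.RandomResizedCrop(size=32, scale=(0.2, 1.0)),
    transforms.RandAugment(num_ops=2, magnitude=8),
    transforms.ToTensor(),
    normalize,
])
```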

These results contradict Table 7 (SupCon) in the paper.

Have you tried SimCLR or SupCon in a smaller setting (model, dataset) as well?

HobbitLong commented 3 years ago

@sooonwoo, CIFAR is a bit different from ImageNet. On CIFAR, simpler augmentation is better, but on ImageNet, RandAugment should be better.

sooonwoo commented 3 years ago

Got it. Thanks for your help :)