SupCon, Supervised Contrastive Loss, Outperforms Cross Entropy Loss?
SupCon loss consistently outperforms cross-entropy with standard data augmentations.
Supervised Contrastive Learning (SupCon), by Google Research, Boston University, and MIT. 2020 NeurIPS, Over 1200 Citations.
Cross entropy, self-supervised contrastive loss and supervised contrastive loss.
Using the labels, the proposed loss contrasts the set of all samples from the same class as positives against the negatives from the remainder of the batch.
Supervised vs. self-supervised contrastive losses. As illustrated by the photos of the black-and-white puppy, taking class label information into account yields an embedding space where elements of the same class are more closely aligned than in the self-supervised case, even when their appearances differ, because they belong to the same class according to the supervised labels.
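A minimal PyTorch sketch of this idea, assuming L2-normalized embeddings `features` of shape (batch, dim) and integer `labels`; the function name and defaults are illustrative, not the authors' reference implementation. The sum over positives is taken outside the log, matching the $L_{out}^{sup}$ variant discussed below.

```python
# Minimal sketch of a supervised contrastive (SupCon-style) loss.
# Assumes `features` are L2-normalized embeddings (batch, dim) and
# `labels` are integer class ids (batch,). Names/defaults are illustrative.
import torch
import torch.nn.functional as F


def supcon_loss(features: torch.Tensor, labels: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    device = features.device
    batch_size = features.shape[0]

    # Pairwise cosine similarities scaled by the temperature.
    logits = features @ features.T / temperature
    # Subtract the per-row max for numerical stability (does not change the loss).
    logits = logits - logits.max(dim=1, keepdim=True).values.detach()

    # Positives: samples with the same label, excluding the anchor itself.
    labels = labels.view(-1, 1)
    self_mask = torch.eye(batch_size, device=device)
    positive_mask = torch.eq(labels, labels.T).float() * (1.0 - self_mask)

    # Denominator runs over all samples in the batch except the anchor itself.
    exp_logits = torch.exp(logits) * (1.0 - self_mask)
    log_prob = logits - torch.log(exp_logits.sum(dim=1, keepdim=True) + 1e-12)

    # Average log-probability over each anchor's positives (sum over positives
    # sits outside the log, i.e. the L_out^sup form).
    num_positives = positive_mask.sum(dim=1)
    mean_log_prob_pos = (positive_mask * log_prob).sum(dim=1) / num_positives.clamp(min=1)

    # Anchors with no positive in the batch contribute nothing.
    has_pos = (num_positives > 0).float()
    return -(mean_log_prob_pos * has_pos).sum() / has_pos.sum().clamp(min=1)


if __name__ == "__main__":
    # Toy usage: 8 normalized embeddings from 3 classes.
    z = F.normalize(torch.randn(8, 128), dim=1)
    y = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])
    print(supcon_loss(z, y, temperature=0.1))
```

In practice each image contributes two augmented views to the batch, so every anchor has at least one positive.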
There are two variants, $L_{out}^{sup}$ and $L_{in}^{sup}$, which satisfy:
$$ L_{in}^{sup} \leq L_{out}^{sup}. $$
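In the paper, the two forms differ only in whether the sum over an anchor's positives $P(i)$ is taken outside or inside the log, with $z$ the normalized projections, $A(i)$ the remaining samples in the multiviewed batch, and $\tau$ a temperature:
$$ L_{out}^{sup} = \sum_{i \in I} \frac{-1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp(z_i \cdot z_p / \tau)}{\sum_{a \in A(i)} \exp(z_i \cdot z_a / \tau)}, $$
$$ L_{in}^{sup} = \sum_{i \in I} -\log \left\{ \frac{1}{|P(i)|} \sum_{p \in P(i)} \frac{\exp(z_i \cdot z_p / \tau)}{\sum_{a \in A(i)} \exp(z_i \cdot z_a / \tau)} \right\}. $$
The inequality follows from Jensen's inequality, since the log is concave; the paper nevertheless finds $L_{out}^{sup}$ to be the better-performing formulation and adopts it as the SupCon loss.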
Top-1 classification accuracy on ResNet-50 for various datasets.
Top-1/Top-5 accuracy results on ImageNet for AutoAugment with ResNet-50 and for Stacked RandAugment with ResNet-101 and ResNet-200.
Training with supervised contrastive loss makes models more robust to corruptions in images.
Accuracy against (a) Hyperparameters, (b) Batch Size, (c) Epochs, and (d) Temperature.
Numbers are mAP for VOC2007; mean-per-class accuracy for Aircraft, Pets, Caltech, and Flowers; and top-1 accuracy for all other datasets.
[2020 NeurIPS] [SupCon] Supervised Contrastive Learning.
Sik-Ho Tsang. Brief Review: Supervised Contrastive Learning.