HobbitLong / SupContrast

PyTorch implementation of "Supervised Contrastive Learning" (and SimCLR incidentally)
BSD 2-Clause "Simplified" License

Can anyone reproduce this result or is it just me? #118

Closed isacyoo closed 1 year ago

isacyoo commented 1 year ago

Hi, I've been playing around with the loss function code to get a better understanding of the equations, and I have encountered a result that does not make sense to me.

Assume that two images A and B give similar embeddings. From my understanding, if A and B have different labels, the loss should be higher than when they have the same label.

```python
crit = SupConLoss()
batch_size = 16
embed_dim = 768

features = torch.ones(batch_size, 1, embed_dim)  # All embeddings are the same and no augmentation
features = nn.functional.normalize(features, dim=2)
```

Case 1: Same embeddings and same labels

```python
labels = torch.ones(batch_size)  # Try when the labels are all the same
print(crit(features, labels) / batch_size)  # Gives 0.1693
```

Case 2: Same embeddings and different labels

```python
labels[:(batch_size // 2)] = 0  # Now half of them have different labels
print(crit(features, labels) / batch_size)  # Also gives 0.1693
```

I get the same loss value in both cases and was not able to figure out why.
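For reference, the observation can be reproduced with a standalone re-implementation of the loss (a sketch of the mean-over-positives form, not the repo's exact code; the function name and structure here are assumptions). With identical embeddings every pairwise similarity is equal, so the log-softmax term is the same constant for every pair, and each anchor's loss reduces to log(N − 1) regardless of how the labels are assigned:

```python
import math
import torch

def supcon_loss(z, labels, temperature=0.07):
    """Sketch of the supervised contrastive loss (mean over positives),
    written for illustration -- not the repository's implementation."""
    n = z.shape[0]
    sim = (z @ z.T) / temperature
    logits_mask = ~torch.eye(n, dtype=torch.bool)       # exclude self-contrast
    sim = sim - sim.max(dim=1, keepdim=True).values     # numerical stability
    exp_sim = torch.exp(sim) * logits_mask
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True))
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & logits_mask
    mean_log_prob_pos = (pos_mask * log_prob).sum(1) / pos_mask.sum(1)
    return -mean_log_prob_pos.mean()

batch_size = 16
# All embeddings identical, as in the issue above
z = torch.nn.functional.normalize(torch.ones(batch_size, 768), dim=1)

same = supcon_loss(z, torch.ones(batch_size))
mixed_labels = torch.ones(batch_size)
mixed_labels[: batch_size // 2] = 0
mixed = supcon_loss(z, mixed_labels)

# Both equal log(N - 1) = log(15) ~ 2.7081, i.e. ~0.1693 after / batch_size
print(same.item(), mixed.item(), math.log(batch_size - 1))
```

With all similarities equal, the softmax distribution over the other N − 1 samples is uniform no matter which of them are positives, so the two label assignments cannot produce different losses at this degenerate point.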

Thanks in advance

MischaQI commented 1 year ago

Hi, have you figured it out?