sthalles / SimCLR

PyTorch implementation of SimCLR: A Simple Framework for Contrastive Learning of Visual Representations
https://sthalles.github.io/simple-self-supervised-learning/
MIT License
2.19k stars · 457 forks

A question about the "labels" #29

Closed kekehia123 closed 3 years ago

kekehia123 commented 3 years ago

Hi! I have a question about the definition of "labels" in the script "simclr.py".

On line 54 of "simclr.py", the authors defined:

labels = torch.zeros(logits.shape[0], dtype=torch.long).to(self.args.device)

So all the entries of "labels" are zeros. But according to the paper, shouldn't there be an entry of 1 for the positive pair?

Thanks in advance for your reply!

towzeur commented 3 years ago

From my understanding, 'labels' as a variable name is confusing because it actually refers to the 'class' of the positive pairs.

N : size of the batch

logits will be a (N_VIEW x N, N_VIEW x N - 1) matrix. Note: the -1 is because each sample's similarity with itself is excluded.

The positive-pair cosine similarity for each of the N_VIEW x N anchors is stored in the first column of logits.

According to pytorch documentation https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html

input has to be a Tensor of size (minibatch, C)

This criterion expects a class index in the range [0, C-1] as the target for each value of a 1D tensor of size minibatch


So really, loss = self.criterion(logits, labels)

input: logits, of shape (N_VIEW x N, N_VIEW x N - 1), so minibatch = N_VIEW x N and C = N_VIEW x N - 1

target: labels, a 1D tensor of length N_VIEW x N. Since labels is all zeros, the target class is always 0 (the first column).

According to the PyTorch CrossEntropyLoss formula, this puts the similarity of each anchor's positive pair in the numerator, as expected, and the sum of the similarities of every pair formed with that anchor in the denominator.
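The argument above can be checked with a toy example: with all-zero targets, CrossEntropyLoss reads the positive-pair score from column 0 of each row. The similarity values below are made up for illustration:

```python
import torch
import torch.nn as nn

# Toy setup: N_VIEW * N = 4 anchors, each with 1 positive and 2 negatives.
# Column 0 holds the (made-up) positive-pair similarity for each anchor.
logits = torch.tensor([[0.9, 0.1, 0.2],
                       [0.8, 0.3, 0.1],
                       [0.7, 0.2, 0.2],
                       [0.9, 0.1, 0.3]])

# All-zero targets: "class 0" means the positive sits in column 0.
labels = torch.zeros(logits.shape[0], dtype=torch.long)

loss = nn.CrossEntropyLoss()(logits, labels)

# Equivalent by hand: mean over rows of -log softmax(row)[0],
# i.e. positive similarity in the numerator, all pairs in the denominator.
manual = (-torch.log_softmax(logits, dim=1)[:, 0]).mean()
assert torch.allclose(loss, manual)
```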

kekehia123 commented 3 years ago

> (quoting towzeur's explanation above)

I see! It's my fault. Thank you very much!

LinglanZhao commented 3 years ago

> (quoting towzeur's explanation above)

Hi! I'd like to confirm that N_VIEW == 2, as in the paper and in the code's default args. If N_VIEW > 2, then with logits.shape = (N_VIEW x N, N_VIEW x N - 1), each row contains at least one more positive pair (besides the one at index 0), and those extra positives will be treated as negative pairs.
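This can be verified by counting positives per anchor, mirroring the masking logic of the repo's info_nce_loss (the helper name below is illustrative):

```python
import torch

def positives_per_anchor(n, n_views):
    """Count how many positive pairs each anchor has, using the same
    same-image / non-self masking idea as the repo's info_nce_loss."""
    labels = torch.arange(n).repeat(n_views)            # image index per sample
    same_image = labels.unsqueeze(0) == labels.unsqueeze(1)
    self_pair = torch.eye(n * n_views, dtype=torch.bool)
    positives = same_image & ~self_pair                  # positives, excluding self
    return positives.sum(1)

print(positives_per_anchor(4, 2))  # each anchor has exactly 1 positive
print(positives_per_anchor(4, 3))  # each anchor has 2 positives, but the
                                   # all-zero labels only credit one of them
```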

kekehia123 commented 3 years ago

> (quoting towzeur's explanation and LinglanZhao's question above)

Yes. I think in that case the loss function needs to be modified too.

here101 commented 2 years ago

> (quoting towzeur's explanation and LinglanZhao's question above)

Have you solved this problem? I'd also like to increase the number of views. Thanks in advance!

kekehia123 commented 2 years ago

> (quoting towzeur's explanation, LinglanZhao's question, and here101's question above)

Sorry, I did not try the multi-view version...

4pygmalion commented 4 months ago

To clarify your understanding, I illustrated the SimCLR matrix operations. (image attached)