facebookresearch / moco

PyTorch implementation of MoCo: https://arxiv.org/abs/1911.05722
MIT License

Why are all the targets zeros? #24

Closed Oktai15 closed 4 years ago

Oktai15 commented 4 years ago

Why are all the targets zeros?

https://github.com/facebookresearch/moco/blob/3631be074a0a14ab85c206631729fe035e54b525/moco/builder.py#L155

KaimingHe commented 4 years ago

The positive sample is at index zero (the zeroth). https://github.com/facebookresearch/moco/blob/3631be074a0a14ab85c206631729fe035e54b525/moco/builder.py#L149

Oktai15 commented 4 years ago

@KaimingHe, but it also contains negative samples... why does it have only zeros in the targets?

osdf commented 4 years ago

labels is the ground-truth 'index' into the (1 + len(queue))-wide tensor logits. As shown in the code snippet in KaimingHe's answer, this is always index 0 (l_pos is the first column of the resulting tensor logits). logits is later fed into the CrossEntropy criterion, i.e. the contrasting happens through the entanglement of the logit scores by the softmax function.
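For reference, here is a minimal, self-contained sketch of how the logits and the all-zero labels fit together (shapes and the temperature value here are illustrative; the real code lives in the linked builder.py):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N, C, K, T = 8, 128, 65536, 0.07               # batch, feature dim, queue size, temperature

q = F.normalize(torch.randn(N, C), dim=1)      # query features
k = F.normalize(torch.randn(N, C), dim=1)      # positive key features
queue = F.normalize(torch.randn(C, K), dim=0)  # queued (negative) key features

l_pos = torch.einsum('nc,nc->n', [q, k]).unsqueeze(-1)  # Nx1: similarity to own key
l_neg = torch.einsum('nc,ck->nk', [q, queue])           # NxK: similarities to the queue
logits = torch.cat([l_pos, l_neg], dim=1) / T           # Nx(1+K), positive in column 0

labels = torch.zeros(N, dtype=torch.long)               # the "class" to predict is always column 0
loss = nn.CrossEntropyLoss()(logits, labels)
```

Cross-entropy over each row then pushes the column-0 (positive) logit up relative to the K negative logits.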

Oktai15 commented 4 years ago

@osdf oh, got it, thank you!

mmiakashs commented 4 years ago

I have a small confusion regarding this issue (maybe I am missing something): can the situation occur where the queue of negative examples contains some samples that are similar to, or exactly the same as, the sample in l_pos? If that happens, should some labels of l_neg be one instead of zero?

KaimingHe commented 4 years ago

I have a small confusion regarding this issue (maybe I am missing something): can the situation occur where the queue of negative examples contains some samples that are similar to, or exactly the same as, the sample in l_pos? If that happens, should some labels of l_neg be one instead of zero?

Yes, it could happen. This only matters when the queue is too large. On ImageNet, the queue (65536) is ~6% of the dataset size (1.28M), so the chance of having a positive in the queue is about 6% at the first iteration of each epoch and drops to 0% after roughly the first 6% of iterations of each epoch. This noise is negligible, so for simplicity we don't handle it. If the queue is too large, or the dataset is too small, an extra indicator should be used on these positive targets in the queue.
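To illustrate this with a toy simulation (the numbers below are scaled-down stand-ins chosen for illustration, not values from the repo): a same-image positive can only sit in the FIFO queue during roughly the first K/N fraction of an epoch's iterations, because after that every leftover key from the previous epoch has been dequeued.

```python
import random

N_DATA = 12_800   # stand-in for ImageNet's 1.28M images
K = 640           # stand-in for the 65536-key queue; K/N_DATA = 5%, same ballpark as above
B = 32            # batch size

prev_epoch = random.sample(range(N_DATA), N_DATA)   # previous epoch's order
queue = prev_epoch[-K:]                             # keys left over from the previous epoch

cur_epoch = random.sample(range(N_DATA), N_DATA)    # current epoch's order
num_iters = N_DATA // B
last_collision = -1
for it in range(num_iters):
    batch = cur_epoch[it * B:(it + 1) * B]
    if any(i in queue for i in batch):              # is a current image already in the queue?
        last_collision = it
    queue = (queue + batch)[-K:]                    # FIFO: enqueue the batch, drop the oldest keys

print(f"K / dataset size = {K / N_DATA:.1%}")
print(f"last iteration with a possible positive in the queue: "
      f"{(last_collision + 1) / num_iters:.1%} of the epoch")
```

Once the previous epoch's keys are flushed (after about K/B iterations), every key in the queue comes from an earlier batch of the current epoch, and since each image is used once per epoch it cannot collide with the current batch.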

mmiakashs commented 4 years ago

Sounds good, thanks for the clear explanation. And also a big thanks for the concise implementation of MoCo, I learned a lot :smiley:

KeremTurgutlu commented 3 years ago

I have a follow-up question regarding negatives. Is there an intuition behind not using the encoder's in-batch negatives along with the positives (index 0)? For example, calculating cross-entropy on a logits matrix of shape Nx(N+K) instead of Nx(1+K), where the labels are torch.arange(N). Also, wouldn't this mitigate the BN-signature issue of easily identifying the positives from the same batch, because there would now also be N-1 negatives with the same batch-norm statistics?

Edit: I implemented this approach, trained without Shuffle BN on 1 GPU, and it didn't overfit or have any performance issues on a downstream task.

Here is a link to pretraining and downstream training. With this approach (MoCo without Shuffle BN, on 1 GPU), performance improves from 18.2% to 71.2% (random init vs. fine-tuning on MoCo weights).
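For concreteness, a sketch of the variant described above (a paraphrase under assumed shapes, not code from this repo): the positives sit on the diagonal of an NxN block of in-batch similarities, the queue supplies K extra negatives, and the targets become torch.arange(N).

```python
import torch
import torch.nn.functional as F

N, C, K, T = 8, 128, 1024, 0.07                # batch, feature dim, queue size, temperature

q = F.normalize(torch.randn(N, C), dim=1)      # queries from encoder_q
k = F.normalize(torch.randn(N, C), dim=1)      # keys from encoder_k (detached in MoCo)
queue = F.normalize(torch.randn(C, K), dim=0)  # queued negative keys

l_batch = q @ k.t()                            # NxN: diagonal = positives, off-diagonal = in-batch negatives
l_queue = q @ queue                            # NxK: queued negatives
logits = torch.cat([l_batch, l_queue], dim=1) / T   # Nx(N+K)
labels = torch.arange(N)                       # the positive for row i is column i

loss = F.cross_entropy(logits, labels)
```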

qishibo commented 2 years ago

I have a follow-up question regarding negatives. Is there an intuition behind not using the encoder's in-batch negatives along with the positives (index 0)? For example, calculating cross-entropy on a logits matrix of shape Nx(N+K) instead of Nx(1+K), where the labels are torch.arange(N). Also, wouldn't this mitigate the BN-signature issue of easily identifying the positives from the same batch, because there would now also be N-1 negatives with the same batch-norm statistics?

Edit: I implemented this approach, trained without Shuffle BN on 1 GPU, and it didn't overfit or have any performance issues on a downstream task.

Here is a link to pretraining and downstream training. With this approach (MoCo without Shuffle BN, on 1 GPU), performance improves from 18.2% to 71.2% (random init vs. fine-tuning on MoCo weights).

@KeremTurgutlu so you mean the label for every row is [1, 0*(N-1), 0*K] instead of [1, 0*K]?

skaudrey commented 2 years ago

I have a small confusion regarding this issue (maybe I am missing something): can the situation occur where the queue of negative examples contains some samples that are similar to, or exactly the same as, the sample in l_pos? If that happens, should some labels of l_neg be one instead of zero?

Yes, it could happen. This only matters when the queue is too large. On ImageNet, the queue (65536) is ~6% of the dataset size (1.28M), so the chance of having a positive in the queue is about 6% at the first iteration of each epoch and drops to 0% after roughly the first 6% of iterations of each epoch. This noise is negligible, so for simplicity we don't handle it. If the queue is too large, or the dataset is too small, an extra indicator should be used on these positive targets in the queue.

@KaimingHe

I have a follow-up question about K. I am confused about how to measure the chance of meeting a positive in the queue on video data. Is it based on the number of videos or on the number of video frames?

Suppose I have 4K videos, I want to classify videos, and each time I sample a clip (say 32 frames from a video) as the input sample. The K that I set is 65536, but then the chance of having a positive in the queue is 65536/40000 = 1.64, which is already larger than 100%. In this case, can I simply set a smaller K, like 2400, to keep the chance of meeting a positive in the queue small?

However, VideoMoCo, the model for video classification, uses 65536 for K. Is this because the chance is measured on frames rather than on the number of videos? E.g., if each video has 300 frames, the chance is 65536/(40000*300) ≈ 0.55%.

Which measure is more reasonable?

clarkkent0618 commented 2 years ago

I have a small confusion regarding this issue (maybe I am missing something): can the situation occur where the queue of negative examples contains some samples that are similar to, or exactly the same as, the sample in l_pos? If that happens, should some labels of l_neg be one instead of zero?

Yes, it could happen. This only matters when the queue is too large. On ImageNet, the queue (65536) is ~6% of the dataset size (1.28M), so the chance of having a positive in the queue is about 6% at the first iteration of each epoch and drops to 0% after roughly the first 6% of iterations of each epoch. This noise is negligible, so for simplicity we don't handle it. If the queue is too large, or the dataset is too small, an extra indicator should be used on these positive targets in the queue.

@KaimingHe

I have a follow-up question about K. I am confused about how to measure the chance of meeting a positive in the queue on video data. Is it based on the number of videos or on the number of video frames?

Suppose I have 4K videos, I want to classify videos, and each time I sample a clip (say 32 frames from a video) as the input sample. The K that I set is 65536, but then the chance of having a positive in the queue is 65536/40000 = 1.64, which is already larger than 100%. In this case, can I simply set a smaller K, like 2400, to keep the chance of meeting a positive in the queue small?

However, VideoMoCo, the model for video classification, uses 65536 for K. Is this because the chance is measured on frames rather than on the number of videos? E.g., if each video has 300 frames, the chance is 65536/(40000*300) ≈ 0.55%.

Which measure is more reasonable?

Actually I cannot understand Kaiming's answer. Do you know what is meant by "the chance of having a positive in the queue is about 6% at the first iteration of each epoch and drops to 0% after roughly the first 6% of iterations of each epoch", and how this comes about? Thank you.

solauky commented 2 years ago

Your message has been received, thank you!

clarkkent0618 commented 2 years ago

I have a small confusion regarding this issue (maybe I am missing something): can the situation occur where the queue of negative examples contains some samples that are similar to, or exactly the same as, the sample in l_pos? If that happens, should some labels of l_neg be one instead of zero?

Yes, it could happen. This only matters when the queue is too large. On ImageNet, the queue (65536) is ~6% of the dataset size (1.28M), so the chance of having a positive in the queue is about 6% at the first iteration of each epoch and drops to 0% after roughly the first 6% of iterations of each epoch. This noise is negligible, so for simplicity we don't handle it. If the queue is too large, or the dataset is too small, an extra indicator should be used on these positive targets in the queue.

@KaimingHe I have a follow-up question about K. I am confused about how to measure the chance of meeting a positive in the queue on video data. Is it based on the number of videos or on the number of video frames? Suppose I have 4K videos, I want to classify videos, and each time I sample a clip (say 32 frames from a video) as the input sample. The K that I set is 65536, but then the chance of having a positive in the queue is 65536/40000 = 1.64, which is already larger than 100%. In this case, can I simply set a smaller K, like 2400, to keep the chance of meeting a positive in the queue small? However, VideoMoCo, the model for video classification, uses 65536 for K. Is this because the chance is measured on frames rather than on the number of videos? E.g., if each video has 300 frames, the chance is 65536/(40000*300) ≈ 0.55%. Which measure is more reasonable?

Actually I cannot understand Kaiming's answer. Do you know what is meant by "the chance of having a positive in the queue is about 6% at the first iteration of each epoch and drops to 0% after roughly the first 6% of iterations of each epoch", and how this comes about? Thank you.

@skaudrey

VaticanCameos99 commented 10 months ago

labels is the ground-truth 'index' into the (1 + len(queue))-wide tensor logits. As shown in the code snippet in KaimingHe's answer, this is always index 0 (l_pos is the first column of the resulting tensor logits). logits is later fed into the CrossEntropy criterion, i.e. the contrasting happens through the entanglement of the logit scores by the softmax function.

Hi, I'm still having trouble understanding this. Given that the labels denote the index at which we have a positive pair, why do we still use the cross-entropy loss as the contrastive learning loss? Could someone explain exactly how it resolves to a softmax over the logits? More specifically, can someone explain this: "the contrasting happens through the entanglement of the logit scores by the softmax function"?
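For what it's worth, a minimal numerical sketch (shapes assumed, not taken from the repo) of the point in osdf's comment: cross-entropy with target index 0 is exactly the negative log-softmax of the positive logit, i.e. an InfoNCE loss in which the positive competes against all K negatives inside one softmax.

```python
import torch
import torch.nn.functional as F

N, K, T = 4, 65536, 0.07
sims = torch.rand(N, 1 + K) * 2 - 1          # fake cosine similarities; column 0 is the positive
logits = sims / T
labels = torch.zeros(N, dtype=torch.long)    # the positive is always at index 0

ce = F.cross_entropy(logits, labels)
# -log( exp(l_pos / T) / sum_j exp(l_j / T) ), averaged over the batch
info_nce = -F.log_softmax(logits, dim=1)[:, 0].mean()

print(torch.allclose(ce, info_nce))          # True: the two are the same quantity
```

Minimizing this loss raises the softmax probability of the positive, which necessarily lowers the probabilities of the negatives: that coupling is the 'entanglement' of the logit scores mentioned above.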