Haochen-Wang409 / U2PL

[CVPR'22 & IJCV'24] Semi-Supervised Semantic Segmentation Using Unreliable Pseudo-Labels & Using Unreliable Pseudo-Labels for Label-Efficient Semantic Segmentation
Apache License 2.0

paper question #107

Closed DeepHM closed 1 year ago

DeepHM commented 1 year ago

Below is the InfoNCE contrastive learning loss from your paper (page 3):
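(Reproduced here in the paper's notation as I read it; $\langle \cdot, \cdot \rangle$ denotes cosine similarity, $\tau$ is the temperature, $z_{ci}$ is the $i$-th anchor of class $c$, $z_{ci}^{+}$ is its positive sample, and $z_{cij}^{-}$ is its $j$-th negative sample.)

$$
\mathcal{L}_{c} = -\frac{1}{C \times M} \sum_{c=0}^{C-1} \sum_{i=1}^{M} \log \left[ \frac{e^{\langle z_{ci},\, z_{ci}^{+} \rangle / \tau}}{e^{\langle z_{ci},\, z_{ci}^{+} \rangle / \tau} + \sum_{j=1}^{N} e^{\langle z_{ci},\, z_{cij}^{-} \rangle / \tau}} \right]
$$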

Below is the contrastive setting from experiments/pascal/1464/ours/config.yaml:
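(Excerpted; only the two keys at issue are shown, and the surrounding structure of the contrastive section is abbreviated.)

```yaml
contrastive:
  num_queries: 256
  num_negatives: 50
```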

According to the contrastive learning setup in your paper, M is the number of anchor pixels and N is the number of negative samples (M = 50, N = 256). In config.yaml, however, num_negatives is 50 and num_queries is 256. Judging from the InfoNCE loss formula, I would expect M = num_queries = 256 and N = num_negatives = 50. Am I wrong?

Also, I don't quite understand the definitions of positive and negative samples in the contrastive learning described in your paper. Could you explain positive and negative samples in more detail?

Thank you in advance.

Haochen-Wang409 commented 1 year ago

Please follow the configuration provided by our code.

A positive pair $(q, k^{+})$ means the two features (possibly) belong to the same category, and minimizing InfoNCE pulls them together. A negative pair $(q, k^{-})$, on the contrary, means the two features (possibly) do not belong to the same category, and minimizing InfoNCE pushes them apart.
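For intuition, here is a minimal PyTorch-style sketch of an InfoNCE loss of this form. It is illustrative only, not the repo's actual implementation; `queries`, `positives`, and `negatives` are hypothetical names, and the shapes follow M anchors with N negatives per anchor.

```python
import torch
import torch.nn.functional as F

def info_nce(queries, positives, negatives, temperature=0.5):
    """Minimal InfoNCE sketch (illustrative, not U2PL's actual code).

    queries:   (M, D)    anchor features
    positives: (M, D)    one positive feature per anchor (same category)
    negatives: (M, N, D) N negative features per anchor (different category)
    """
    q = F.normalize(queries, dim=-1)
    k_pos = F.normalize(positives, dim=-1)
    k_neg = F.normalize(negatives, dim=-1)

    # Cosine similarity with the positive: (M, 1)
    pos_logit = (q * k_pos).sum(-1, keepdim=True) / temperature
    # Cosine similarities with the N negatives: (M, N)
    neg_logits = torch.einsum("md,mnd->mn", q, k_neg) / temperature

    # Cross-entropy against index 0 (the positive) pulls (q, k+) together
    # and pushes each (q, k-) apart as the loss is minimized.
    logits = torch.cat([pos_logit, neg_logits], dim=1)  # (M, 1 + N)
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```

In this sketch's shapes, M and N correspond to the num_queries and num_negatives hyperparameters discussed above.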