hsiangyuzhao / RCPS

official implementation of rectified contrastive pseudo supervision
MIT License

About the negative sample and negative voxel #22

Closed yyyyy-aa closed 3 months ago

yyyyy-aa commented 3 months ago

Hello, sorry to bother you again. I have some doubts about the negative sample selection and negative voxel sampling for the bidirectional voxel contrastive loss mentioned in the article. Could you further explain the content in the article and code?

1. How are negative sample images selected in each training batch, and how do they differ from the input volume x?
2. The article mentions that "the samples belonging to the same class with ψi (i = 1, 2) are excluded," and that the negative voxels are then sampled according to confidence to calculate the contrastive loss. Could you explain this process in detail?

Have a nice day.

hsiangyuzhao commented 3 months ago

> 1. How are negative sample images selected in each training batch, and how do they differ from the input volume x?

Negative samples are defined as voxels that belong to a different class from the anchor voxel. Thus, the set of negative samples changes depending on which voxel is being used as the anchor.
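This per-anchor definition can be sketched in a few lines. The following NumPy snippet is illustrative only (the function name and flattened voxel shapes are assumptions, not the repository's actual implementation): given per-voxel pseudo-labels, it builds a mask telling, for each anchor, which voxels are valid negatives.

```python
import numpy as np

def negative_mask(pseudo_labels):
    """Hypothetical helper, not from the RCPS codebase.

    pseudo_labels: (N,) int array of per-voxel class predictions
    (voxels flattened for simplicity).
    Returns an (N, N) boolean matrix where entry (i, j) is True iff
    voxel j is a valid negative for anchor voxel i, i.e. the two
    voxels carry different pseudo-labels.
    """
    return pseudo_labels[:, None] != pseudo_labels[None, :]

labels = np.array([1, 0, 1, 0])  # e.g. foreground=1, background=0
mask = negative_mask(labels)
# For anchor 0 (class 1), the negatives are voxels 1 and 3 (class 0).
```

Note how the rows of the mask differ from anchor to anchor, which is exactly the point of the answer above: the negative set is relative to the anchor, not fixed per batch.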

> 2. The article mentions that "the samples belonging to the same class with ψi (i = 1, 2) are excluded," and that the negative voxels are then sampled according to confidence to calculate the contrastive loss. Could you explain this process in detail?

Consider binary segmentation with an anchor voxel belonging to the foreground class. The positive voxel is the voxel at the same location but from a different augmentation view, while the negative voxels are those belonging to the background class. However, in semi-supervised learning the voxels might be unlabeled, so we use the model prediction to pseudo-label all the voxels and determine which class each belongs to. Any voxel with predicted foreground probability lower than 0.5 could be viewed as a negative sample, but we want to find the ones that are more certain (probability close to 0, meaning the model is more confident), so that we reduce the chance that an actual positive voxel is misclassified as a negative one.
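The confidence-based selection described above can be sketched as follows. This is a minimal NumPy illustration under assumed names and shapes, not the repository's actual sampling code: candidates are voxels pseudo-labeled background (foreground probability below 0.5), and among those we keep the k whose probability is closest to 0, i.e. the most confidently background ones.

```python
import numpy as np

def sample_confident_negatives(fg_prob, k):
    """Hypothetical helper, not from the RCPS codebase.

    fg_prob: (N,) predicted foreground probabilities for flattened voxels.
    Voxels with probability < 0.5 are pseudo-labeled background and thus
    candidate negatives; among them, return the indices of the k voxels
    with the lowest probability (the model's most confident negatives).
    """
    candidates = np.flatnonzero(fg_prob < 0.5)   # pseudo-labeled background
    order = np.argsort(fg_prob[candidates])      # most confident first
    return candidates[order[:k]]

probs = np.array([0.9, 0.45, 0.05, 0.6, 0.2])
idx = sample_confident_negatives(probs, k=2)
# -> indices [2, 4] (probabilities 0.05 and 0.2)
```

The uncertain candidate at probability 0.45 is skipped: although it sits below the 0.5 threshold, it is the kind of borderline voxel that might actually be foreground, which is precisely what the confidence-based sampling is meant to avoid.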