charlesCXK / TorchSemiSeg

[CVPR 2021] CPS: Semi-Supervised Semantic Segmentation with Cross Pseudo Supervision
MIT License

Pseudo labeling question #72

Open DeepHM opened 2 years ago

DeepHM commented 2 years ago

Looking at your code, "model.eval()" is not used when generating pseudo labels; only "torch.no_grad()" is used (e.g. https://github.com/charlesCXK/TorchSemiSeg/blob/main/exp.voc/voc8.res50v3%2B.CPS%2BCutMix/train.py, lines 209~219).

However, I found that when each model generates pseudo labels as predictions, the pseudo-label accuracy is much higher when "model.eval()" is used than when only "torch.no_grad()" is used.
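For clarity, here is a minimal sketch of the two settings I compared (placeholder names `model_l`, `model_r`, `unsup_imgs`; not your exact code):

```python
import torch

# Variant A: gradients disabled, but the models stay in train mode,
# so BatchNorm normalizes with the statistics of the current batch.
with torch.no_grad():
    logits_l = model_l(unsup_imgs)
    logits_r = model_r(unsup_imgs)
pseudo_l = torch.argmax(logits_l, dim=1)   # pseudo label from model_l
pseudo_r = torch.argmax(logits_r, dim=1)   # pseudo label from model_r

# Variant B: additionally switch to eval mode, so BatchNorm uses its
# running (moving-average) statistics while the pseudo labels are produced,
# then switch back to train mode before the supervised forward/backward pass.
model_l.eval()
model_r.eval()
with torch.no_grad():
    pseudo_l = torch.argmax(model_l(unsup_imgs), dim=1)
    pseudo_r = torch.argmax(model_r(unsup_imgs), dim=1)
model_l.train()
model_r.train()
```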

Is this intentional, or did the authors have another reason for this choice?

Thank you!

charlesCXK commented 2 years ago

Hi, we didn't notice this before. We just wanted to obtain the pseudo labels without the backward pass, and we did not deliberately choose between model.eval() and no_grad().

By the way, does model.eval() lead to higher segmentation performance?

DeepHM commented 2 years ago

Hello. I'm working on semi-supervised semantic segmentation.

I ran several checks to verify my assumptions about semi-supervised semantic segmentation.

Through these checks, I confirmed that the pseudo-label accuracy increases when 'model.eval()' is used in your implementation of the interesting 'CPS' paper.

As you know, batch normalization is used in your networks, so calling 'model.eval()' can be an important option: in train mode BatchNorm normalizes with the current batch statistics, whereas in eval mode it uses the running statistics, and 'torch.no_grad()' alone does not change this behavior.
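As a small, self-contained illustration of that difference (a toy example, not code from your repository):

```python
import torch
import torch.nn as nn

# torch.no_grad() only disables gradient tracking; eval() switches BatchNorm
# from the current-batch statistics to its running statistics.
bn = nn.BatchNorm2d(3)
x = torch.randn(4, 3, 8, 8) * 5 + 2          # batch with a shifted distribution

with torch.no_grad():
    y_train_mode = bn(x)                      # train mode: uses this batch's mean/var
                                              # (running stats are still updated as a side effect)
bn.eval()
with torch.no_grad():
    y_eval_mode = bn(x)                       # eval mode: uses the running mean/var

print(torch.allclose(y_train_mode, y_eval_mode))  # typically False
```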

I plan to take my research a bit further and compare the above with the original method.

I will share the results with you in the near future.

Good luck!