DeepHM opened this issue 2 years ago
Hi, we hadn't noticed this before. We simply wanted to obtain the pseudo-labels without the backward pass, and we did not deliberately choose between model.eval() and torch.no_grad().
By the way, does model.eval() lead to higher segmentation performance?
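For reference, the two calls control different things: torch.no_grad() only disables gradient tracking, while model.eval() switches layers such as BatchNorm (and Dropout) to inference behavior. A minimal sketch of the difference, using a toy model rather than the CPS network:

```python
import torch
import torch.nn as nn

# Toy model: BatchNorm is what makes the two modes diverge.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8))
x = torch.randn(4, 3, 32, 32)

# no_grad() alone: autograd is off, but the model is still in train
# mode, so BatchNorm normalizes with this batch's statistics and also
# updates its running mean/var as a side effect.
with torch.no_grad():
    y_train_mode = model(x)

# eval() switches BatchNorm to its frozen running statistics, so the
# outputs (and hence any argmax pseudo-labels) can differ.
model.eval()
with torch.no_grad():
    y_eval_mode = model(x)

print(torch.allclose(y_train_mode, y_eval_mode))  # usually False
```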
Hello. I'm working on semi-supervised semantic segmentation.
I ran several checks to verify my assumptions in semi-supervised semantic segmentation.
Through these, I confirmed that pseudo-label accuracy increases when model.eval() is used in your implementation of the interesting research paper 'CPS'.
As you know, batch normalization is used in your network, so model.eval() can be an important option.
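As an illustration, here is a hedged sketch of generating pseudo-labels with BatchNorm in inference mode and then restoring training mode for the weight update. It assumes a generic single-branch segmentation model whose forward pass returns per-pixel logits; generate_pseudo_labels is a hypothetical helper, not code from the repository:

```python
import torch

def generate_pseudo_labels(model, unlabeled_images):
    """Hypothetical helper: predict pseudo-labels with BatchNorm using
    its running statistics, then restore the previous mode."""
    was_training = model.training
    model.eval()                      # BatchNorm uses running statistics
    with torch.no_grad():             # no graph is needed for labels
        logits = model(unlabeled_images)  # (N, C, H, W)
        pseudo = logits.argmax(dim=1)     # (N, H, W) class indices
    if was_training:
        model.train()                 # back to batch statistics for the SGD step
    return pseudo
```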
I plan to take my research a little further and compare this setting with the original method.
I will share the results with you in the near future.
Good luck.
Looking at your code, model.eval() is not used; instead, only torch.no_grad() is used (e.g. https://github.com/charlesCXK/TorchSemiSeg/blob/main/exp.voc/voc8.res50v3%2B.CPS%2BCutMix/train.py).
However, I found that when each network generates pseudo-labels from its predictions, pseudo-label accuracy is much higher with model.eval() than with torch.no_grad() alone.
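To quantify this, one could compare pseudo-label accuracy under the two settings on a labeled batch. A rough sketch, assuming a model that returns per-pixel logits and ground-truth masks of shape (N, H, W); pseudo_label_accuracy is a hypothetical helper (it does not handle an ignore index, for simplicity):

```python
import torch

def pseudo_label_accuracy(model, images, masks, use_eval):
    """Hypothetical check: fraction of pixels where the argmax
    pseudo-label matches the ground truth. Note that forwarding in
    train mode updates BatchNorm running stats as a side effect."""
    if use_eval:
        model.eval()
    else:
        model.train()
    with torch.no_grad():
        pred = model(images).argmax(dim=1)
    return (pred == masks).float().mean().item()

# Compare the two modes on the same batch:
# acc_eval  = pseudo_label_accuracy(model, images, masks, use_eval=True)
# acc_train = pseudo_label_accuracy(model, images, masks, use_eval=False)
```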
Is this intentional, or did the authors have another reason for this choice?
Thank you!