YudeWang / SEAM

Self-supervised Equivariant Attention Mechanism for Weakly Supervised Semantic Segmentation, CVPR 2020 (Oral)
MIT License

Loss_ER is too small. Is it really helpful? #1

Closed zzzzzz0407 closed 4 years ago

zzzzzz0407 commented 4 years ago

Thanks for your wonderful work. However, when reading your code I noticed that loss_er is very small compared with the other two losses. The reason is that you apply the mean operation directly over all channels, but most of the entries are zero (you zero out C-1 of the channels). I also noticed that in your paper the improvement from loss_er is much smaller than that from loss_ecr, so I suspect this may be a bug? I am sorry that I do not have enough GPUs to reproduce it. Looking forward to your reply.
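A minimal numpy sketch of the dilution effect being described (shapes and names are illustrative, not taken from the SEAM code): when C-1 channels of the residual are zeroed and the mean is taken over all channels, the loss is scaled down by a factor of C compared with averaging over the single active channel.

```python
import numpy as np

# Hypothetical shapes; SEAM's CAMs have the form (N, C, H, W).
N, C, H, W = 2, 21, 8, 8
rng = np.random.default_rng(0)

diff = np.zeros((N, C, H, W))
# Suppose only one channel per sample carries a non-zero residual,
# mimicking the case where C-1 channels are set to zero before the loss.
diff[:, 0] = rng.random((N, H, W))

loss_all = np.abs(diff).mean()            # mean over all C channels
loss_active = np.abs(diff[:, 0]).mean()   # mean over the active channel only

# The zeros dilute the mean: loss_all == loss_active / C, so the raw
# mean acts as an implicit 1/C weight on loss_er.
assert np.isclose(loss_all, loss_active / C)
```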

YudeWang commented 4 years ago

Hi @zzzzzz0407, thanks for your suggestion. Before coming up with the idea of PCM, I had noticed this and fixed it. However, I found that loss_er needs a small weight to learn well, otherwise the results are not good enough. Therefore, I rolled back to the old version you see: it looks like an oversight, but it works, and it avoids introducing a separate weight parameter for loss_er. It is a good idea to verify whether loss_er is still necessary once SEAM has loss_ecr. I would like to run some experiments later (my server is down and I cannot get back to work due to the city restrictions for COVID-19).
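The trade-off in this reply can be sketched numerically (all names here are hypothetical, not from the SEAM repo): averaging the residual over only the active channel and then applying an explicit weight of 1/C gives exactly the same value as the "rolled-back" mean over all channels, so the dilution serves as a built-in small weight with no extra hyperparameter to tune.

```python
import numpy as np

# Same hypothetical setup: one non-zero channel out of C.
N, C, H, W = 2, 21, 8, 8
rng = np.random.default_rng(1)
diff = np.zeros((N, C, H, W))
diff[:, 0] = rng.random((N, H, W))

# "Fixed" version: average only over the active channel,
# then apply an explicit small weight.
weight = 1.0 / C
loss_weighted = weight * np.abs(diff[:, 0]).mean()

# "Rolled-back" version: mean over all channels; the zeros supply
# the same 1/C scaling implicitly.
loss_implicit = np.abs(diff).mean()

assert np.isclose(loss_weighted, loss_implicit)
```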

zzzzzz0407 commented 4 years ago

Thanks for your reply.