YudeWang / SEAM

Self-supervised Equivariant Attention Mechanism for Weakly Supervised Semantic Segmentation, CVPR 2020 (Oral)
MIT License

Optimization problem when training SEAM from scratch #12

Closed Angusky closed 3 years ago

Angusky commented 4 years ago

Hi, firstly thank you for releasing the code, I've successfully reproduced part of the result by using the provided weights.

However, when I tried to train SEAM from scratch (without any pretrained weights), the ER loss quickly drops to 0 while the ECR loss cannot decrease, and then the model stops improving. I have tried increasing the weight of the ECR loss, but the outcome is the same. Could you provide more details or suggestions on how to train SEAM without pretrained weights?

Thanks!

YudeWang commented 4 years ago

Hi @Angusky ,

Maybe increasing the weight of the classification loss will be helpful. The classification loss guides the network to preserve the basic shape of the CAM, while the ER and ECR losses revise that basic shape into a better result.
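To make the suggestion concrete, here is a minimal sketch of rebalancing the three SEAM losses. The function and weight names (`total_loss`, `w_cls`, `w_er`, `w_ecr`) are illustrative, not the repo's actual API; the idea is simply that raising the classification weight anchors the basic CAM shape when the ER loss collapses to 0 and the ECR loss stalls.

```python
def total_loss(loss_cls, loss_er, loss_ecr, w_cls=1.0, w_er=1.0, w_ecr=1.0):
    """Weighted sum of the three SEAM losses (illustrative names).

    Raising w_cls strengthens the classification term that preserves
    the basic CAM shape; w_er / w_ecr control how strongly the
    equivariant regularization terms revise that shape.
    """
    return w_cls * loss_cls + w_er * loss_er + w_ecr * loss_ecr

# Example: doubling the classification weight, per the author's suggestion,
# when ER has collapsed to 0 and ECR is stuck high.
print(total_loss(0.5, 0.0, 1.5, w_cls=2.0))  # 2.0*0.5 + 0.0 + 1.5 = 2.5
```

In practice these scalars would multiply the corresponding loss tensors before `backward()`; the exact variable names in the SEAM training script may differ.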

Angusky commented 4 years ago

> Hi @Angusky ,
>
> Maybe increasing the weight of the classification loss will be helpful. The classification loss guides the network to preserve the basic shape of the CAM, while the ER and ECR losses revise that basic shape into a better result.

Thanks for your quick reply, I will try that. May I ask whether you encountered a similar optimization problem during your training?

YudeWang commented 4 years ago

> Thanks for your quick reply, I will try that. May I ask whether you encountered a similar optimization problem during your training?

A similar optimization problem may appear in the revised CAM if the ECR loss is not set correctly.