YudeWang / SEAM

Self-supervised Equivariant Attention Mechanism for Weakly Supervised Semantic Segmentation, CVPR 2020 (Oral)

The parameters in optimizer #21

Closed: lzyhha closed this issue 3 years ago

lzyhha commented 3 years ago

Hello, I noticed that the order of the positional arguments in PolyOptimizer (params, lr, weight_decay) differs from the official torch.optim.SGD signature, SGD(params, lr, momentum=0, dampening=0, weight_decay=0, ...). So I think the value of weight_decay is actually being assigned to momentum. Is that so?

class PolyOptimizer(torch.optim.SGD):

    def __init__(self, params, lr, weight_decay, max_step, momentum=0.9):
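        # BUG: weight_decay is passed positionally into SGD's momentum slot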
        super().__init__(params, lr, weight_decay)
YudeWang commented 3 years ago

@lzyhha Thanks for checking. This part of the code is borrowed from https://github.com/jiwoon-ahn/psa, and it looks like a small bug that invalidates the momentum: the weight_decay value is consumed by SGD's momentum argument. Maybe you can also report this bug to the psa repository.
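For anyone hitting this: a minimal sketch of a fix, assuming the intent is SGD with both momentum and weight decay. Passing the arguments by keyword avoids the positional mismatch; storing max_step is an assumption based on it being a constructor parameter used by the polynomial schedule elsewhere in the class.

import torch

class PolyOptimizer(torch.optim.SGD):

    def __init__(self, params, lr, weight_decay, max_step, momentum=0.9):
        # Pass momentum and weight_decay by keyword so each value reaches
        # the matching torch.optim.SGD argument.
        super().__init__(params, lr, momentum=momentum, weight_decay=weight_decay)
        # Assumption: max_step is kept for the polynomial LR schedule
        # implemented elsewhere in the class.
        self.max_step = max_step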

lzyhha commented 3 years ago

OK, thank you for your reply 😊.