Closed. Jingfeng-Tang closed this issue 1 year ago.
We appreciate your recognition of our work.

For your question: first, `mean_loss` and `local_mean` correspond to $\lambda_i$ and $\lambda_{global}$ respectively, and lines 143-147 compute the $\max(\lambda_i, \lambda_{global})$ term of formula (12).

Lines 149-151 then complete formulas (12) and (13): in `clamp_loss`, the relatively large gradients are greater than zero, while the others are less than or equal to zero. We use `torch.clamp` to perform gradient clipping in the regions where the values of `clamp_loss` are greater than zero.

Thanks.
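To make the explanation above concrete, here is a minimal sketch of the described mechanism, not the repository's exact code: the per-patch loss is thresholded at $\max(\lambda_i, \lambda_{global})$, and `torch.clamp` removes the positive excess so that gradients are suppressed in those regions. The function name, the pooling-based local mean, and the detached threshold are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def clip_large_gradients(loss_map: torch.Tensor, kernel_size: int = 3) -> torch.Tensor:
    """Illustrative sketch of the gradient-clipping idea discussed above.

    loss_map: per-patch loss of shape (B, 1, H, W).
    Patches whose loss exceeds max(local mean, global mean) have the
    excess subtracted, so their gradients are clipped to zero.
    """
    global_mean = loss_map.mean()                      # plays the role of lambda_global
    local_mean = F.avg_pool2d(                         # plays the role of lambda_i
        loss_map, kernel_size, stride=1, padding=kernel_size // 2
    )
    # max(lambda_i, lambda_global); detached so the threshold itself
    # contributes no gradient (an assumption of this sketch)
    threshold = torch.maximum(local_mean, global_mean).detach()
    # clamp_loss > 0 exactly where the loss (and hence the gradient) is relatively large
    clamp_loss = loss_map - threshold
    # subtracting the clamped excess caps the loss at the threshold,
    # zeroing the gradient in those regions
    return loss_map - torch.clamp(clamp_loss, min=0)
```

With this formulation, patches below the threshold keep their loss (gradient 1 w.r.t. the loss map), while patches above it are capped at the threshold (gradient 0), which is the masked-gradient behaviour of formulas (12) and (13).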
Thanks for your brilliant work in WSSS! I have a question about the gradient clipping in https://github.com/hustvl/WeakTr/blob/main/OnlineRetraining/segm/model/decoder.py. I think the code at lines 143-147 already achieves the result (masked gradient patches) of formulas (12) and (13) in the paper.

In the paper, I did not find an explanation of the code at lines 149-151. Can you give an explanation? Thanks.