speedinghzl / CCNet

CCNet: Criss-Cross Attention for Semantic Segmentation (TPAMI 2020 & ICCV 2019).
MIT License

cc_attention def INF #109

Open ZhangJT0127 opened 3 years ago

ZhangJT0127 commented 3 years ago

Why is the INF function used? I'd like to understand that.

Asthestarsfalll commented 3 years ago

@ZhangJT0127 In the paper, one dimension of the attention map is H+W-1. This is because each pixel's own position is computed twice (once along the row and once along the column), so one copy has to be removed. In the code, the INF function directly generates negative infinity and adds it to energy_H, so that when softmax is applied, the effect of counting the pixel itself twice is eliminated.
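The masking described above can be sketched as follows. This is a minimal reproduction of the idea, not the repo's exact `INF` function: a -inf diagonal is added to the `(B*W, H, H)` attention logits so that softmax gives the self-position zero weight.

```python
import torch

def inf_mask(B, H, W):
    # (B*W, H, H) tensor with -inf on the diagonal of each H x H slice.
    # Adding this to energy_H drives the self-position logit to -inf,
    # so softmax assigns it zero weight and each pixel is only counted
    # once across the criss-cross path (H+W-1 positions, not H+W).
    diag = torch.diag(torch.full((H,), float("inf")))
    return -diag.unsqueeze(0).repeat(B * W, 1, 1)

B, H, W = 1, 4, 4
energy_H = torch.zeros(B * W, H, H)  # dummy attention logits
attn = torch.softmax(energy_H + inf_mask(B, H, W), dim=-1)
print(attn[0])  # diagonal entries are exactly 0; each row still sums to 1
```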

Thatboy7 commented 2 years ago

@ZhangJT0127 In the paper, one dimension of the attention map is H+W-1. This is because each pixel's own position is computed twice (once along the row and once along the column), so one copy has to be removed. In the code, the INF function directly generates negative infinity and adds it to energy_H, so that when softmax is applied, the effect of counting the pixel itself twice is eliminated.

Is there another method? torch.diag isn't supported by my ONNX version, and when I use torch.eye, my TensorRT doesn't support it either.
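One possible workaround, sketched below, is to build the same diagonal mask from `torch.arange` plus an elementwise comparison, avoiding both `torch.diag` and `torch.eye`. A large finite negative value is used in place of -inf, which some inference backends handle more gracefully. Whether these ops export cleanly still depends on your ONNX opset and TensorRT version, so treat this as an assumption to verify, not a guaranteed fix:

```python
import torch

def diag_neg_mask(B, H, W, device=None):
    # H x H "identity" built from arange + comparison instead of
    # torch.diag / torch.eye, which may be unsupported on export.
    idx = torch.arange(H, device=device)
    eye = (idx.unsqueeze(0) == idx.unsqueeze(1)).float()
    # Large finite negative instead of -inf: softmax still drives the
    # diagonal weight to ~0, but backends that choke on inf are avoided.
    mask = eye * torch.finfo(torch.float32).min
    return mask.unsqueeze(0).expand(B * W, H, H)

B, H, W = 1, 4, 4
energy_H = torch.zeros(B * W, H, H)
attn = torch.softmax(energy_H + diag_neg_mask(B, H, W), dim=-1)
print(attn[0])  # diagonal weights underflow to 0 in the softmax
```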
