LanglandsLin / ActCLR


The fifth formula in the paper #17

Open cervineee opened 1 month ago

cervineee commented 1 month ago

Hello, the fifth formula in the paper weights the channel dimensions and sums them, but the code does not reflect this; it averages instead. Can you tell me why? `cam = F.relu(cam.mean(dim=1, keepdim=True)) / (1e-8)`

LanglandsLin commented 1 month ago

https://github.com/LanglandsLin/ActCLR/blob/9b1b4be17f33bf2479f72c7987588c3c909325b1/net/st_gcn.py#L94

Here the weights are calculated and multiplied onto the channels.
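(For readers of this thread: a minimal sketch of the pipeline around that line, assuming generic tensor shapes and a stand-in scalar objective `z`; the variable names mirror the discussion, not the exact st_gcn.py code.)

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: y is a feature map (N, C, T, V); z is some scalar
# objective differentiated with respect to y. Both are placeholders for
# whatever st_gcn.py actually computes.
y = torch.randn(2, 64, 50, 25, requires_grad=True)  # batch, channels, time, joints
z = y.pow(2).mean()                                  # stand-in scalar objective

# Gradient of z w.r.t. y, averaged over time and joints: one weight per
# channel, i.e. the Grad-CAM alpha.
dz_dy = torch.autograd.grad(z, y)[0]                 # (N, C, T, V)
dz_dy_mean = dz_dy.mean(dim=(2, 3))                  # (N, C)

# Channel-wise weighting: broadcast per-channel weights over (T, V).
cam = dz_dy_mean[:, :, None, None] * y               # (N, C, T, V)

# The line under discussion: aggregate channels, then rectify
# (the normalization term from the quoted line is omitted here).
cam = F.relu(cam.mean(dim=1, keepdim=True))          # (N, 1, T, V)
```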

cervineee commented 1 month ago

I'm very sorry to bother you again. Why is the mean operation performed here? `cam = F.relu(cam.mean(dim=1, keepdim=True)) / (1e-8)`

cervineee commented 1 month ago

Sorry to bother you, but why do you index a list of nodes in the last dimension instead of using each node of the adjacency matrix? `for body_part in body_parts: cam[:, :, :, body_part] = cam[:, :, :, body_part].mean(dim=-1, keepdim=True)` As far as I can tell, this loop does not affect the original tensor `cam`. Why do you do this?

LanglandsLin commented 1 month ago

> I'm very sorry to bother you again. Why is the mean operation performed here? `cam = F.relu(cam.mean(dim=1, keepdim=True)) / (1e-8)`

This is a common operation in Grad-CAM; see *Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization*.
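For reference, the aggregation in question as written in the Grad-CAM paper (its Equations 1 and 2):

```latex
\alpha_k^c = \frac{1}{Z}\sum_i \sum_j \frac{\partial y^c}{\partial A^k_{ij}},
\qquad
L^c_{\text{Grad-CAM}} = \mathrm{ReLU}\Big(\sum_k \alpha_k^c A^k\Big)
```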

LanglandsLin commented 1 month ago

> Sorry to bother you, but why do you index a list of nodes in the last dimension instead of using each node of the adjacency matrix? `for body_part in body_parts: cam[:, :, :, body_part] = cam[:, :, :, body_part].mean(dim=-1, keepdim=True)` As far as I can tell, this loop does not affect the original tensor `cam`. Why do you do this?

We note that actions usually take place at the level of body parts, so we smooth within each part so that all joints in a part share the same activation weight.
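A minimal sketch of that smoothing; the `body_parts` index lists below are invented placeholders, the real groupings are defined in the ActCLR code:

```python
import torch

# Placeholder joint groupings for illustration only; the actual lists
# live in the ActCLR repository.
body_parts = [
    [0, 1, 2, 3],   # e.g. trunk
    [4, 5, 6],      # e.g. left arm
    [7, 8, 9],      # e.g. right arm
]

cam = torch.rand(2, 1, 50, 25)  # (N, 1, T, V) activation map

# In-place assignment: every joint in a part is overwritten with the
# part's mean activation, so activations become constant within a part.
for body_part in body_parts:
    cam[:, :, :, body_part] = cam[:, :, :, body_part].mean(dim=-1, keepdim=True)
```

Note that the indexed assignment does write back into `cam`; that write-back is what makes the smoothing take effect.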

cervineee commented 1 month ago

> I'm very sorry to bother you again. Why is the mean operation performed here? `cam = F.relu(cam.mean(dim=1, keepdim=True)) / (1e-8)`
>
> This is a common operation in Grad-CAM; see *Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization*.

Thank you for your answer. I went and checked the paper: in $L^c_{\text{Grad-CAM}}$ (Equation 2 of the Grad-CAM paper) there is no mean operation, and in Formula 5 of your paper the ReLU is applied to `cam = dz_dy_mean[:,:,None,None] * y`, which also does not mention the reason for taking the mean over the channel dimension.

LanglandsLin commented 1 month ago

In Grad-CAM's notation, `dz_dy_mean` corresponds to $\alpha$ and `y` to $A$. The Grad-CAM paper uses a summation over channels; we use the mean. They are the same because they differ only by a constant factor.
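Spelled out: with $K$ channels the mean is the sum scaled by $1/K$, and ReLU commutes with a positive constant,

```latex
\mathrm{ReLU}\Big(\frac{1}{K}\sum_{k}\alpha_k A^k\Big)
= \frac{1}{K}\,\mathrm{ReLU}\Big(\sum_{k}\alpha_k A^k\Big)
```

so the two maps are identical up to a global positive scale.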

cervineee commented 1 month ago

> https://github.com/LanglandsLin/ActCLR/blob/9b1b4be17f33bf2479f72c7987588c3c909325b1/net/st_gcn.py#L94
>
> Here the weights are calculated and multiplied onto the channels.

Thank you very much for your answer. Perhaps it is just me, but I still have doubts about the mean operation. In the calculation expressed by Formula 5 in the paper, the weighted sum `cam = dz_dy_mean[:,:,None,None] * y` has already been performed; the subsequent `cam.mean(dim=1, keepdim=True)` is not reflected in the formula and looks like a redundant calculation, which is why I keep asking about the role of the mean.

LanglandsLin commented 1 month ago

`cam = dz_dy_mean[:,:,None,None] * y` only performs the channel-wise weighting; `cam.mean(dim=1, keepdim=True)` then performs the channel-wise aggregation (the summation, up to a constant factor).
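A quick numerical check of that equivalence (shapes are assumed for illustration, not taken from the repo):

```python
import torch

N, C, T, V = 2, 8, 4, 5                      # arbitrary illustrative shapes
y = torch.randn(N, C, T, V)
dz_dy_mean = torch.randn(N, C)

# Channel-wise weighting followed by the mean over channels...
cam = dz_dy_mean[:, :, None, None] * y
via_mean = cam.mean(dim=1, keepdim=True)

# ...matches the explicit weighted sum of Formula 5, divided by C.
via_sum = cam.sum(dim=1, keepdim=True) / C

print(torch.allclose(via_mean, via_sum, atol=1e-6))  # True
```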
