cherubicXN / afmplusplus


Question about Grad Visualization. #1

Open siyuada opened 4 years ago

siyuada commented 4 years ago

Hey, thanks for your excellent work! I noticed that in your paper you give a visualized interpretation that helps us understand what the network has learned.

Following this interesting result, I want to visualize your earlier network, AFM. However, I have a question about the sentence "In the computation, the gradients for the last layer are set to 1": does "the last layer" refer to the last_conv layer that produces the AFM output? And does that mean a map of ones is used for the backward pass?

I tried the guided backpropagation method to visualize the AFM network by summing the two AFM outputs into a scalar and then calling output.backward(). I saved the gradients of the input image, shown below: [image: gb]
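Concretely, my attempt is roughly the sketch below (model and image are placeholder names, and the guided-backprop ReLU hooks are left out for brevity):

```python
import torch

# Placeholder names: `model` is the AFM network, `image` a normalized input
# tensor of shape (1, 3, H, W); guided-backprop ReLU hooks are not shown.
image = image.detach().clone().requires_grad_(True)

afm = model(image)       # AFM output, shape (1, 2, H, W)
scalar = afm.sum()       # sum the two AFM outputs into a single scalar
scalar.backward()        # backprop from the scalar

saliency = image.grad.detach()   # gradients of the input image (the map shown above)
```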

In your paper, the result looks much cleaner and makes more sense: [image from the paper]

Could you give me some advice on how to get a clean saliency map like yours? Thanks!

cherubicXN commented 4 years ago


Hi, thanks for your question. I just set the gradient of the last layer to torch.ones((1, 2, H, W)) for guided backpropagation. I will release the code as soon as possible.
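For now, the core of it is just seeding the backward pass with an all-ones map; a minimal sketch (placeholder names, and the guided-backprop ReLU hooks are omitted):

```python
import torch

# Placeholder names: `model` is the AFM network, `image` the input tensor;
# the guided-backprop ReLU hooks are omitted for brevity.
image = image.detach().clone().requires_grad_(True)

afm = model(image)               # last-layer output, shape (1, 2, H, W)
ones = torch.ones_like(afm)      # gradients for the last layer set to 1

afm.backward(gradient=ones)      # seed the backward pass with the ones map
saliency = image.grad.detach()   # saliency map on the input image
```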

siyuada commented 4 years ago

Thanks for your quick reply! I tried setting the last-layer gradient to torch.ones((1, 2, H, W)), but the result I get is just like before.

Have you visualized the AFM gradient output before? Does the outlier removal contribute to that clean gradient map? For a dense attraction field map, it seems that every pixel in the image contributes something to the predicted value there, so the saliency map should be dense and could not be as clean as yours.

Also, I tried to reduce the number of pixels used to compute the gradient by thresholding with abs(ax) <= 0.02 (on the log-transformed field); the result is cleaner than before, but there is still a gap with yours: [image: gb]
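What I did is roughly the following sketch (placeholder names; I assume the model output is already the log-transformed attraction field):

```python
import torch

# Placeholder names: `afm` is the (1, 2, H, W) log-transformed AFM output,
# `image` is the input tensor with requires_grad=True.
ax = afm[:, 0:1]                          # x-channel of the attraction field

# Keep the gradient seed only where |ax| <= 0.02, zero it elsewhere.
mask = (ax.abs() <= 0.02).float()         # (1, 1, H, W)
grad_seed = torch.ones_like(afm) * mask   # broadcast the mask over both channels

(saliency,) = torch.autograd.grad(outputs=afm, inputs=image, grad_outputs=grad_seed)
```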

Sorry to bother you; I am just getting started with guided backpropagation. I set the last-layer gradient with the code below, where output is the AFM map and sumout is the sum of output (only used for computing the gradient):

torch.autograd.grad(outputs=output, inputs=sumout, only_inputs=True, grad_outputs=torch.ones_like(output, requires_grad=True), allow_unused=True, retain_graph=True)
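Just to check my understanding: is the usual wiring for getting gradients of the input image supposed to look like the sketch below? (Here image is a placeholder for the network input tensor with requires_grad=True.)

```python
import torch

# Sketch of how I understand torch.autograd.grad: `outputs` is the tensor being
# differentiated, `inputs` are the tensors to take gradients with respect to,
# and `grad_outputs` is the gradient seed for the last layer.
grads = torch.autograd.grad(
    outputs=output,                        # the (1, 2, H, W) AFM map
    inputs=image,                          # gradients w.r.t. the input image
    grad_outputs=torch.ones_like(output),  # the all-ones last-layer gradient
    retain_graph=True,
)
saliency = grads[0]
```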

Anyway, this interpretation result is interesting; I hope you can help.