HiLab-git / CA-Net

Code for Comprehensive Attention Convolutional Neural Networks for Explainable Medical Image Segmentation.

Visualizing the attention weight #1

Open 506126283 opened 3 years ago

506126283 commented 3 years ago

Hello, may I ask how to save the attention weight maps from the intermediate steps of the network? Thank you very much.

JoeGue commented 3 years ago

Hello, you can convert the intermediate attention weights from a tensor to a numpy array, and then save them as an image (.png, .jpg).
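
For example, here is a minimal sketch of that tensor-to-image conversion (the tensor 'att' below is a placeholder standing in for an attention map taken from the forward pass, and the file name is only an example):

    import numpy as np
    import torch
    from skimage.io import imsave

    # Placeholder standing in for an attention tensor of shape (N, C, H, W)
    # taken from the network's forward pass.
    att = torch.rand(1, 1, 64, 64)

    # Detach from the graph and move to CPU before converting to numpy.
    att_np = att.cpu().detach().numpy().astype(np.float32)

    # Pick one sample and one channel to get a 2D map, then rescale to [0, 255].
    att_2d = att_np[0, 0]
    att_2d = (att_2d - att_2d.min()) / (att_2d.max() - att_2d.min() + 1e-8)
    imsave("atten_map.png", (att_2d * 255).astype(np.uint8))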

rp775 commented 3 years ago

Hi Joe: sorry, I couldn't understand how I can save the attention weight map from the middle step of the network. I see that network.py has atten3_map. Do I need to save this as an image?

atten3_map = att3.cpu().detach().numpy().astype(np.float)

In validation.py, the model currently returns only the output. Should it be changed to the following?

output, atten2_map, atten3_map = model(image)  # model output

Could you please provide a little more information on how I can get the attention weights for test data to see the results?

JoeGue commented 3 years ago

Hi rp775, yes, you need to save the attention weight as an image. As you can see in validation.py and 'network.py', we have already provided the commands for obtaining the attention weights, so you can just replace 'output = model(image)' with 'output, atten2_map, atten3_map = model(image) # model output'. Following the same idea, you need to return the attention weights in 'network.py'.

Sincerely, Joe
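
To make the network-side change concrete, here is a minimal sketch of the idea (a toy forward(), not the real CA-Net code; the layer names and attention block are placeholders for the ones in network.py):

    import torch
    import torch.nn as nn

    class ToyNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(3, 8, 3, padding=1)
            self.att = nn.Conv2d(8, 1, 1)   # stands in for a spatial attention block
            self.head = nn.Conv2d(8, 2, 1)

        def forward(self, x):
            feat = self.conv(x)
            att_map = torch.sigmoid(self.att(feat))   # attention weights in [0, 1]
            out = self.head(feat * att_map)
            # Return the attention map alongside the segmentation output,
            # analogous to 'return out, atten2_map, atten3_map' in network.py.
            return out, att_map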

rp775 commented 3 years ago

Thanks, Joe, for the reply.

  1. I enabled these lines in network.py:

     atten3_map = att3.cpu().detach().numpy().astype(np.float)
     atten3_map = ndimage.interpolation.zoom(atten3_map, [1.0, 1.0, 224 / atten3_map.shape[2], 300 / atten3_map.shape[3]], order=0)

     and return out, atten2_map, atten3_map

  2. I got the atten3_map ndarray in validation.py:

     from skimage.io import imsave
     output, atten2_map, atten3_map = model(image)  # model output
     imsave("./result/atten_map/" + str(step) + ".png", atten3_map)

But when I run with this change, I get the error below. I'm not sure what I am missing; I can't see results like those shown in the paper (the predicted segmentation for test data).

    raise ValueError("Image must be 2D (grayscale, RGB, or RGBA).")
ValueError: Image must be 2D (grayscale, RGB, or RGBA).

Am I missing anything here?
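
For reference, that error means imsave was given an array that is not a 2D image (or H x W x 3/4), and the zoomed atten3_map still has shape (N, C, H, W). A minimal sketch of one way around it, assuming this runs inside the validation loop where atten3_map and step are defined and that visualizing the first sample and channel is acceptable:

    import numpy as np
    from skimage.io import imsave

    # Select one sample and one channel to obtain a 2D map.
    att_2d = atten3_map[0, 0]

    # Normalize to [0, 255] and save as an 8-bit grayscale image.
    att_2d = (att_2d - att_2d.min()) / (att_2d.max() - att_2d.min() + 1e-8)
    imsave("./result/atten_map/" + str(step) + ".png", (att_2d * 255).astype(np.uint8))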

jiatian56 commented 3 years ago

Hi rp775, I also encountered the same problem when trying to save the predicted segmentation map in validation.py.

    raise ValueError("Image must be 2D (grayscale, RGB, or RGBA).")
ValueError: Image must be 2D (grayscale, RGB, or RGBA).

Have you solved it?

RliFJD commented 2 years ago

Quoting Joe's reply above: Hi rp775, yes, you need to save the attention weight as an image. As you can see in validation.py and 'network.py', we have provided the commands for obtaining the attention weights, so you just need to replace 'output = model(image)' with 'output, atten2_map, atten3_map = model(image) # model output'. Following the same idea, you need to return the attention weights in 'network.py'. Sincerely, Joe

(screenshots attached)

I have done what you said, but the attention weight map is not generated in the result folder.

BrightHuang0519 commented 1 year ago

Hi everyone. I've tried everything, but I still can't solve the problem. Please tell me how I can get the attention weights for test data to see the results? Thank you very much.

JoeGue commented 1 year ago

Hi everyone, as I said previously: if you want to get the attention weights for test data and see the attention weight map, you just need to replace 'output = model(image)' with 'output, atten2_map, atten3_map = model(image)' and, following the same idea, return the attention weights in 'network.py'. Then you need to save the returned attention map, e.g. imsave("./result/atten_map/"+str(step)+".png", atten3_map). Finally, you should prepare the original image as well as the saved attention map image and run the 'show_fused_heatmap.py' file.
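
Putting those steps together, a rough sketch of the test-time loop might look like the following (variable names such as 'test_loader' and the output directory are assumptions; they follow the style of validation.py in this thread rather than its exact code):

    import numpy as np
    from skimage.io import imsave

    for step, image in enumerate(test_loader):
        # network.py has been modified to also return the attention maps.
        output, atten2_map, atten3_map = model(image)

        # Reduce the map to 2D and rescale before saving.
        att = atten3_map[0, 0]
        att = (att - att.min()) / (att.max() - att.min() + 1e-8)
        imsave("./result/atten_map/" + str(step) + ".png", (att * 255).astype(np.uint8))

    # Afterwards, run show_fused_heatmap.py with the original image and the
    # saved attention map to produce the fused visualization.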