jiasenlu / AdaptiveAttention

Implementation of "Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning"
https://arxiv.org/abs/1612.01887

how to visualize the attention map of each word on a tested image #7

Open alandonald opened 7 years ago

alandonald commented 7 years ago

Hi Jiasen, the demo.ipynb file in the project shows how to generate captions, but I don't know how to get the corresponding attention map for each word. Could you show an example? Thanks!

YeDeming commented 7 years ago

I think you can use eval_visulization.lua to generate 'visu_gt_test.json' and 'atten_gt_test_1'. By toggling the commented-out code, it can generate attention for either the ground-truth sentence or a sampled sentence. Then use visu/visAtten.lua to process the data.

I wrote the attention data out and used that code to generate the heat maps.
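For the last step (turning exported per-word attention weights into a heat map overlay), here is a minimal NumPy sketch. It assumes the attention for one word has been exported as a small k x k spatial grid (e.g. 7 x 7 over the CNN feature map); the function names and shapes are illustrative, not part of this repo:

```python
import numpy as np

def upsample_attention(atten, img_h, img_w):
    """Nearest-neighbour upsample a k x k attention grid to image size."""
    k = atten.shape[0]
    # repeat each attention cell over its receptive field, then crop to image size
    cell_h = int(np.ceil(img_h / k))
    cell_w = int(np.ceil(img_w / k))
    up = np.kron(atten, np.ones((cell_h, cell_w)))
    return up[:img_h, :img_w]

def overlay_heatmap(image, atten, alpha=0.5):
    """Blend a normalised attention heat map over an RGB image.

    image: H x W x 3 float array with values in [0, 1].
    atten: k x k attention weights for a single generated word.
    """
    h, w = image.shape[:2]
    heat = upsample_attention(atten, h, w)
    # normalise attention to [0, 1] so weak maps are still visible
    heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)
    # alpha-blend the heat map into the red channel as a simple visualisation
    overlay = image.copy()
    overlay[..., 0] = (1 - alpha) * image[..., 0] + alpha * heat
    return overlay
```

Calling `overlay_heatmap(img, atten_per_word[t])` for each time step t then gives one image per generated word; for display you could pass each result to `matplotlib.pyplot.imshow`, or use a proper colormap instead of the plain red-channel blend.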

alandonald commented 7 years ago

thanks @YeDeming

taaadaaa1 commented 7 years ago

Hello @YeDeming @alandonald, can keras-cam generate heat maps for each word? If so, please tell me how to do that. Thank you!