Closed linzhlalala closed 2 years ago
Dear @linzhlalala,
Thanks for your attention to our paper!
In our experiment, we visualize the attention weights of the first cross-attention layer.
To handle the multiple heads, we select the attention weights from the head with the largest spread in attention values.
The visualization code will be cleaned up and posted here in the future.
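A minimal sketch of this head-selection step, assuming the cross-attention weights come as a `(num_heads, num_queries, num_keys)` array where text tokens attend over image patches (the exact shapes, the max-minus-min spread criterion, and the square patch grid are assumptions, not the authors' released code):

```python
import numpy as np

def select_head(attn):
    """attn: (num_heads, num_queries, num_keys) cross-attention weights.
    Returns the index of the head whose attention values have the
    largest spread (max - min), as an assumed proxy for 'largest
    differences in the attention values'."""
    spread = attn.max(axis=(1, 2)) - attn.min(axis=(1, 2))
    return int(spread.argmax())

def attention_map(attn, query_idx, grid_size):
    """Reshape one text token's attention over image patches into a
    2D map that can be upsampled and overlaid on the image."""
    head = select_head(attn)
    return attn[head, query_idx].reshape(grid_size, grid_size)

# toy example: 8 heads, 10 text tokens, 49 (7x7) image patches
rng = np.random.default_rng(0)
attn = rng.random((8, 10, 49))
attn /= attn.sum(-1, keepdims=True)  # rows sum to 1, like softmax output
amap = attention_map(attn, query_idx=0, grid_size=7)
print(amap.shape)  # (7, 7)
```

The resulting map is typically bilinearly upsampled to the input resolution and blended with the image as a heatmap.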
Best, Zhihong
Thanks for sharing.
Hi Zhihong, thank you for sharing your code. I am also interested in the image-text attention visualizations in the paper. Have you cleaned up the visualization code? I haven't found a good way to visualize transformer-based models.
Hi, any progress?
I don't need the code. The idea is helpful enough. Thanks.
Hi Zhihong, thank you for sharing your code. I am interested in the image-text attention visualizations in the paper. Can you share which approach you used (another repository or code)? I am trying to do this but haven't found a solution for transformer-based models.