damo-cv / TransReID

[ICCV-2021] TransReID: Transformer-based Object Re-Identification

how to do attention visualization? #41

oldDriver69 opened this issue 2 years ago

oldDriver69 commented 2 years ago

I followed the online attention visualization tutorials for ViT, but I couldn't reproduce the effect shown in your paper. Could you share the code for the visualization part? Thank you very much.

heshuting555 commented 2 years ago

Hi, we referred to the following code bases for the visualization in the paper: https://github.com/jacobgil/vit-explain and https://github.com/jeonsworld/ViT-pytorch/blob/main/visualize_attention_map.ipynb

Hope the answer is helpful.
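For later readers: both references implement attention rollout, i.e. head-average each block's attention matrix, fold in the residual connection, multiply the matrices across blocks, and visualize the CLS-token row reshaped to the patch grid. Below is a minimal sketch of that idea, not the authors' exact script; how you collect the per-block attention maps (e.g. via forward hooks on the attention modules) and the grid size for TransReID's sliding-window patch embedding are assumptions you need to adapt to your setup.

```python
import torch
import torch.nn.functional as F

def attention_rollout(attentions):
    """Attention rollout as in jacobgil/vit-explain: head-average each
    block's attention, fold in the residual connection, multiply across
    blocks, and read off the CLS-token row.

    attentions: list of per-block attention tensors, each of shape
                [batch, heads, tokens, tokens] (collect them with forward
                hooks; the exact hook point depends on your ViT code).
    Returns:    [batch, tokens - 1] attention of the CLS token to the patches.
    """
    result = None
    with torch.no_grad():
        for attn in attentions:
            attn = attn.mean(dim=1)                                   # [B, T, T]
            eye = torch.eye(attn.size(-1), device=attn.device, dtype=attn.dtype)
            attn = 0.5 * attn + 0.5 * eye                             # residual connection
            attn = attn / attn.sum(dim=-1, keepdim=True)              # re-normalize rows
            result = attn if result is None else torch.bmm(attn, result)
    return result[:, 0, 1:]                                           # CLS -> patch tokens

def rollout_to_heatmap(rollout, grid_hw, image_hw):
    """Reshape the rollout onto the patch grid and upsample to image size.
    grid_hw must match your token layout; TransReID's sliding-window patch
    embedding changes it, so read it from your config rather than assuming
    image_size // 16."""
    mask = rollout.reshape(rollout.size(0), 1, *grid_hw)
    mask = F.interpolate(mask, size=image_hw, mode="bilinear", align_corners=False)
    lo = mask.amin(dim=(2, 3), keepdim=True)
    hi = mask.amax(dim=(2, 3), keepdim=True)
    return (mask - lo) / (hi - lo + 1e-8)   # [B, 1, H, W], overlay with a colormap
```

The normalized map can then be blended over the input image (e.g. with a JET colormap) to get overlays similar to the linked notebook.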

realjoshqsun commented 2 years ago

Hi,

These two methods are transformer-specific. However, Grad-CAM, which is mentioned in your paper, is not transformer-specific; to the best of my knowledge it works with different architectures.
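For anyone who wants to try the Grad-CAM route: since Grad-CAM only needs the activations and gradients of one chosen layer, a small hook-based version is enough to experiment with. The sketch below is a generic illustration, not code from the paper; the target layer, the [B, C, H, W] activation shape (for a transformer backbone you would first reshape the patch tokens onto the patch grid), and the scalar used for backpropagation (feature norm or a similarity score, since re-ID has no class logits at test time) are all assumptions.

```python
import torch

class SimpleGradCAM:
    """Minimal hook-based Grad-CAM sketch. Assumes the target layer
    outputs a spatial map of shape [B, C, H, W]; for a ViT backbone,
    reshape the patch tokens to the grid before applying this idea."""

    def __init__(self, model, target_layer):
        self.model = model.eval()
        self.activations = None
        self.gradients = None
        target_layer.register_forward_hook(self._save_activation)
        target_layer.register_full_backward_hook(self._save_gradient)

    def _save_activation(self, module, inputs, output):
        self.activations = output.detach()

    def _save_gradient(self, module, grad_input, grad_output):
        self.gradients = grad_output[0].detach()

    def __call__(self, image, score_fn):
        # score_fn turns the model output into a scalar to backpropagate.
        # For re-ID, the feature norm or the cosine similarity to another
        # image's feature are common choices.
        self.model.zero_grad()
        output = self.model(image)
        score_fn(output).backward()
        weights = self.gradients.mean(dim=(2, 3), keepdim=True)   # GAP of gradients
        cam = (weights * self.activations).sum(dim=1)             # [B, H, W]
        return torch.relu(cam)

# Hypothetical usage (the layer choice and score function depend on your model):
# cam_map = SimpleGradCAM(model, target_layer)(img_tensor, lambda f: f.norm())
```

Upsample the returned map to the image size and normalize it the same way as the rollout heat map above before overlaying.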

ww5171351 commented 2 years ago

Hi, I am still running into some problems with this visualization. Could you leave a contact method so I can ask you about it? My QQ is 563679292. Thank you very much.