Open ysj9909 opened 1 year ago
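Thank you for the release of the code for your paper. I was curious whether you could also share the code that produced Fig. 1 of the paper, i.e. the attention maps.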
@ysj9909 Thanks for your interest. I can add it by the end of May, since I have been really busy recently. I will let you know once it's added.
Thank you!!!
Hi, I'm a bit confused about how you obtain the corresponding attention map for different query points (different patch positions). Do you mask the patch at that position to get the corresponding attention, or something else? Looking forward to your reply.
@haiduo Hi, we simply compute the attention at the corresponding position.
Hello, could you share the code that generates these attention maps? I still don't understand how "different query points (different patch positions) give the corresponding attention maps". If possible, you can add me on WeChat at abc1056530546, or email me at haiduohaiduo@outlook.com. Thanks for your reply.
@haiduo Sure. The code isn't well organized, but it should run directly. https://drive.google.com/drive/folders/1okWf2noIrnDdOBNYUfl_iZe3LLjb_59n?usp=share_link
@ysj9909 Very sorry about the late reply! The visualization code can be downloaded here: https://drive.google.com/drive/folders/1okWf2noIrnDdOBNYUfl_iZe3LLjb_59n?usp=share_link It is not well organized, but it should work.
Please let me know if you have any further questions or concerns.
Best, Xu
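For readers who just want the idea without going through the Drive link, here is a minimal sketch of one way to extract a per-query attention map from a standard ViT. This is not the authors' released script: the model name (`vit_base_patch16_224` from timm), the choice of the last block, the hook on the `qkv` projection, and the head-averaging are all assumptions made for illustration.

```python
import torch
import timm

# Minimal sketch (NOT the authors' released code): grab per-query attention
# maps from a timm ViT with a 14x14 patch grid and a CLS token.
model = timm.create_model('vit_base_patch16_224', pretrained=True).eval()

attn_store = {}

def grab_attention(module, inputs, output):
    # Recompute attention from the qkv projection output so the sketch does
    # not depend on whether the block materializes the attention matrix.
    B, N, _ = output.shape
    num_heads = model.blocks[-1].attn.num_heads
    head_dim = output.shape[-1] // (3 * num_heads)
    qkv = output.reshape(B, N, 3, num_heads, head_dim).permute(2, 0, 3, 1, 4)
    q, k = qkv[0], qkv[1]
    attn = (q @ k.transpose(-2, -1)) * head_dim ** -0.5
    attn_store['attn'] = attn.softmax(dim=-1)          # (B, heads, N, N)

hook = model.blocks[-1].attn.qkv.register_forward_hook(grab_attention)
with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))                 # replace with a real preprocessed image
hook.remove()

def query_attention_map(attn, row, col, grid=14):
    """14x14 attention map for the query patch at grid position (row, col)."""
    q_idx = 1 + row * grid + col                        # +1 skips the CLS token
    weights = attn[0].mean(0)[q_idx, 1:]                # average heads, drop the CLS column
    return weights.reshape(grid, grid)

central_map = query_attention_map(attn_store['attn'], 7, 7)   # e.g. a central query patch
```

Each choice of `(row, col)` selects a different query token, whose row of the attention matrix is reshaped back onto the patch grid; that is what "computing the attention at the corresponding position" refers to above.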
I've gone through your code and now understand how different queries correspond to different attention maps. First, I don't quite agree that the weights computed for a particular query (or token), i.e., that query's row of the global attention map, reshaped to 14x14, should be called an attention map; at most it is the correlation of that token with all the other tokens (i.e., a vector). Second, the phenomenon you found is not really surprising: it has already been observed that in ViT-family models, the attention maps of deeper blocks show vertical-stripe patterns (see: vit_demo), so the rows for different queries look essentially the same, which is what your so-called 14x14 "attention map" after reshaping amounts to. Finally, thank you for sharing the code. If you have different thoughts, I'd be happy to discuss. Thanks.
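For what it's worth, a quick, informal way to test the vertical-stripe point on the attention tensor captured in the sketch above (again an illustration, not anyone's official code): if the mean cosine similarity between query rows is close to 1, every query yields essentially the same 14x14 map.

```python
import torch

def mean_row_similarity(attn):
    """Mean cosine similarity between query rows of an attention matrix.

    attn: (B, heads, N, N) tensor such as attn_store['attn'] above; a value
    close to 1.0 means all queries attend in nearly the same way, i.e. the
    attention matrix looks like vertical stripes.
    """
    a = attn[0].mean(0)                          # (N, N), heads averaged
    a = a / a.norm(dim=-1, keepdim=True)         # L2-normalize each query row
    return (a @ a.t()).mean().item()
```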
@haiduo Thank you for your thoughts and comments; they help us improve our work.