Syliz517 / CLIP-ReID

Official implementation for "CLIP-ReID: Exploiting Vision-Language Model for Image Re-identification without Concrete Text Labels" (AAAI 2023)
MIT License

Request for attention visualization #24

Closed — syh4661 closed this issue 2 weeks ago

syh4661 commented 1 year ago

Hello, thank you for the excellent results and for sharing your research.

I have a question about the visualization of CLIP-ReID discussed in the Ablation Studies and Analysis section of your paper, which uses the method of Chefer, H.; Gur, S.; and Wolf, L. 2021, "Transformer Interpretability Beyond Attention Visualization," Proceedings of the IEEE/CVF CVPR, pages 782–791.

I would like to visualize my own training results in the same way as Figure 3 of your paper. Could I please get access to the code you used for that visualization?

zedsharifi commented 9 months ago

Did you get the visualization code? Could you send it to me as well?

sourabh-patil commented 9 months ago

https://github.com/jacobgil/pytorch-grad-cam/tree/master
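The library linked above implements Grad-CAM and related CAM variants for PyTorch models. For reference, here is a minimal plain-PyTorch sketch of the Grad-CAM idea it is built on: global-average-pool the gradients of the target score with respect to a convolutional feature map, use them to weight the activations, and upsample the result into a heatmap. The `TinyBackbone` below is a hypothetical stand-in, not the CLIP-ReID model; to use this on a real ReID backbone you would swap in your own model and pick an appropriate target layer (for ViT backbones, the linked library additionally provides a reshape transform to map token sequences back to a 2-D grid).

```python
# Minimal Grad-CAM sketch in plain PyTorch. TinyBackbone is a hypothetical
# stand-in for a ReID backbone; replace model and target_layer with your own.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyBackbone(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        f = self.features(x)                                # (B, 16, H, W)
        pooled = F.adaptive_avg_pool2d(f, 1).flatten(1)     # (B, 16)
        return self.head(pooled)                            # (B, num_classes)

def grad_cam(model, target_layer, image, class_idx=None):
    """Return an (H, W) heatmap in [0, 1] for one image tensor of shape (3, H, W)."""
    acts, grads = {}, {}
    # Hooks capture the target layer's activations and the gradient flowing
    # back into them; lambdas return None so nothing is modified.
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))
    try:
        logits = model(image.unsqueeze(0))
        idx = int(logits.argmax(1)) if class_idx is None else class_idx
        model.zero_grad()
        logits[0, idx].backward()                           # score for the chosen class
    finally:
        h1.remove()
        h2.remove()
    # Channel weights = gradients averaged over the spatial dimensions.
    weights = grads["v"].mean(dim=(2, 3), keepdim=True)     # (1, C, 1, 1)
    cam = F.relu((weights * acts["v"]).sum(dim=1))[0]       # (H', W')
    cam = F.interpolate(cam[None, None], size=image.shape[1:],
                        mode="bilinear", align_corners=False)[0, 0]
    cam -= cam.min()
    return cam / (cam.max() + 1e-8)                         # normalize to [0, 1]

model = TinyBackbone().eval()
heatmap = grad_cam(model, model.features[2], torch.randn(3, 32, 32))
print(heatmap.shape)  # torch.Size([32, 32])
```

Note that Grad-CAM is a CNN-oriented alternative; the heatmaps in the CLIP-ReID paper itself use the Chefer et al. transformer-interpretability method cited above, whose official code is in a separate repository.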

seunghee-han commented 4 months ago

I would appreciate it if you could let me know too