-
Thanks for sharing this great work.
I would like to reproduce the paper's visualization results for the decoder self-attention weights in Figure 1 and the encoder self-attention weights in Figure 8.…
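Not the author, but one common way to pull these weights out is a forward hook on the attention modules; here is a minimal, self-contained sketch with a toy nn.MultiheadAttention standing in for one decoder layer (the module name used in the hook key is an assumption you would adapt to the real model):

```python
import torch
import torch.nn as nn

attn_maps = {}

def save_attn(name):
    def hook(module, inputs, outputs):
        # nn.MultiheadAttention returns (output, attn_weights);
        # attn_weights is None if the caller passed need_weights=False.
        if outputs[1] is not None:
            attn_maps[name] = outputs[1].detach().cpu()
    return hook

# Toy stand-in for one decoder self-attention layer.
mha = nn.MultiheadAttention(embed_dim=64, num_heads=8, batch_first=True)
mha.register_forward_hook(save_attn("decoder.layer0.self_attn"))

x = torch.randn(1, 10, 64)  # (batch, seq_len, embed_dim)
mha(x, x, x, need_weights=True, average_attn_weights=True)
print(attn_maps["decoder.layer0.self_attn"].shape)  # torch.Size([1, 10, 10])
```

Note that some transformer wrappers call their attention modules with need_weights=False internally, in which case that call has to be patched before the hook sees any weights.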
-
Thank you for releasing the code for your paper.
I was curious whether you could also share the code that produced Fig. 1 of the paper, i.e. the attention maps.
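Not the author either, but once you have a per-query attention map over the image tokens, this kind of figure is usually just an upsampled heat map blended onto the image; a minimal sketch, assuming the attention has already been reshaped to the coarse feature-map grid:

```python
import matplotlib.pyplot as plt
import numpy as np

def overlay_attention(img, attn, alpha=0.5):
    """img: HxWx3 array in [0, 1]; attn: 2-D attention map on the
    coarse feature-map grid, one value per image token."""
    attn = (attn - attn.min()) / (attn.max() - attn.min() + 1e-8)
    # Nearest-neighbour upsample of the coarse grid to the image resolution.
    attn_up = np.kron(attn, np.ones((img.shape[0] // attn.shape[0],
                                     img.shape[1] // attn.shape[1])))
    plt.imshow(img)
    plt.imshow(attn_up, cmap="jet", alpha=alpha)
    plt.axis("off")
    plt.show()
```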
-
Firstly, this is meaningful code, but I want to understand the principle behind it. I do not understand why you divided the image into patches.
Can you tell me the details of this code? Looking forward to y…
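Not the author, but the usual reason for patches is cost: self-attention scales quadratically with the number of tokens, so treating every pixel as a token is infeasible, while 16x16 patches turn a 224x224 image into just 196 tokens. A minimal sketch of the flattening a ViT applies before its linear projection:

```python
import torch

def patchify(images, patch_size=16):
    """Split (B, C, H, W) images into (B, N, C * patch_size**2) patch tokens."""
    B, C, H, W = images.shape
    # unfold extracts non-overlapping patch_size x patch_size blocks.
    patches = images.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
    patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * patch_size ** 2)
    return patches

x = torch.randn(2, 3, 224, 224)
print(patchify(x).shape)  # torch.Size([2, 196, 768])
```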
-
Hi, I find your work very interesting. Is there a way to visualize self-attention maps with the existing repo as well?
-
How did you obtain the word set, and how did you create the visualization that shows how closely each word is related to each prototype?
In the code, the vocabulary is used by taking the weight of the LLM's emb…
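Not the author, but if the vocabulary side is just the LLM's embedding matrix, the usual recipe is cosine similarity between each prototype vector and every vocabulary embedding, then top-k decoding. A hedged sketch, assuming a Hugging Face-style tokenizer and hypothetical tensors `prototypes` (num_prototypes, d) and `emb` (vocab_size, d):

```python
import torch
import torch.nn.functional as F

def nearest_words(prototypes, emb, tokenizer, k=5):
    # Cosine similarity between every prototype and every vocab embedding.
    sim = F.normalize(prototypes, dim=-1) @ F.normalize(emb, dim=-1).T
    top = sim.topk(k, dim=-1)
    # For each prototype, decode its k most similar vocabulary entries.
    return [
        [(tokenizer.decode([int(i)]), float(s)) for i, s in zip(idx, vals)]
        for idx, vals in zip(top.indices, top.values)
    ]
```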
-
Thanks for your great work. I was wondering whether it is possible to visualize deformable attention: since it does not produce dense maps like DETR, I am curious how to visualize its heat maps.
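Not the author, but deformable attention only attends to a handful of predicted sampling points per query, so instead of a dense heat map people usually scatter those points on the image, sized and colored by their attention weights. A sketch, assuming you have already extracted `sampling_locations` (normalized (x, y) in [0, 1]) and `attention_weights` for one query from the module:

```python
import matplotlib.pyplot as plt

def plot_deformable_attention(img, sampling_locations, attention_weights):
    """img: HxWx3 array; sampling_locations: (num_points, 2) in [0, 1];
    attention_weights: (num_points,) summing to 1 for one query."""
    H, W = img.shape[:2]
    plt.imshow(img)
    plt.scatter(sampling_locations[:, 0] * W,   # x in pixels
                sampling_locations[:, 1] * H,   # y in pixels
                s=attention_weights * 2000,     # marker area grows with weight
                c=attention_weights, cmap="jet")
    plt.axis("off")
    plt.show()
```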
-
Hi,
I want to ask how to get the predicted outputs (weight i, j) in Fig. 5.
Does it mean softmax(Query_j · Key_i / dim^0.5), or softmax(Query_j · Key_i / dim^0.5) · Value_i? bu…
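Not the author, but in the standard transformer formulation the visualized weight is the first expression: the post-softmax scalar, before any multiplication by V. Multiplying by the values gives the layer's output, not a weight. A minimal sketch:

```python
import torch

d = 64
Q = torch.randn(10, d)  # one row per query j
K = torch.randn(10, d)  # one row per key i
V = torch.randn(10, d)

# weights[j, i] = softmax_i(q_j . k_i / sqrt(d)) -- this scalar is what
# attention visualizations plot; V only enters the output below.
weights = torch.softmax(Q @ K.T / d ** 0.5, dim=-1)
output = weights @ V  # (10, d): the attention output, not the plotted weight
```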
-
Hello, could the authors or anyone here share the idea of how to use Attention Rollout to visualize the attention on the input frames?
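Not the author, but Attention Rollout (Abnar & Zuidema, 2020) just chains the per-layer attention matrices, with an identity term for the residual connections; a minimal sketch, which you would apply to the attention maps collected over your frame tokens:

```python
import torch

def attention_rollout(attentions):
    """attentions: list of per-layer attention maps, each (num_heads, seq, seq)."""
    rollout = torch.eye(attentions[0].size(-1))
    for attn in attentions:
        attn = attn.mean(dim=0)                       # average over heads
        attn = attn + torch.eye(attn.size(-1))        # account for the residual connection
        attn = attn / attn.sum(dim=-1, keepdim=True)  # renormalize rows
        rollout = attn @ rollout                      # chain layers bottom-up
    return rollout  # rollout[i, j]: influence of input token j on token i
```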
-
Thanks for your great work. Could you provide visualization code for the attention in the deformable decoder?
-
I followed the online attention-visualization tutorial for ViT, but it cannot achieve the effect shown in your paper. Could you share the code for the visualization part? Thank you very much.
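Not the author, but most ViT figures of this kind plot the last block's attention from the CLS token to the patch tokens, reshaped to the patch grid and upsampled; a minimal sketch, assuming a 224x224 input with 16x16 patches (grid=14):

```python
import torch
import torch.nn.functional as F

def cls_attention_map(attn, grid=14, image_size=224):
    """attn: (num_heads, seq, seq) from the last block, token 0 being CLS.
    Returns an (image_size, image_size) map of CLS -> patch attention."""
    cls_attn = attn.mean(dim=0)[0, 1:]        # CLS query row, drop the CLS key
    cls_attn = cls_attn.reshape(1, 1, grid, grid)
    return F.interpolate(cls_attn, size=(image_size, image_size),
                         mode="bilinear", align_corners=False)[0, 0]
```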