Closed · PerrySkywalker closed this issue 1 year ago
Ok, thank you very much.
Sorry, I still don't understand. When return_attn=True, the output shape of the Nystrom attention is [B, num_head, x, x], but I don't know what x is. Do you have the code for this?
https://github.com/jacobgil/vit-explain, watch this.
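The linked vit-explain repository implements attention rollout. A minimal sketch of the idea, with made-up shapes and random stand-in attention matrices (the real code also offers head fusion and discarding of low attentions):

```python
import numpy as np

# Toy per-layer attention maps standing in for a ViT's softmax outputs:
# num_layers matrices of shape [N, N], class token assumed at index 0.
rng = np.random.default_rng(0)
num_layers, N = 4, 10
layers = rng.random((num_layers, N, N))
layers /= layers.sum(axis=-1, keepdims=True)  # make each row sum to 1

# Attention rollout: mix each layer's attention with the identity to
# account for residual connections, renormalize, and multiply through
# the layers to propagate attention from input tokens to the output.
rollout = np.eye(N)
for A in layers:
    A = 0.5 * A + 0.5 * np.eye(N)
    A /= A.sum(axis=-1, keepdims=True)
    rollout = A @ rollout

# Attention flowing from the class token to the other tokens.
mask = rollout[0, 1:]
print(mask.shape)  # (9,)
```

In practice the per-layer matrices come from the model's attention weights rather than random data, and the resulting mask is reshaped to the patch grid and upsampled over the image.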
Do you know how to take out the column corresponding to the class token?
Can you provide your code for the visualization part? Thanks very much!
Ok, I will upload the code to my GitHub in a week.
Wow! Thanks a lot!
First, get the self-attention matrix from the Nystrom attention (set return_attn=True). Then take out the column corresponding to the class token, i.e., the attention of the class token to the rest of the feature tokens. Finally, we follow the CLAM framework.
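The steps above can be sketched as follows. This is a minimal illustration with a random NumPy stand-in for the attention tensor (in practice it would be the torch.Tensor returned with return_attn=True, and the same indexing applies); the [B, num_heads, N, N] shape and the class token sitting at index 0 are assumptions:

```python
import numpy as np

# Stand-in for the self-attention returned by Nystrom attention with
# return_attn=True: [B, num_heads, N, N], where N counts the class
# token plus the feature (patch) tokens, possibly padded.
rng = np.random.default_rng(0)
B, num_heads, N = 1, 8, 512
attn = rng.random((B, num_heads, N, N))
attn /= attn.sum(axis=-1, keepdims=True)  # row-normalize, like softmax

# Average over heads, then take the class-token slice (assumed index 0):
# the attention the class token pays to each feature token.
cls_attn = attn.mean(axis=1)[:, 0, 1:]  # shape [B, N - 1]

# Min-max normalize so the scores can be painted over the patches as a
# heatmap, CLAM-style.
cls_attn = (cls_attn - cls_attn.min()) / (cls_attn.max() - cls_attn.min())
print(cls_attn.shape)  # (1, 511)
```

The per-token scores are then mapped back to the patch coordinates to produce the slide-level heatmap.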