hila-chefer/Transformer-MM-Explainability
[ICCV 2021 - Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.
MIT License · 801 stars · 107 forks
Issues
#41 · How to use these examples to visualize Grounding DINO or GLIP? · meaquanana · opened 1 week ago · 0 comments
#40 · No negative word importance · Faiail · opened 7 months ago · 0 comments
#39 · torch.nn.modules.module.ModuleAttributeError: 'ResidualAttentionBlock' object has no attribute 'attn_probs' · TongLi97 · opened 8 months ago · 7 comments
#38 · SSL error · fzb408 · opened 10 months ago · 0 comments
#37 · How to apply this work to the google/vit model from Hugging Face? · MohieEldinMuhammad · opened 1 year ago · 1 comment
#36 · save_visual_results in VisualBERT · guanhdrmq · opened 1 year ago · 0 comments
#35 · Fix memory leak · lenbrocki · opened 1 year ago · 0 comments
#34 · Applicability to decoder transformers with causal mask · Aamer98 · closed 1 year ago · 0 comments
#33 · When trying to use the Colab notebook for RN50, I'm getting AttributeError: 'ModifiedResNet' object has no attribute 'transformer' · jiheddachraoui · opened 1 year ago · 0 comments
#32 · self.attn_probs in ResidualAttentionBlock() causes problems - how to make explainability work with the mlfoundations/open_clip model · tahirmiri · opened 1 year ago · 0 comments
#31 · Is this really using the technique from the publication? · entrity · closed 1 year ago · 1 comment
#30 · Checking how well this works with Segment Anything? · nahidalam · closed 1 year ago · 1 comment
#29 · Use non-hacked models · josh-freeman · closed 1 year ago · 1 comment
#28 · Readability of CLIP notebook · josh-freeman · opened 1 year ago · 4 comments
#27 · Update clip.py · josh-freeman · closed 1 year ago · 1 comment
#26 · Application to Sparse/Low-Rank Attention Matrices · FarzanT · closed 1 year ago · 1 comment
#25 · CVE-2007-4559 Patch · TrellixVulnTeam · opened 2 years ago · 0 comments
#24 · Question about the CLIP Demo · Hoyyyaard · closed 2 years ago · 1 comment
#23 · ImportError: No module named lxmert.lxmert.src.tasks · CaffreyR · closed 2 years ago · 2 comments
#22 · Using the methods for a custom architecture · nelaturuharsha · closed 2 years ago · 4 comments
#21 · Problems with running it in Google Colab · songhuadan · closed 2 years ago · 2 comments
#20 · Adds link to the Hugging Face demo for CLIP explainability · bpiyush · closed 2 years ago · 0 comments
#19 · Object detection/segmentation explainability · jaiswati · closed 2 years ago · 1 comment
#18 · Question about the visualization of CLIP's text token · Kihensarn · closed 2 years ago · 2 comments
#17 · Details about the changes in the code of base models · NikhilM98 · opened 2 years ago · 1 comment
#16 · In 6.2 LXMERT, an error is reported: "requests.exceptions.MissingSchema: Invalid URL 'val2014COCO_val2014_000000092107.jpg': No schema supplied." · Shuai-Lv · closed 2 years ago · 1 comment
#15 · The dataset downloaded automatically is too large · Shuai-Lv · closed 2 years ago · 1 comment
#14 · How can I choose the method when I run the script? · Shuai-Lv · closed 2 years ago · 2 comments
#13 · Generate relevance matrix in ViT of Hugging Face · SketchX-QZY · closed 2 years ago · 2 comments
#12 · Questions about the relevance matrix · Paandaman · closed 2 years ago · 2 comments
#11 · Batching for CLIP explainability · Alacarter · closed 2 years ago · 2 comments
#10 · Questions about CLIP visualization · tingxueronghua · closed 2 years ago · 4 comments
#9 · Question about the visualization for ViT? · zhaoxin94 · closed 2 years ago · 2 comments
#8 · CLIP ViT-B/16 · sanjayss34 · closed 2 years ago · 11 comments
#7 · COCO 2014 or 2017? · anguyen8 · closed 3 years ago · 4 comments
#6 · Question about ViT · scott870430 · closed 2 years ago · 6 comments
#5 · When trying to use the Colab notebook for CLIP ViT-B/16, I'm getting ModuleAttributeError: 'ResidualAttentionBlock' object has no attribute 'attn_probs' · DanJbk · closed 3 years ago · 2 comments
#4 · Swin Transformer · KP-Zhang · closed 3 years ago · 8 comments
#3 · attn_grad · betterze · closed 3 years ago · 2 comments
#2 · Request for vanilla example notebook · oliverdutton · closed 3 years ago · 7 comments
#1 · Question about the CLIP Demo · g-luo · closed 3 years ago · 2 comments