hila-chefer / Transformer-MM-Explainability

[ICCV 2021 Oral] Official PyTorch implementation of Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.
MIT License

torch.nn.modules.module.ModuleAttributeError: 'ResidualAttentionBlock' object has no attribute 'attn_probs' #39

Open TongLi97 opened 8 months ago

TongLi97 commented 8 months ago

Executing this line of code: num_tokens = image_attn_blocks[0].attn_probs.shape[-1]

raises the following error: torch.nn.modules.module.ModuleAttributeError: 'ResidualAttentionBlock' object has no attribute 'attn_probs'

What causes this problem?

OHaiYo-lzy commented 6 months ago

I have the same problem here.

che011 commented 4 months ago

If you're using OpenAI's original CLIP, use this repo's CLIP folder instead. In this repo's clip/model.py (line 184), the authors define a new attribute, attn_probs, that the original CLIP repo does not have. Hope this helps!
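To illustrate what the comment above describes: a minimal sketch of a self-attention layer that stashes its last attention map in an `attn_probs` attribute. This is an illustrative reimplementation of the idea, not the repo's actual model.py code; the class name and shapes here are assumptions, only the `attn_probs` attribute name comes from the thread.

```python
import torch
import torch.nn as nn

class AttentionWithProbs(nn.Module):
    """Sketch of a self-attention layer that, unlike stock CLIP,
    keeps its last attention map in `attn_probs` for later inspection."""

    def __init__(self, dim, num_heads):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.attn_probs = None  # filled in on every forward pass

    def forward(self, x):
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)           # each: (B, heads, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) * (q.shape[-1] ** -0.5)
        attn = attn.softmax(dim=-1)
        self.attn_probs = attn                          # what the notebook reads back
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```

After a forward pass, `attn_probs` has shape (batch, heads, tokens, tokens), so `attn_probs.shape[-1]` gives the token count, which is exactly what the failing line in the notebook expects.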

asPagurus commented 4 months ago

You may git clone the current CLIP (place it in place of the CLIP directory inside this module) and patch it with the attached patch file. It adds a new file inside the clip directory and makes some additions to model.py. But you only get good results with older models, for example ViT-B/32; newer models show a lot of noise, and I don't know why (for now). If you want to run the notebook with the CLIP examples, you also have to copy the images from the current CLIP folder. patch0.patch

Marverlises commented 3 months ago

> If you're using the original openai's clip, do use this git repo's clip folder. In their clip/model.py line 184 they defined a new attribute attn_probs that the original clip repo did not have. Hope this helps!

From this one, https://github.com/openai/CLIP/blob/main/clip/model.py ? I couldn't find the replacement file. Can you explain in more detail? Thanks.

che011 commented 3 months ago

Use this repo's CLIP, at this link: https://github.com/hila-chefer/Transformer-MM-Explainability/blob/main/CLIP

The model file I was referring to is here: https://github.com/hila-chefer/Transformer-MM-Explainability/blob/main/CLIP/clip/model.py

Marverlises commented 3 months ago

> Use this repo's clip at this link: https://github.com/hila-chefer/Transformer-MM-Explainability/blob/main/CLIP
>
> The model file I was mentioning would be here: https://github.com/hila-chefer/Transformer-MM-Explainability/blob/main/CLIP/clip/model.py

OK, thank you!

zqs010908 commented 2 months ago

I used the CLIP at this link: https://github.com/hila-chefer/Transformer-MM-Explainability/blob/main/CLIP but an error message appears: AttributeError: 'NoneType' object has no attribute 'shape'
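A likely cause (this is an assumption, not confirmed by the thread): `attn_probs` is initialized to None and only assigned during a forward pass, so reading `attn_probs.shape` before the model has actually run produces exactly this NoneType error. A minimal sketch of that failure mode, with a hypothetical `Block` class standing in for the real ResidualAttentionBlock:

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """Hypothetical stand-in: `attn_probs` is None until forward() runs."""

    def __init__(self):
        super().__init__()
        self.attn_probs = None  # nothing recorded yet

    def forward(self, x):
        probs = torch.softmax(x @ x.transpose(-2, -1), dim=-1)
        self.attn_probs = probs  # only set here, during the forward pass
        return probs @ x

block = Block()
# block.attn_probs.shape  # would raise: 'NoneType' object has no attribute 'shape'
block(torch.randn(1, 4, 8))              # run the model first...
num_tokens = block.attn_probs.shape[-1]  # ...then attn_probs is populated
```

If that is what is happening here, the fix is to make sure the model's forward pass (e.g. the relevance/explainability call in the notebook) executes before any line that reads `attn_probs.shape`.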