hila-chefer / Transformer-MM-Explainability

[ICCV 2021 - Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method for visualizing any Transformer-based network. Includes examples for DETR and VQA.
MIT License

Update clip.py #27

Closed · josh-freeman closed this 1 year ago

josh-freeman commented 1 year ago

Add more models (as seen in https://colab.research.google.com/github/hila-chefer/Transformer-MM-Explainability/blob/main/CLIP_explainability.ipynb#scrollTo=7YYjztv3Nn9V)
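For context, here is a minimal usage sketch of what such an update enables, not the PR diff itself. It assumes the repo's clip.py keeps the upstream OpenAI CLIP interface (a `_MODELS` registry plus `clip.available_models()` and `clip.load()`), and the import path and model name are illustrative assumptions rather than a verbatim list of what the PR adds:

```python
# Hedged sketch: assumes clip.py mirrors the upstream OpenAI-CLIP API.
import torch
from CLIP import clip  # import path assumed from the repo layout; adjust if needed

device = "cuda" if torch.cuda.is_available() else "cpu"

# Every checkpoint registered in clip.py's _MODELS dict is listed here,
# so adding entries to that dict is enough to expose new variants.
print(clip.available_models())

# Load one of the registered variants ("ViT-B/32" is just an example).
# jit=False keeps the non-scripted model, which is what attention-based
# explainability hooks typically need.
model, preprocess = clip.load("ViT-B/32", device=device, jit=False)
model.eval()
```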

hila-chefer commented 1 year ago

Thanks! Merge confirmed.