hila-chefer / Transformer-MM-Explainability

[ICCV 2021 Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers," a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.

Checking how well this works with Segment Anything? #30

Closed · nahidalam closed this 1 year ago

nahidalam commented 1 year ago

I am wondering if there is any ongoing effort to check how well this technique works with Segment Anything?

hila-chefer commented 1 year ago

Hi @nahidalam, thanks for your interest! We have not yet attempted to apply our technique to Segment Anything; it could be interesting to see the results. Will update if we do :)
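
For readers wondering what "applying the technique" would involve: below is a minimal sketch (not from this thread) of the paper's gradient-weighted self-attention relevance rule applied to a ViT-style image encoder such as SAM's. It assumes you have already collected per-layer attention maps and their gradients w.r.t. a target score (e.g. via forward/backward hooks on the attention modules); the names `attn_maps`, `attn_grads`, and `generic_attention_relevance` are hypothetical placeholders, not part of SAM's or this repo's API.

```python
import torch

def generic_attention_relevance(attn_maps, attn_grads):
    """Hedged sketch of the generic self-attention relevance update
    (Chefer et al., ICCV 2021) for an encoder-only Transformer.

    attn_maps / attn_grads: lists of [heads, tokens, tokens] tensors,
    one pair per self-attention block; gradients are taken w.r.t. the
    target score (for SAM, e.g., a mask or IoU prediction).
    """
    num_tokens = attn_maps[0].shape[-1]
    # Relevance starts as the identity: each token is initially self-relevant.
    R = torch.eye(num_tokens, device=attn_maps[0].device)
    for A, gradA in zip(attn_maps, attn_grads):
        # Head-averaged, gradient-weighted attention, keeping positive contributions.
        A_bar = (gradA * A).clamp(min=0).mean(dim=0)
        # Accumulate relevance through the residual connection: R <- R + A_bar @ R.
        R = R + A_bar @ R
    return R
```

The resulting rows of `R` over the image-patch tokens could then be reshaped into a heatmap over the input image; how well this transfers to SAM's prompt-conditioned decoder is exactly the open question raised in this issue.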