hila-chefer / Transformer-Explainability

[CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer-based networks.

How to evaluate transformer explainability methods & Compare with other XAI methods #38

Closed · jaiswati closed this issue 2 years ago

jaiswati commented 2 years ago

Hey @hila-chefer,

Congratulations on this great work!

  1. Could you please suggest an evaluation framework for XAI methods for vision transformers?
  2. Is there any wrapper in the existing code base for other XAI methods, such as Integrated Gradients/SmoothGrad?

Best, @jaiswati

hila-chefer commented 2 years ago

Hi @jaiswati, thanks for your interest!

  1. I'm not familiar with a framework that includes all methods.
  2. We didn't implement Integrated Gradients/SmoothGrad in this work, but all the baselines in the paper are reproducible in this repo (our variation of GradCAM, LRP, partial LRP, raw attention, and rollout). See the sketch after this list for one way to wire up Integrated Gradients/SmoothGrad externally.
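
For readers who want Integrated Gradients or SmoothGrad without a built-in wrapper, a minimal sketch using the external Captum library is shown below. It assumes a ViT classifier loaded from timm and a dummy normalized input tensor; neither is part of this repo, and you would substitute the repo's ViT and a real preprocessed image.

```python
import timm
import torch
from captum.attr import IntegratedGradients, NoiseTunnel

# Assumed setup (not from this repo): a pretrained ViT from timm and a dummy input.
model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()
image = torch.randn(1, 3, 224, 224)  # replace with a real, normalized image batch

# Attribute with respect to the predicted class.
pred_class = model(image).argmax(dim=-1)

# Integrated Gradients: integrate gradients along a path from a black baseline to the input.
ig = IntegratedGradients(model)
ig_attr = ig.attribute(image, baselines=torch.zeros_like(image),
                       target=pred_class, n_steps=50)

# SmoothGrad: average attributions over noise-perturbed copies of the input.
nt = NoiseTunnel(ig)
sg_attr = nt.attribute(image, nt_type="smoothgrad", nt_samples=10,
                       target=pred_class)

# Collapse channels to obtain a (224, 224) heatmap per method.
ig_heatmap = ig_attr.squeeze(0).abs().sum(dim=0)
sg_heatmap = sg_attr.squeeze(0).abs().sum(dim=0)
```
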

I hope this helps, Hila.
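
For reference, attention rollout (one of the baselines listed above, following Abnar & Zuidema) can be sketched roughly as below. This is a generic, minimal version that assumes the per-layer attention maps have already been collected from the model, and it is not necessarily the exact implementation used in this repo.

```python
import torch

def attention_rollout(attentions, residual_alpha=0.5):
    """Roll attention out across layers.

    attentions: list of post-softmax attention tensors, one per layer,
                each of shape (num_heads, tokens, tokens).
    Returns a (tokens, tokens) matrix; row 0 (CLS) over columns 1: gives
    a patch-level relevance map.
    """
    rollout = None
    for attn in attentions:
        a = attn.mean(dim=0)                                   # average over heads
        eye = torch.eye(a.size(-1), device=a.device)
        a = residual_alpha * a + (1 - residual_alpha) * eye    # account for residual connections
        a = a / a.sum(dim=-1, keepdim=True)                    # re-normalize rows
        rollout = a if rollout is None else a @ rollout        # propagate through layers
    return rollout
```
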