hila-chefer / Transformer-Explainability

[CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer-based networks.
MIT License
1.75k stars 232 forks

Fix LRP visualization #53

Open daniellecn03 opened 1 year ago

daniellecn03 commented 1 year ago

The existing code used the `lrp` instance instead of `orig_lrp`, which caused the LRP visualization to produce the same attribution map as the transformer attribution. With this change, the two methods produce different attributions, as expected.
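
For context, a minimal sketch of the fix, assuming the setup from the repo's example notebook (two `LRP` generators, `lrp` for the transformer attribution and `orig_lrp` for full LRP). The module paths, factory names, and the helper below are illustrative assumptions, not the exact PR diff:

```python
from baselines.ViT.ViT_explanation_generator import LRP
from baselines.ViT.ViT_LRP import vit_base_patch16_224 as vit_LRP
from baselines.ViT.ViT_orig_LRP import vit_base_patch16_224 as vit_orig_LRP

# Two separate generators, as in the example notebook: `lrp` drives the
# transformer-attribution method, `orig_lrp` drives the original full-LRP method.
lrp = LRP(vit_LRP(pretrained=True).cuda().eval())
orig_lrp = LRP(vit_orig_LRP(pretrained=True).cuda().eval())

def full_lrp_attribution(image, class_index=None):
    """Illustrative helper for the full-LRP visualization path (hypothetical name)."""
    # The bug: this call previously went through `lrp`, so the "full LRP" map
    # came out identical to the transformer attribution. The fix routes it
    # through `orig_lrp` instead.
    return orig_lrp.generate_LRP(
        image.unsqueeze(0).cuda(),
        method="full",
        index=class_index,
    ).detach()
```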

Also updated the transformer visualization to use `method="transformer_attribution"` instead of the legacy `method="grad"`. This change has no effect on the output; it was made only to align the visualization code with the rest of the repo for consistency.
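
Continuing the sketch above, the transformer-attribution call site after the alignment would look roughly like this (the notebook's reshape/interpolation and overlay steps are omitted; the helper name is again an assumption):

```python
def transformer_attribution_map(image, class_index=None):
    """Illustrative helper for the transformer-attribution visualization path."""
    # Previously invoked with the legacy method="grad"; per this change's
    # description the output is unchanged, and the explicit method name matches
    # the rest of the repo.
    return lrp.generate_LRP(
        image.unsqueeze(0).cuda(),
        method="transformer_attribution",
        index=class_index,
    ).detach()
```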