hila-chefer / Transformer-Explainability

[CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications made by Transformer-based networks.
MIT License

Question about layers_ours.py #29

Closed Mfun82 closed 2 years ago

Mfun82 commented 2 years ago

Hi! When I use this with my own ViT model, the following error occurs: `torch.nn.modules.module.ModuleAttributeError: 'Linear' object has no attribute 'X'`. I don't know why 😢 Could you please help me? Thank you!

hila-chefer commented 2 years ago

Hi @Mfun82, thanks for your interest in our work!

Which implementation are you using? Is it identical to the one we use? You can also consider using our second paper (from ICCV'21), where we eliminated the use of LRP.

Best, Hila
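For context, a likely cause of this error: the custom layers in `modules/layers_ours.py` cache their forward-pass input (as `self.X`) so that relevance can later be propagated backwards with LRP. A stock `torch.nn.Linear` never caches anything, so the relevance pass fails with `'Linear' object has no attribute 'X'`. The sketch below is a simplified illustration of this caching pattern, not the repo's exact code; the class names mirror `layers_ours.py`, but the hook body is an assumption:

```python
import torch
import torch.nn as nn

class RelProp(nn.Module):
    """Simplified sketch of the caching base class in layers_ours.py.

    It registers a forward hook that stores each layer's input/output,
    which the LRP backward pass later reads as `self.X` / `self.Y`.
    """
    def __init__(self):
        super().__init__()
        self.register_forward_hook(self._forward_hook)

    def _forward_hook(self, module, inputs, output):
        # cache activations for the relevance-propagation pass
        self.X = inputs[0]
        self.Y = output

class Linear(nn.Linear, RelProp):
    # nn.Linear.__init__ calls super().__init__(), which (via the MRO)
    # runs RelProp.__init__ and installs the caching hook
    pass

custom = Linear(4, 2)
plain = nn.Linear(4, 2)
x = torch.randn(3, 4)
custom(x)
plain(x)
print(hasattr(custom, "X"))  # True: input was cached by the hook
print(hasattr(plain, "X"))   # False: this is the reported error's cause
```

So if a model is built from vanilla `torch.nn` layers instead of the repo's wrapped versions, the LRP machinery has no cached activations to work with, which matches the error above.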

hila-chefer commented 2 years ago

@Mfun82 closing this issue due to inactivity. Please reopen if necessary.