hila-chefer / Transformer-Explainability

[CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer-based networks.
MIT License

Seeking Assistance: Adapting Explainability Technique for Longformer Model in Text Classification #69

Open ibrahimAlaaeddine01 opened 3 months ago

ibrahimAlaaeddine01 commented 3 months ago

Hello,

I am currently pursuing my Master's thesis at Sorbonne University, focusing on Explainable AI for text classification. At present, I am using a Longformer model for text classification and trying to explain it with your explainability technique. However, I've hit a roadblock adapting this code to the Longformer architecture. Specifically, I am seeking guidance on how to modify the existing code to work with the Longformer model I am employing (https://huggingface.co/abazoge/DrLongformer). If you have any insights on how to make this adaptation, or if you can suggest alternative methods for explaining attention-based models such as Longformer, I would greatly appreciate your input.
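For context on what the adaptation involves: the relevance rule in this repo combines each layer's attention map with its gradient, keeps the positive part, averages over heads, and chains the result across layers. A minimal NumPy sketch of that rollout rule is below (the function name and the synthetic tensors are my own for illustration, not from the repo). The main obstacle for Longformer is that its local attentions are stored as windowed (tokens × window) tensors rather than full (tokens × tokens) matrices, so they would need to be scattered back into a square matrix, and merged with the global-attention scores, before a rule like this can be applied:

```python
import numpy as np

def rollout_relevance(attentions, gradients):
    """Sketch of a gradient-weighted attention rollout.
    attentions / gradients: lists of per-layer arrays, each shaped
    (heads, tokens, tokens). Returns a (tokens, tokens) relevance map."""
    n = attentions[0].shape[-1]
    R = np.eye(n)
    for attn, grad in zip(attentions, gradients):
        cam = np.clip(grad * attn, 0, None).mean(axis=0)  # positive grad*attn, averaged over heads
        cam = cam + np.eye(n)                             # identity term for the residual connection
        cam = cam / cam.sum(axis=-1, keepdims=True)       # row-normalize so each row is a distribution
        R = cam @ R                                       # chain relevance through the layers
    return R

# Synthetic example: 2 layers, 4 heads, 6 tokens (stand-ins for real
# attention maps and their gradients w.r.t. the target class score).
rng = np.random.default_rng(0)
attns = [rng.random((4, 6, 6)) for _ in range(2)]
grads = [rng.standard_normal((4, 6, 6)) for _ in range(2)]
R = rollout_relevance(attns, grads)
# The row of R at the [CLS] position scores each input token's relevance.
```

Because each per-layer matrix is row-normalized before chaining, every row of the final map sums to 1, which makes the per-token scores directly comparable across inputs.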

Thank you in advance for any assistance you can provide.