hila-chefer / Transformer-Explainability

[CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer-based networks.

Couldn't find corresponding code for "Transformer Interpretability Beyond Attention Visualization" #27

Closed Yung-zi closed 2 years ago

Yung-zi commented 2 years ago

Hi,

Thanks for sharing the code.

I have been testing your implementation, but I couldn't find the code that computes the AUC curve.

hila-chefer commented 2 years ago

Hi @Yung-zi, thanks for your interest in our work! The perturbation tests will give you the accuracy at each step (i.e. removing 0%, 10%, ..., 90% of the tokens). To calculate the AUC from these results, simply apply np.trapz to them. I hope this helps.
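A minimal sketch of the AUC computation described above. The accuracy values here are made up for illustration (the real ones come from running the repo's perturbation tests); the only assumption is that you have one accuracy per removal fraction from 0% to 90%. Note that `np.trapz` was renamed `np.trapezoid` in NumPy 2.0, so the sketch falls back accordingly:

```python
import numpy as np

# Fractions of tokens removed at each perturbation step: 0%, 10%, ..., 90%.
fractions = np.arange(0.0, 1.0, 0.1)

# Hypothetical per-step accuracies from the perturbation test (illustration only).
accuracies = np.array([0.81, 0.80, 0.78, 0.74, 0.68,
                       0.60, 0.50, 0.38, 0.25, 0.12])

# np.trapz was removed in NumPy 2.0 in favor of np.trapezoid.
trapezoid = getattr(np, "trapz", None) or np.trapezoid

# AUC of the accuracy-vs-perturbation curve via the trapezoidal rule.
auc = trapezoid(accuracies, fractions)
print(round(float(auc), 4))  # → 0.5195
```

A lower AUC under positive perturbation (removing the most relevant tokens first) indicates a better explanation, since accuracy should drop quickly when truly important tokens are removed.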

hila-chefer commented 2 years ago

Closing due to inactivity