AAAI 2021 | Self-Attention Attribution: Interpreting Information Interactions Inside Transformer #61

Closed. richardbaihe closed this issue 3 years ago.

richardbaihe commented 3 years ago

This paper proposes visualizing attribution scores instead of raw attention weights to analyze how important each token interaction is in Transformer-based NLP models. The attribution score of attention head $h$ is defined as below:

$$\mathrm{Attr}_h(A) = A_h \odot \int_{0}^{1} \frac{\partial F(\alpha A)}{\partial A_h}\, d\alpha \;\in\; \mathbb{R}^{n \times n},$$

where $A$ stacks the attention matrices of all heads, $A_h$ is head $h$'s attention matrix, $F(\cdot)$ is the Transformer's output for the task, and $\odot$ is elementwise multiplication. In practice the integral is approximated with an $m$-step Riemann sum: $\mathrm{Attr}_h(A) \approx \frac{A_h}{m} \odot \sum_{k=1}^{m} \frac{\partial F(\frac{k}{m} A)}{\partial A_h}$.
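A minimal PyTorch sketch of this Riemann approximation. Names like `attention_attribution` and `forward_fn` are illustrative, not from the paper's released code; `forward_fn` is assumed to map an attention tensor to a scalar model output $F(A)$.

```python
import torch

def attention_attribution(forward_fn, attn, steps=20):
    """Approximate Attr_h(A) = A_h * integral_0^1 dF(alpha*A)/dA_h d(alpha)
    with an m-step Riemann sum (m = steps).

    forward_fn: maps an attention tensor of shape (heads, n, n) to a scalar F(A).
    attn:       attention weights of one layer, shape (heads, n, n).
    """
    total_grad = torch.zeros_like(attn)
    for k in range(1, steps + 1):
        # Scale the full attention tensor by alpha = k/m and re-run the model.
        scaled = (k / steps) * attn.detach()
        scaled.requires_grad_(True)
        score = forward_fn(scaled)                     # scalar F(alpha * A)
        grad, = torch.autograd.grad(score, scaled)     # dF(alpha*A)/dA
        total_grad += grad
    # Elementwise product with A, averaged over the Riemann steps.
    return attn.detach() * total_grad / steps

# Toy usage: F(A) sums the first coordinate of the attended values A @ V.
heads, n, d = 2, 4, 8
V = torch.randn(heads, n, d)
attn = torch.softmax(torch.randn(heads, n, n), dim=-1)
scores = attention_attribution(lambda A: (A @ V)[..., 0].sum(), attn)
print(scores.shape)  # torch.Size([2, 4, 4]): one attribution matrix per head
```

In a real setup `forward_fn` would run the model from the given layer's attention onward (e.g., by patching the attention probabilities) and return the task logit being explained.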

Results:

[figure: experimental results from the paper]