hila-chefer / Transformer-Explainability

[CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer-based networks.
MIT License

Question about Conv2D relprop #55

Closed rtyu-1 closed 1 year ago

rtyu-1 commented 1 year ago

Hi, thank you for the outstanding work!! I have a question about the `relprop` function of `Conv2d`.

[image: LRP formula from the paper]

Based on the LRP formula from the paper corresponding to this repository (which you pointed to in issue #26), the current implementation appears to be incorrect: it cannot guarantee conservation of relevance. The current implementation is as follows:

[image: current implementation]

Such an implementation is equivalent to:

[image: equivalent formulation]

An implementation that I believe is correct is as follows:

[image: proposed implementation]

Looking forward to your reply!!
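For concreteness, here is a minimal NumPy sketch of an α-β LRP step with a single shared denominator, which is the conservative variant being proposed. This is not the repository's actual torch code: the dot product standing in for the convolution, `safe_divide`, and all values are illustrative assumptions.

```python
import numpy as np

# Sketch of one alpha-beta LRP step with a SHARED denominator (the
# conservative variant proposed above). A dot product stands in for the
# convolution; names and values are illustrative, not the repo's code.

def safe_divide(a, b, eps=1e-9):
    # Division that nudges the denominator away from zero, keeping its sign.
    return a / (b + np.where(b >= 0, eps, -eps))

x = np.array([1.0, -2.0, 3.0, -0.5])  # layer input
w = np.array([0.5, -1.0, -1.5, 2.0])  # weights of one output "filter"
R = 1.0                               # relevance arriving at the output

px, nx = np.clip(x, 0, None), np.clip(x, None, 0)
pw, nw = np.clip(w, 0, None), np.clip(w, None, 0)

def f(w1, x1, w2, x2):
    z1, z2 = (w1 * x1).sum(), (w2 * x2).sum()
    s = safe_divide(R, z1 + z2)        # ONE shared denominator
    return x1 * w1 * s + x2 * w2 * s   # redistributes exactly R, once

alpha, beta = 2.0, 1.0
activator = f(pw, px, nw, nx)  # w+x+ and w-x- (positive contributions)
inhibitor = f(nw, px, pw, nx)  # w-x+ and w+x- (negative contributions)
R_in = alpha * activator - beta * inhibitor

print(R_in.sum())  # ~1.0: conserved, since alpha - beta = 1
```

Because each call to `f` sums to exactly `R` with the shared denominator, the propagated total is `(alpha - beta) * R = R` whenever `alpha - beta = 1`.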

rtyu-1 commented 1 year ago

Sorry, a correction: the current implementation is equivalent to:

[image: corrected equivalent formulation]

This results in the total sum of R after propagation being twice the sum before propagation.
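The factor of two can be illustrated with a small NumPy sketch comparing separate denominators against one shared denominator. Again a dot product stands in for the convolution, and all names and values are illustrative assumptions rather than the repository's code.

```python
import numpy as np

# Why separate denominators double the relevance: each of the two terms
# divides the FULL incoming R by its own partial pre-activation, so each
# term alone already redistributes all of R. Illustrative stand-in only.

def safe_divide(a, b, eps=1e-9):
    return a / (b + np.where(b >= 0, eps, -eps))

x = np.array([1.0, -2.0, 3.0])
w = np.array([0.5, -1.0, -1.5])
R = 1.0  # relevance at the output neuron

px, nx = np.clip(x, 0, None), np.clip(x, None, 0)
pw, nw = np.clip(w, 0, None), np.clip(w, None, 0)

# Separate denominators, as in the questioned implementation:
# both z1 and z2 each absorb the full R, so the sum comes out to 2R.
z1, z2 = (pw * px).sum(), (nw * nx).sum()
sep = px * pw * safe_divide(R, z1) + nx * nw * safe_divide(R, z2)

# Shared denominator: R is redistributed exactly once.
shared = (px * pw + nx * nw) * safe_divide(R, z1 + z2)

print(sep.sum())     # ~2.0: twice the incoming relevance
print(shared.sum())  # ~1.0: conserved
```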