hila-chefer / Transformer-Explainability

[CVPR 2021] Official PyTorch implementation of Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications made by Transformer-based networks.
MIT License

Question about Conv2D relprop #26

Closed · jhb115 closed this issue 2 years ago

jhb115 commented 3 years ago
[Screenshot: the Conv2d relprop code in question]

Hi, and by the way, very awesome work on LRP! I have a question about the relprop function of Conv2d. Is there a reason behind writing `L = self.X * 0 + ...` and, similarly, `H = self.X * 0 + ...` (see the attached screenshot)? Is it simply to broadcast a scalar to the shape of `self.X`, i.e. identical to `torch.zeros_like(self.X) + ...`?
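
For instance, a quick sanity check of the equivalence I have in mind (`x` stands in for `self.X`; the fill value is arbitrary):

```python
import torch

x = torch.randn(2, 3, 8, 8)          # stand-in for self.X

# multiply-by-zero broadcast, as written in relprop
low = x * 0 + x.min()

# the explicit zeros_like alternative
low_alt = torch.zeros_like(x) + x.min()

assert torch.equal(low, low_alt)     # same shape, same values
```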

hila-chefer commented 3 years ago

Hi @jhb115, thanks for your interest in our work!

The implementation of LRP for the conv layers was taken from this repo, but yes, it looks like it is simply equivalent to broadcasting with `torch.zeros_like(self.X)` :)
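
For context, those lines appear in what looks like the z^B rule for the pixel-level input layer, where L and H hold the lowest and highest admissible input values and therefore need the same shape as `self.X`. A rough sketch of that rule under simplifying assumptions (illustrative names `X`, `W`, `R`; stride 1; not the repo's exact code):

```python
import torch
import torch.nn.functional as F

def zB_relprop(X, W, R, stride=1, padding=0):
    """Sketch of the z^B LRP rule for an input conv layer."""
    Wp = W.clamp(min=0)                  # positive part of the weights
    Wn = W.clamp(max=0)                  # negative part of the weights

    # L / H: lowest / highest input value, broadcast to X's shape --
    # the role played by `self.X * 0 + ...` in the questioned code
    L = X * 0 + X.min()
    H = X * 0 + X.max()

    Z = (F.conv2d(X, W, stride=stride, padding=padding)
         - F.conv2d(L, Wp, stride=stride, padding=padding)
         - F.conv2d(H, Wn, stride=stride, padding=padding)) + 1e-9
    S = R / Z

    # gradients of sum(S * Z) w.r.t. X, L, H (transposed convs, stride 1)
    C = F.conv_transpose2d(S, W, stride=stride, padding=padding)
    Cp = F.conv_transpose2d(S, Wp, stride=stride, padding=padding)
    Cn = F.conv_transpose2d(S, Wn, stride=stride, padding=padding)

    return X * C - L * Cp - H * Cn
```

Here the global min/max of `X` stand in for the bounds l and h; the key point is that L and H must be full tensors shaped like `X` so they can enter the convolutions above, which is exactly what the scalar broadcast achieves.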

hila-chefer commented 2 years ago

@jhb115 closing due to inactivity, please re-open if necessary