Closed: annahedstroem closed this issue 12 months ago
Hey Anna, thanks for the issue. The paper states in Figure 3:
Each sensitivity map is obtained by applying Gaussian noise $\mathcal{N}(0, \sigma^2)$ to the input pixels for 50 samples, and averaging them. The noise level corresponds to $\sigma / (x_{\max} - x_{\min})$.
Since we need to compute the standard deviation $\sigma$ of the Gaussian from a known noise level $\varepsilon$, it follows that $$\varepsilon = \frac{\sigma}{x_{\max} - x_{\min}} \iff \sigma = \varepsilon \cdot (x_{\max} - x_{\min})$$
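To illustrate, here is a minimal NumPy sketch of the conversion above and of SmoothGrad-style averaging. The names `noise_std`, `smoothgrad`, and the `grad_fn` callable are hypothetical stand-ins for illustration, not Zennit's actual API:

```python
import numpy as np

def noise_std(x, noise_level):
    # Convert a relative noise level eps into the Gaussian standard
    # deviation: sigma = eps * (x_max - x_min), as in the paper.
    return noise_level * (x.max() - x.min())

def smoothgrad(x, grad_fn, noise_level=0.1, n_samples=50, seed=0):
    # Average grad_fn over n_samples noisy copies of x.
    # grad_fn is a placeholder for the model's gradient function.
    rng = np.random.default_rng(seed)
    sigma = noise_std(x, noise_level)
    grads = [grad_fn(x + rng.normal(0.0, sigma, size=x.shape))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)
```

For an input spanning $[0, 2]$ and a noise level of $0.1$, `noise_std` yields $\sigma = 0.2$, matching the derivation above.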
Hey @chr5tphr, thanks so much for clarifying! Great that it wasn't a bug (just a misinterpretation on my part). I'll go ahead and close the issue. :D
Hi,
Thanks for a great library!
There might be a discrepancy between the implementation and the original SmoothGrad paper (https://arxiv.org/pdf/1706.03825.pdf) in how the noise standard deviation is calculated.
https://github.com/chr5tphr/zennit/blob/60a2c088a29fb0a68ed63f596858d0b9becff374/src/zennit/attribution.py#L356
i.e., it should be $\mathrm{std} = \frac{\sigma}{x_{\max} - x_{\min}}$ and not $\mathrm{std} = \sigma \cdot (x_{\max} - x_{\min})$.