albermax / innvestigate

A toolbox to iNNvestigate neural networks' predictions!

Bug in LRP alpha-beta rule #273

Open nkoenen opened 2 years ago

nkoenen commented 2 years ago

Hi, I think you have a bug in the calculation of the alpha-beta rule. Consider the following example:

import keras as k
import numpy as np

import innvestigate
import innvestigate.utils as iutils

k.backend.clear_session()

# A single dense unit computing z = x1 - x2 - 1, followed by a softmax.
model = k.Sequential(
    [
        k.layers.Dense(
            1,
            activation="softmax",
            input_shape=[2],
            weights=[np.array([[1], [-1]]), np.array([-1])],
        )
    ]
)

inputs = np.array([[1, 1]])

# Strip the softmax so the analyzers work on the pre-softmax output (here: -1).
model = iutils.keras.graph.model_wo_softmax(model)

analyzer = innvestigate.create_analyzer("lrp.alpha_1_beta_0", model)
analyzer.analyze(inputs)
#> array([[-10000000.,         0.]], dtype=float32)

analyzer = innvestigate.create_analyzer("lrp.alpha_2_beta_1", model)
analyzer.analyze(inputs)
#> array([[-2.e+07,  5.e-01]], dtype=float32)

But as far as I understand the rule, the results should be (-1, 0) and (-2, 0.5), respectively.
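
For reference, here is the hand computation behind those expected values. This is only a sketch of how I understand the alpha-beta rule: relevance is initialized with the pre-softmax output, positive and negative contributions are redistributed separately with weights alpha and beta, and the bias is included in the denominators (its relevance share is simply absorbed).

import numpy as np

x = np.array([1.0, 1.0])   # input
w = np.array([1.0, -1.0])  # weights of the single unit
b = -1.0                   # bias

z = w * x                  # per-input contributions: [1, -1]
out = z.sum() + b          # pre-softmax output: -1

def lrp_alpha_beta(alpha, beta):
    zp = np.clip(z, 0, None)     # positive contributions
    zn = np.clip(z, None, 0)     # negative contributions
    sp = zp.sum() + max(b, 0.0)  # positive denominator, incl. bias
    sn = zn.sum() + min(b, 0.0)  # negative denominator, incl. bias
    return (alpha * zp / sp - beta * zn / sn) * out

print(lrp_alpha_beta(1, 0))  #> [-1.  0.]
print(lrp_alpha_beta(2, 1))  #> [-2.   0.5]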

Best, Niklas