albermax / innvestigate

A toolbox to iNNvestigate neural networks' predictions!

LRP Preset A with "negative" relevance #256

Closed · acv132 closed this issue 2 years ago

acv132 commented 3 years ago

Hello,

I am using LRP preset A for CNNs, and I was wondering why I obtain heatmaps with negative (blue) relevance when the applied rules are alpha-beta (with alpha = 1, beta = 0) and the epsilon rule. This is how I initiate my analyzer: `analyzer = innvestigate.create_analyzer('lrp.sequential_preset_a', model=model, allow_lambda_layers=True)`
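For context, a minimal sketch of the surrounding setup (`model` and `x` are placeholders; the softmax is stripped as suggested in the iNNvestigate README):

```python
import innvestigate
import innvestigate.utils as iutils

# `model` is a trained Keras CNN (placeholder); LRP is applied to the
# pre-softmax scores, so the softmax output activation is removed first.
model_wo_softmax = iutils.model_wo_softmax(model)

analyzer = innvestigate.create_analyzer(
    "lrp.sequential_preset_a",
    model=model_wo_softmax,
    allow_lambda_layers=True,
)

# `x` is a batch of preprocessed images (placeholder); the result has the
# same shape as `x` and may contain both positive and negative values.
relevance = analyzer.analyze(x)
```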

Thank you in advance!

sebastian-lapuschkin commented 3 years ago

First, preset A applies epsilon-LRP to dense layers and alpha-beta-LRP to convolutional layers. The use of the epsilon rule near the output (which, I assume, applies in your case) allows negative model outputs to be transported. To phrase it bluntly: once some neuron has received relevance, you do not need to care about its sign for further propagation.
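To illustrate, here is a toy numpy sketch of the epsilon rule on a single dense layer (an illustration only, not the library's actual implementation; biases are ignored):

```python
import numpy as np

def lrp_epsilon(a, W, R_out, eps=1e-7):
    # z_j = sum_i a_i * W_ij (biases ignored for brevity)
    z = a @ W
    # stabilize the denominator away from zero, keeping its sign
    s = R_out / (z + eps * np.sign(z))
    # R_i = a_i * sum_j W_ij * s_j
    return a * (W @ s)

a = np.array([1.0, 2.0])            # layer inputs
W = np.array([[1.0, -1.0],
              [0.5,  1.0]])         # weights, shape (n_in, n_out)
R_out = np.array([-1.0, 0.0])       # a negative model output as relevance

print(lrp_epsilon(a, W, R_out))     # [-0.5 -0.5]: negative relevance
                                    # passes through and is conserved
```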

Second, alpha-beta-LRP redistributes any relevance incoming from upstream neurons according to the alpha- and beta-weighted pre-activations of the layer. That means the rule can propagate negative relevance just fine; it merely weights the pre-activations between layer inputs and outputs differently for the purpose of routing relevance downstream.
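Again as a toy sketch (not the library code): with alpha = 1 and beta = 0, the rule routes relevance along the positive pre-activation contributions only, so the routing weights are nonnegative and the sign of the incoming relevance is preserved:

```python
import numpy as np

def lrp_alpha1beta0(a, W, R_out, eps=1e-9):
    z = a[:, None] * W             # per-connection contributions z_ij
    zp = np.maximum(z, 0.0)        # positive parts z_ij^+
    zp_sum = zp.sum(axis=0) + eps  # per-output sums z_j^+
    # R_i = sum_j (z_ij^+ / z_j^+) * R_j; the fractions are nonnegative
    return (zp / zp_sum) @ R_out

a = np.array([1.0, 2.0])
W = np.array([[1.0, -1.0],
              [0.5,  1.0]])
R_out = np.array([-1.0, 0.5])        # negative upstream relevance...

print(lrp_alpha1beta0(a, W, R_out))  # [-0.5  0. ]: ...arrives downstream
                                     # with its sign intact
```

In other words, with beta = 0 the rule decides where relevance goes, not which sign it has.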

Also consider that positive pre-activations (leading to potentially firing ReLU-activated neurons) may provide evidence speaking against your class of interest; these neurons will then usually receive negative relevance. For example, a neuron that fires but is connected to your class's output with a negative weight contributes negatively to that output and accordingly receives negative relevance.

I hope that helps.

For a more specific answer, I would need details: information about the model, some example images, heatmaps, model outputs, etc.

Best

adrhill commented 2 years ago

Since there is no follow-up question, I'm assuming this issue has been resolved.