Closed. HugoTex98 closed this issue 2 years ago.
You can use the pre-implemented LRPPreset* analyzers.
Thank you for answering, @sebastian-lapuschkin!
I just have one doubt about using the LRPPreset analyzers. If I only want to see the positive contributions of my input variables, I should use LRPPresetA, right? And also, is it OK to use it with SELU activations? I saw in relevance_analyzer.py that this is not advised...
1) You could clamp your attributions at 0 and keep only the positive part.
2) Try it out; I would assume that it works.
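For point 1, keeping only the positive part of an attribution map is a one-liner. A minimal sketch (the attribution values here are invented for illustration):

```python
# Hypothetical attribution scores for five input variables
# (values invented for illustration).
attribution = [0.8, -0.3, 0.1, -0.05, 0.6]

# Clamp at 0: keep positive contributions, zero out negative ones.
positive_only = [max(v, 0.0) for v in attribution]
print(positive_only)  # [0.8, 0.0, 0.1, 0.0, 0.6]
```

With NumPy arrays the same idea is `np.maximum(attribution, 0)`.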
When I use LRPPresetA or LRPPresetB without a specific neuron selection it works fine, but when I specify a neuron (I have 2 in the output), the relevance scores are negative (e.g. -3.1056861e-06). Is there any reason for this to happen?
Negative relevance scores for both presets are no indication that things are not working; cf. this paper (alt link), for example, where in the examples in Fig. 1 and the appendix, blue regions are also attributed negative relevance (read, in the heatmap w.r.t. class "tiger cat": "from the model's point of view, Bernese mountain dog facial features are not tiger cat features, i.e. they provide evidence to the model for deciding against class tiger cat").
Especially if the output of the non-dominant logit is negative (which is your case, and is likely if the model has decided otherwise), negative relevance reveals that the model does not decide for the class represented by your selected output neuron, because "all that stuff does not look like the neuron's target class".
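The connection between a negative logit and negative total relevance follows from the conservation property of LRP: relevance summed over the inputs equals (approximately) the analyzed logit. A toy single-neuron example with the basic LRP-0 rule (weights and inputs invented for illustration):

```python
# Toy linear neuron without bias: z = sum_i x_i * w_i.
x = [1.0, 2.0, 4.0]
w = [-0.5, -0.25, 0.125]          # chosen so the logit comes out negative

z = sum(xi * wi for xi, wi in zip(x, w))          # -0.5 - 0.5 + 0.5 = -0.5

# LRP-0 rule: redistribute the output relevance (the logit itself)
# proportionally to each input's contribution x_i * w_i.
R = [(xi * wi / z) * z for xi, wi in zip(x, w)]   # here this is just x_i * w_i

print(z)       # -0.5
print(sum(R))  # -0.5: total relevance equals the (negative) logit
```

So once the selected output neuron's activation is negative, the relevance it distributes back to the input is negative in total as well.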
If this does not illuminate your situation sufficiently, please provide some more info regarding the decomposed model output (i.e. the output neuron activation) and the resulting heatmap in input space (or whatever feature space you are analyzing).
best
Closing this issue as the missing example is tracked in #261.
Hello everyone.
I was trying to implement the composite LRP like the one presented in: G. Montavon, A. Binder, S. Lapuschkin, W. Samek, K.-R. Müller, "Layer-wise Relevance Propagation: An Overview", in Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, Springer LNCS, vol. 11700, 2019, but without success...
Does anyone know how I can implement this?
Here is my model:
Thank you!
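The composite (layer-dependent) LRP from that chapter applies different rules at different depths, e.g. LRP-ε near the output and LRP-z+ near the input; the LRPPreset* analyzers mentioned above implement such rule composites. Since the model code above is missing, here is a library-free sketch of the mechanism on a tiny hand-wired two-layer ReLU network (all weights and inputs invented for illustration):

```python
def matvec(W, v):
    """Dense layer pre-activations: z_k = sum_j W[k][j] * v[j]."""
    return [sum(wkj * vj for wkj, vj in zip(row, v)) for row in W]

def lrp_eps(W, a, R, eps=1e-6):
    """LRP-epsilon rule: stabilized proportional redistribution (upper layers)."""
    z = matvec(W, a)
    out = [0.0] * len(a)
    for k, row in enumerate(W):
        denom = z[k] + (eps if z[k] >= 0 else -eps)
        for j, aj in enumerate(a):
            out[j] += aj * row[j] / denom * R[k]
    return out

def lrp_zplus(W, a, R):
    """LRP-z+ rule: redistribute along positive contributions only (lower layers)."""
    out = [0.0] * len(a)
    for k, row in enumerate(W):
        zk = sum(max(wkj, 0.0) * aj for wkj, aj in zip(row, a))
        if zk == 0.0:
            continue
        for j, aj in enumerate(a):
            out[j] += aj * max(row[j], 0.0) / zk * R[k]
    return out

# Tiny 2-layer ReLU net: x -> hidden h -> logit z (weights invented).
x = [1.0, 2.0]
W1 = [[0.5, -0.25], [0.25, 0.5]]
W2 = [[1.0, 0.5]]

h = [max(v, 0.0) for v in matvec(W1, x)]   # hidden activations
z = matvec(W2, h)                          # [0.625], the analyzed logit

# Composite backward pass: LRP-eps on the top layer, LRP-z+ below it.
R_h = lrp_eps(W2, h, z)
R_x = lrp_zplus(W1, x, R_h)

print(z[0])      # 0.625
print(sum(R_x))  # ~0.625 (up to the epsilon stabilizer): relevance is conserved
```

In iNNvestigate itself you would not code the rules by hand; the point of the sketch is only the mechanism of assigning one rule per layer range during the backward pass.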