albermax / innvestigate

A toolbox to iNNvestigate neural networks' predictions!

Comparable plots of different explanation methods #215

Closed mhan42 closed 4 years ago

mhan42 commented 4 years ago

Hi,

thank you for this awesome project!

When comparing different explanation methods with iNNvestigate, one notices that the outputs of different explanation methods are plotted quite differently (e.g. background color, color of the pixel highlighting, ...). In addition, the methods are presented differently than in other projects; for example, the LRP methods look different from the outputs shown on the LRP server from HHI. Is there a specific reason for this (other than implementation reasons, of course)? And what do you propose in order to establish maximal comparability?

Many thanks in advance!

vitorbds commented 4 years ago

I have the same doubts as you, but according to the articles in:

Explainable AI: Interpreting, Explaining and Visualizing Deep Learning

I see that, in general, for LRP the red color indicates pixels with high relevance for the network's decision, white indicates intermediate relevance, and blue very little relevance.

I think you can control how it is plotted by changing the plot parameters, so that blue corresponds to values close to -1, red to values close to 1, and white to values close to 0. For matplotlib: cmap="seismic", clim=(-1, 1).

But in fact, when changing methods, this analysis is not consistent. I would also be grateful if someone could explain it in more depth.
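As a minimal sketch of the parameters mentioned above (the array name `analysis` and its shape are my own placeholders; passing vmin/vmax has the same effect as setting clim=(-1, 1)):

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical 2D relevance map scaled to [-1, 1]; replace with your own analysis.
analysis = np.random.uniform(-1, 1, size=(224, 224))

# "seismic" maps -1 to blue, 0 to white, +1 to red;
# vmin/vmax pin the color scale so white stays at zero relevance.
plt.imshow(analysis, cmap="seismic", vmin=-1, vmax=1)
plt.axis("off")
plt.show()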

rachtibat commented 4 years ago

Hi,

Thank you for your input! I think we have to consider several things:

  1. The resulting heatmap strongly depends on WHICH LRP method you use. I recommend the paper Towards Best Practice in Explaining Neural Network Decisions with LRP. You could take the heatmaps there as a reference.

  2. If you plot the analysis of an image, you have to choose the right parameters for matplotlib.pyplot, as vitorbds suggested. I have written a small function below which plots positive relevance red, negative relevance blue, and zero relevance white.

  3. Very important: if you have an image with several channels, I recommend summing them up. If you want to see what contribution each channel makes, you could plot each channel on its own (see the sketch after the function below).

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors

def plot_analysis(x, cmap=None):
    # Sum relevance over the channel axis to get a single 2D map.
    image = np.sum(x, axis=-1)
    # Scale to [-1, 1] so zero relevance stays at the center of the colormap.
    max_abs = np.abs(image).max()
    if max_abs > 0:
        image = image / max_abs
    # Fix the color range so white corresponds to zero relevance.
    norm = colors.Normalize(vmin=-1, vmax=1)
    plt.imshow(image, norm=norm, cmap=cmap)
    plt.show()

plot_analysis(LRP_output, cmap="seismic")
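For point 3, a minimal sketch of plotting each channel on its own. It reuses `LRP_output` from the snippet above and assumes it has shape (height, width, n_channels); sharing one scale across channels keeps them comparable.

import numpy as np
import matplotlib.pyplot as plt

n_channels = LRP_output.shape[-1]
# One shared scale for all channels so their colors are comparable.
max_abs = np.abs(LRP_output).max()
if max_abs == 0:
    max_abs = 1.0

fig, axes = plt.subplots(1, n_channels, figsize=(4 * n_channels, 4), squeeze=False)
for c in range(n_channels):
    ax = axes[0, c]
    ax.imshow(LRP_output[..., c] / max_abs, cmap="seismic", vmin=-1, vmax=1)
    ax.set_title("channel %d" % c)
    ax.axis("off")
plt.show()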

I hope it helps

albermax commented 4 years ago

Hi everyone,

the visualization of explanation methods (not just LRP) is not consistent across projects, and it is important to know how a result was visualized. Typically, explanation methods yield values for all RGB channels, and to make the result easier to interpret these values are combined in some way. There is no right or wrong here, so you need to choose what fits your needs best.
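For illustration, a minimal sketch of two common ways such per-channel values can be reduced to a single heatmap; the function names are my own and neither reduction is prescribed by iNNvestigate.

import numpy as np

def reduce_by_sum(analysis):
    # Signed sum over channels: positive and negative relevance can cancel out.
    return analysis.sum(axis=-1)

def reduce_by_abs_sum(analysis):
    # Sum of absolute values: keeps magnitude only, sign information is lost.
    return np.abs(analysis).sum(axis=-1)

# Hypothetical example: analysis with shape (height, width, 3).
analysis = np.random.uniform(-1, 1, size=(8, 8, 3))
print(reduce_by_sum(analysis).shape)      # (8, 8)
print(reduce_by_abs_sum(analysis).shape)  # (8, 8)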

Cheers, Max