pytorch / captum

Model interpretability and understanding for PyTorch
https://captum.ai
BSD 3-Clause "New" or "Revised" License
4.9k stars 492 forks

Allow for unnormalized attribution heatmap #1103

Open siemdejong opened 1 year ago

siemdejong commented 1 year ago

Currently, when using captum.attr.visualization.visualize_image_attr, attributions are always normalized: https://github.com/pytorch/captum/blob/2c9dcc1b31400eaa32ed6a5b0e5ca4c7ec0c3741/captum/attr/_utils/visualization.py#L263.

I have a use case where I would like to show both positive and negative attributions (so set sign='all'), with colors corresponding directly to the colormap rather than to normalized values. I need this to compare multiple attribution heatmaps generated with occlusion for a convolutional neural network used for regression.

The attributions in the heatmaps below have different minimum and maximum values before normalization, but normalizing them makes some parts appear equally important when in fact they are not.

Am I right that currently the only way to achieve this is to avoid visualize_image_attr and define a custom function that does not perform normalization?

[image: occlusion attribution heatmaps]
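To illustrate the problem, here is a small numpy sketch (hypothetical values and helper name, not captum code): two occlusion maps whose true magnitudes differ by a factor of 10 become indistinguishable once each is divided by its own maximum absolute value.

```python
import numpy as np

# Two hypothetical occlusion attribution maps: same spatial pattern,
# but the second is 10x weaker than the first.
attr_a = np.array([[0.5, -1.0], [2.0, 0.0]])
attr_b = attr_a * 0.1

def normalize_per_map(attr):
    # Simplified version of per-heatmap scaling: divide by the largest
    # absolute value (outlier clipping omitted for clarity).
    return attr / np.abs(attr).max()

# After per-map normalization the two maps look identical, even though
# their raw attribution magnitudes differ by a factor of 10.
print(np.allclose(normalize_per_map(attr_a), normalize_per_map(attr_b)))  # True
```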

NarineK commented 1 year ago

@siemdejong, visualize_image_attr is an example function for visualization, and it performs normalization by default. We could make normalization optional, but for now, if you don't want it, you can copy our function and make the necessary adjustments. You can also set the outlier percentage to 0 so that outliers are not cropped. cc: @vivekmig
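A minimal sketch of such an adjustment (the helper name and signature are hypothetical, not part of captum): plot the raw attributions with a diverging colormap centered at zero and a fixed vmin/vmax, so the same attribution value maps to the same color in every heatmap.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for scripts/CI
import matplotlib.pyplot as plt
from matplotlib.colors import TwoSlopeNorm

def heat_map_unnormalized(attr, vlim, ax=None, cmap="RdBu_r"):
    """Plot raw attributions; a fixed symmetric limit `vlim` pins the
    color scale, so colors are comparable across heatmaps."""
    if ax is None:
        _, ax = plt.subplots()
    norm = TwoSlopeNorm(vmin=-vlim, vcenter=0.0, vmax=vlim)
    im = ax.imshow(attr, cmap=cmap, norm=norm)
    ax.axis("off")
    return im

# Usage: two maps of different magnitude share one color scale.
attr_a = np.array([[0.5, -1.0], [2.0, 0.0]])
attr_b = attr_a * 0.1
vlim = float(max(np.abs(attr_a).max(), np.abs(attr_b).max()))
fig, (ax1, ax2) = plt.subplots(1, 2)
heat_map_unnormalized(attr_a, vlim, ax=ax1)
im = heat_map_unnormalized(attr_b, vlim, ax=ax2)
fig.colorbar(im, ax=[ax1, ax2])  # colorbar in raw attribution units
```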

siemdejong commented 1 year ago

I ended up making my own function to build the grid without normalizing the attributions, which was not too difficult.

This issue can be closed, unless more people find this feature useful.
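For anyone landing here, a grid like that can be sketched with matplotlib alone (shapes and variable names below are assumptions, not siemdejong's actual code): every panel shows raw attributions against one shared symmetric color scale, so panels are directly comparable.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for scripts/CI
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Hypothetical occlusion attribution maps with different magnitudes.
attrs = [rng.normal(scale=s, size=(8, 8)) for s in (0.2, 1.0, 3.0)]

# One symmetric color limit shared by every panel,
# instead of normalizing each map to its own range.
vlim = max(np.abs(a).max() for a in attrs)

fig, axes = plt.subplots(1, len(attrs), figsize=(9, 3))
for a, ax in zip(attrs, axes):
    im = ax.imshow(a, cmap="RdBu_r", vmin=-vlim, vmax=vlim)
    ax.axis("off")
fig.colorbar(im, ax=axes.tolist())  # single colorbar in raw units
```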