When computing saliency maps (and likely also GradCAMs), the returned gradients are always normalized to the range (0, 1). Because this is an affine transformation, the exact gradient values cannot be recovered: the information about where zero lies is lost. For instance, we may wish to compare the true gradient values to know which pixels in an image increase a class score versus decrease it, and by how much relative to one another. Right now, we can get the negative values and the positive values separately, but I don't think we can actually infer their relative magnitudes.
I think this would likely be an easy fix. Perhaps you could add a boolean keyword argument that controls whether the data is normalized before it is returned.
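To illustrate, here is a minimal NumPy sketch (function and parameter names are hypothetical, not from the library) of both the information loss and what such a keyword could look like:

```python
import numpy as np

def saliency(grads, normalize=True):
    """Hypothetical API: return raw gradients when normalize=False."""
    if not normalize:
        return grads  # raw values preserve sign and relative magnitude
    g_min, g_max = grads.min(), grads.max()
    return (grads - g_min) / (g_max - g_min)  # affine map onto [0, 1]

grads = np.array([-2.0, -0.5, 0.0, 1.0, 4.0])

# After normalization, zero no longer maps to a fixed value (here it
# lands at ~0.333), so the sign and relative magnitude of the original
# gradients cannot be recovered from the output alone.
print(saliency(grads))                   # [0.    0.25  0.333 0.5   1.  ]
print(saliency(grads, normalize=False))  # [-2.  -0.5   0.    1.    4. ]
```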