I have a conceptual question based on the MNIST example usage notebook: https://github.com/raghakot/keras-vis/blob/master/examples/mnist/attention.ipynb. I am trying to understand why the pre-activation output is used for computing standard saliency maps. This is achieved by changing the final-layer activation of the network to linear, as in cell 3:
from vis.visualization import visualize_saliency
from vis.utils import utils
from keras import activations

# Utility to search for the layer index by name.
# Alternatively we can specify this as -1 since 'preds' is the last layer.
layer_idx = utils.find_layer_idx(model, 'preds')

# Swap softmax with linear and rebuild the graph.
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)

# model, class_idx, idx, and x_test are defined in earlier cells of the notebook.
grads = visualize_saliency(model, layer_idx, filter_indices=class_idx, seed_input=x_test[idx])
The notebook says: "to visualize activation over final dense layer outputs, we need to switch the softmax activation out for linear since gradient of output node will depend on all the other node activations. Doing this in keras is tricky, so we provide utils.apply_modifications to modify network parameters and rebuild the graph."
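To make the quoted claim concrete (this is my own minimal numpy sketch, not code from the notebook), the softmax Jacobian shows how the gradient of any single output node involves every other node's activation, whereas a linear output does not:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical logits (pre-activation outputs) for a 3-class head.
z = np.array([2.0, 1.0, 0.1])
p = softmax(z)

# Softmax Jacobian: d p_i / d z_j = p_i * (delta_ij - p_j).
# Every off-diagonal entry is nonzero, so the gradient of output
# node i is coupled to the activations of all the other nodes.
print(np.diag(p) - np.outer(p, p))

# With a linear output activation, the corresponding Jacobian is
# the identity, so node i's gradient depends only on its own logit.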
Is this a standard part of producing saliency maps, or is it something adopted because it works well in practice? The model's output is shaped by the final activation, so isn't it reasonable to include that activation when computing the saliency map?
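For example (a hypothetical snippet of mine, not from the notebook), I would have naively expected computing saliency on the unmodified model, letting the gradient flow through the softmax, to be just as legitimate:

# Same call as above, but on the original model with softmax intact,
# so the saliency gradient includes the softmax in the chain rule.
grads_softmax = visualize_saliency(model, layer_idx, filter_indices=class_idx, seed_input=x_test[idx])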