raghakot / keras-vis

Neural network visualization toolkit for keras
https://raghakot.github.io/keras-vis
MIT License
2.97k stars 664 forks

Saliency for one-dimensional input #93

Open ghost opened 6 years ago

ghost commented 6 years ago

I have multiple samples of one-dimensional inputs being classified into two classes, 0 and 1. I would like to know which parts of the signal are responsible for each class, respectively, and how strong the influence of those signal parts is.

A simple Keras CNN is used as the classification model, and the saliency is computed in the following way:

import numpy as np
from keras import activations
from vis.utils import utils
from vis.visualization import visualize_saliency

# Swap the final activation for linear, as recommended by keras-vis
layer_idx = -1
model_final.layers[layer_idx].activation = activations.linear
model_final = utils.apply_modifications(model_final)

beispiel = (x_test[0, :, :]).astype(dtype=float)

print(np.shape(beispiel))   # (2600, 1)

grads = visualize_saliency(model_final, layer_idx, filter_indices=None,
                           seed_input=beispiel, backprop_modifier='guided')

Question 1: The imshow used in the examples does not work for 1D samples, so I tried plotting it differently. However, I am unsure whether I should compute the mean over the grads dimensions, or how else to treat the three dimensions of grads. At the moment I have:

import matplotlib.pyplot as plt

# Collapse the channel axis so the saliency map is one-dimensional
m = np.mean(grads, axis=1)
print(m)
print(np.shape(m))

x = np.arange(15, 275) * 0.2
plt.plot(x, beispiel[-260:, :], label='original data')
plt.plot(x, m[-260:], label='Saliency')
plt.legend()
plt.show()
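One way to make the overlay readable is to collapse the channel axis and then min-max normalize the saliency to [0, 1] before plotting it against the signal. A minimal numpy sketch, where the (2600, 1) shape is an assumption taken from the question and random values stand in for the real gradients:

```python
import numpy as np

# Stand-in for the saliency map; the real one comes from visualize_saliency
grads = np.random.rand(2600, 1)

# Collapse the channel axis, then min-max normalize to [0, 1]
m = grads.mean(axis=1)
m = (m - m.min()) / (m.max() - m.min() + 1e-8)

print(m.shape)  # (2600,)
```

After this rescaling, both curves can share one axis, or the saliency can be drawn on a secondary y-axis via `plt.twinx()`.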

Question 2: How should I proceed to make this label-specific, i.e. to find which signal parts are responsible for each label?
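In keras-vis, the `filter_indices` argument of `visualize_saliency` selects which output node's score is differentiated, so passing the class index (e.g. `filter_indices=0` or `filter_indices=1`) should yield one saliency map per class. The underlying idea can be illustrated with a toy two-class linear model, where the gradient of each class score with respect to the input is just that class's weight row (a conceptual sketch, not the keras-vis implementation):

```python
import numpy as np

# Toy two-class linear model: score_c(x) = W[c] @ x
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 6))   # one weight row per class
x = rng.normal(size=6)        # a short 1D "signal"

def score(x, c):
    return W[c] @ x

# Numeric gradient of class 0's score w.r.t. each input position
eps = 1e-6
num_grad0 = np.array([(score(x + eps * np.eye(6)[i], 0) - score(x, 0)) / eps
                      for i in range(6)])

# The gradient recovers W[0]: each class highlights different signal parts
print(np.allclose(num_grad0, W[0], atol=1e-4))  # True
```

Saliency methods generalize this: they compute the gradient of the chosen class score with respect to the input, so changing `filter_indices` changes which parts of the signal light up.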

karthikbmk commented 6 years ago

Hi,

To visualize gradients for 1D inputs, follow these steps:

  1. Open keras-vis's saliency.py file.
  2. In the visualize_saliency_with_losses function, replace `return np.uint8(cm.jet(grads)[..., :3] * 255)[0]` with `return grads[0]`.