raghakot / keras-vis

Neural network visualization toolkit for keras
https://raghakot.github.io/keras-vis
MIT License
2.98k stars 660 forks

Help needed with visualize_cam and conv1d based architecture #76

Open ggSQRT opened 7 years ago

ggSQRT commented 7 years ago

Hi!

I have already trained a CNN with the following architecture:

```
Layer (type)                 Output Shape              Param #
=================================================================
conv1d_1 (Conv1D)            (None, 101, 50)           2050
max_pooling1d_1 (MaxPooling1 (None, 101, 50)           0
dropout_1 (Dropout)          (None, 101, 50)           0
flatten_1 (Flatten)          (None, 5050)              0
dense_1 (Dense)              (None, 200)               1010200
dropout_2 (Dropout)          (None, 200)               0
dense_2 (Dense)              (None, 100)               20100
dropout_3 (Dropout)          (None, 100)               0
dense_3 (Dense)              (None, 50)                5050
dropout_4 (Dropout)          (None, 50)                0
dense_4 (Dense)              (None, 2)                 102
```

The input is composed of equally sized sequences over the DNA alphabet, one-hot encoded. As an example, the sequence "ATGC" is transformed into [[1,0,0,0], [0,1,0,0], [0,0,1,0], [0,0,0,1]]. The input shape is (101, 4), dtype np.float32. There are also two classes of sequences: [1,0] (positive) and [0,1] (negative).
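For reference, the encoding described above can be written as a small helper. This `one_hot` function is a hypothetical sketch (not part of the user's posted code), assuming the channel order A, T, G, C:

```python
import numpy as np

ALPHABET = "ATGC"  # assumed channel order: A, T, G, C

def one_hot(seq):
    """Encode a DNA string as a (len(seq), 4) float32 one-hot matrix."""
    idx = [ALPHABET.index(base) for base in seq]
    # Indexing the identity matrix picks out one-hot rows per base.
    return np.eye(4, dtype=np.float32)[idx]
```

With this helper, `one_hot("ATGC")` yields exactly the 4x4 matrix given above.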

My objective is to start from a "random" sequence and, by applying grad-CAM, adjust the frequency of each letter at every position, ending up with a consensus sequence that maximizes the positive-class prediction. If this initial random sequence had 4 letters, its one-hot encoding would be [[0.25,0.25,0.25,0.25], [0.25,0.25,0.25,0.25], [0.25,0.25,0.25,0.25], [0.25,0.25,0.25,0.25]].
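A minimal sketch of that uniform starting point, plus a hypothetical `consensus` helper (not in the original post) for reading a frequency matrix back into a sequence, again assuming channel order A, T, G, C:

```python
import numpy as np

SEQ_LEN = 101      # matches the model's input length
ALPHABET = "ATGC"  # assumed channel order

# Uniform starting point: every base equally likely at every position.
random_sequence_one_hot = np.full((SEQ_LEN, 4), 0.25, dtype=np.float32)

def consensus(freqs):
    """Read off the most likely base at each position of a (L, 4) matrix."""
    return "".join(ALPHABET[i] for i in np.argmax(freqs, axis=1))
```

Note that `np.argmax` breaks ties by taking the first index, so the all-uniform matrix decodes to a run of "A"s until the frequencies are actually adjusted.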

This is my script so far (the part related to this objective):

```python
from vis.visualization import visualize_cam

layer_name = 'dense_1'
layer_idx = [idx for idx, layer in enumerate(model.layers)
             if layer.name == layer_name][0]

results = visualize_cam(model, layer_idx, [0], random_sequence_one_hot,
                        penultimate_layer_idx=0)

print(results)
```

The output is (the printed array wrapped badly; the middle is lost):

```
[ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.
  ...  1.  0.  1.  0.  1.  0.  1.  0.  1.  0.]
<type 'numpy.ndarray'>
```

So, is my filter_indices parameter assigned correctly in that call? I'm still a little confused about that specific param.

Do you think I'm doing something wrong, or should I approach this differently?

Thanks a lot!

raghakot commented 7 years ago

To do what you describe, you should use dense_4 and maximize [1] for that layer. grad-CAM won't be a good fit here, since the penultimate (conv) layer is all the way at the beginning, with a lot of dense layers in between. So your options are:
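The underlying idea being recommended — adjust the input by gradient ascent so the class-1 output grows — can be sketched with a toy numpy stand-in. The linear model `W` below is a hypothetical placeholder for the trained network (not the keras-vis API), just to show the mechanics on a tiny flattened (4, 4) input:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained model: a linear map from a
# flattened (4, 4) one-hot sequence to two class logits, plus softmax.
W = rng.normal(size=(16, 2))

def class_probs(x):
    z = x @ W
    e = np.exp(z - z.max())
    return e / e.sum()

# Start from the uniform "random" sequence (0.25 everywhere).
x = np.full(16, 0.25)
p_before = class_probs(x)[1]

# Gradient ascent on the class-1 margin (logit_1 - logit_0);
# for a linear model its gradient w.r.t. x is simply W[:, 1] - W[:, 0].
for _ in range(50):
    x = x + 0.1 * (W[:, 1] - W[:, 0])

p_after = class_probs(x)[1]
```

After the ascent, `p_after` is strictly larger than `p_before`; in keras-vis the analogous step would target dense_4 with filter_indices=[1], as suggested above.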