raghakot / keras-vis

Neural network visualization toolkit for keras
https://raghakot.github.io/keras-vis
MIT License
2.97k stars 664 forks

Cannot get intermediate activations #114

Open wosiu opened 6 years ago

wosiu commented 6 years ago

Keras, TensorFlow, and keras-vis installed from pip. I'm trying to use visualize_activation_with_losses, but starting from an intermediate tensor rather than the original model input.

pred_layer = model.layers[-1] 
target = out
losses = [(MSE(pred_layer, target), 1)]

# vis_layer = model.input  # it works when I use this one
vis_layer = model.layers[6].input  # but this one throws an error. This is the convolution input, after the LeakyReLU

in_shape = vis_layer.shape.as_list()
in_shape[0] = 1
in_shape = tuple(in_shape)
feed = np.random.normal(size=in_shape)

activ = visualize_activation_with_losses(vis_layer, losses, seed_input=feed,
                                         max_iter=10, verbose=True)
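(As an aside, the shape handling above, pinning the unknown batch dimension to 1 before sampling a random seed, amounts to this plain-NumPy sketch; the shape values are illustrative, taken from the intermediate layer I'm targeting:)

```python
import numpy as np

# vis_layer.shape.as_list() returns something like [None, 480, 336, 16],
# where None is the unknown batch dimension. It has to be replaced with a
# concrete value (1) before drawing a random seed input.
symbolic_shape = [None, 480, 336, 16]  # illustrative values
concrete_shape = tuple(1 if d is None else d for d in symbolic_shape)
feed = np.random.normal(size=concrete_shape)
print(feed.shape)  # (1, 480, 336, 16)
```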

And getting:

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'input_1' with dtype float and shape [?,960,672,1]
     [[Node: input_1 = Placeholder[dtype=DT_FLOAT, shape=[?,960,672,1], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

The model's original input does indeed have shape [?,960,672,1]. However, I would like to get intermediate images for each activation before a convolution. The head of my architecture looks like this:

Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 960, 672, 1)       0         
_________________________________________________________________
input_mean_normalization (La (None, 960, 672, 1)       0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 960, 672, 16)      160       
_________________________________________________________________
batch_normalization_1 (Batch (None, 960, 672, 16)      64        
_________________________________________________________________
leaky_re_lu_1 (LeakyReLU)    (None, 960, 672, 16)      0         
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 480, 336, 16)      0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 480, 336, 32)      4640      
_________________________________________________________________
etc...

and, as you can see in the code above, I want to get the 16 activations of size 480x336 (like an image with 16 channels) that feed into the 6th layer (conv2d_2).
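(Note from an editor: the placeholder error happens because the optimizer's gradient graph still hangs off the model's original input placeholder. A common workaround is to make the intermediate tensor a real model input by re-applying the remaining layers to a fresh Input of the matching shape, then visualizing against that sub-model. A minimal sketch of the idea, with made-up toy shapes and plain tf.keras rather than the exact keras-vis setup from this issue:)

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy stand-in for the network in this issue (made-up small shapes).
model = keras.Sequential([
    layers.Input(shape=(16, 16, 1)),
    layers.Conv2D(4, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(8, 3, padding="same", activation="relu"),
])

# Rebuild the tail so the intermediate tensor becomes a feedable input.
# model.layers[2] is the second Conv2D; its input shape here is (8, 8, 4).
new_in = layers.Input(shape=(8, 8, 4))
x = new_in
for layer in model.layers[2:]:
    x = layer(x)
tail = keras.Model(new_in, x)

# A random "intermediate activation" can now be fed directly.
seed = np.random.normal(size=(1, 8, 8, 4)).astype("float32")
out = tail.predict(seed)
print(out.shape)  # (1, 8, 8, 8)
```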

keisen commented 5 years ago

I created an example with VGG16. Please refer to the code below; it is executable.

from keras.applications import VGG16
from keras.models import Model
from keras.layers import Input
from keras import activations

from vis.utils import utils
from vis.losses import ActivationMaximization
from vis.regularizers import TotalVariation, LPNorm
from vis.visualization.activation_maximization import visualize_activation_with_losses

# Load VGG16 model
model = VGG16()

# Replace the activation of top layer
output_layer = model.layers[-1]
output_layer.activation = activations.linear
model = utils.apply_modifications(model)

# top layer
output_layer = model.layers[-1]

# Target intermediate layer
intermediate_layer = model.get_layer('block3_conv1')

losses = [
    (ActivationMaximization(output_layer, 20), 1),
    (LPNorm(intermediate_layer.input), 10),
    (TotalVariation(intermediate_layer.input), 10)
]

result = visualize_activation_with_losses(intermediate_layer.input, losses)
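(Editor's note: the `result` returned above is a float image array, so it usually needs rescaling to the 0-255 range before display. A minimal NumPy helper for that last step; this is my own sketch, not keras-vis's built-in deprocessing:)

```python
import numpy as np

def to_displayable(img):
    """Rescale a float activation image to uint8 [0, 255] for display."""
    img = np.asarray(img, dtype="float64")
    img -= img.min()          # shift so the minimum is 0
    peak = img.max()
    if peak > 0:              # avoid dividing a constant image by zero
        img /= peak           # normalize to [0, 1]
    return (img * 255).astype("uint8")

# Demo on a random array standing in for a visualization result.
demo = np.random.normal(size=(8, 8, 3))
shown = to_displayable(demo)
print(shown.dtype, shown.min(), shown.max())  # uint8 0 255
```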