raghakot / keras-vis

Neural network visualization toolkit for keras
https://raghakot.github.io/keras-vis
MIT License

TypeError: unsupported operand type(s) for *: 'int' and 'NoneType' #186

Open DaQiZi opened 5 years ago

DaQiZi commented 5 years ago
from keras.applications import VGG16
from vis.utils import utils
from keras import activations

from vis.visualization import visualize_activation
# from vis.backend import sel

model = VGG16(weights="imagenet", include_top=False)

layer_idx = utils.find_layer_idx(model, 'block5_conv3')

img = visualize_activation(model, layer_idx, filter_indices=20)

I tried to run the above code, but I got the following error.

I am running on Windows 10, with keras-vis version 0.5.0.

I don't know where I made a mistake. I hope someone kind can help me. Thank you very much.

Traceback (most recent call last):
  File "D:/DLCode/amdemo/layerAMVersion2.py", line 12, in <module>
    visualize_activation(model, layer_idx, filter_indices=20)
  File "D:\softwave\anaconda\lib\site-packages\keras_vis-0.5.0-py3.6.egg\vis\visualization\activation_maximization.py", line 112, in visualize_activation
    seed_input, input_range, **optimizer_params)
  File "D:\softwave\anaconda\lib\site-packages\keras_vis-0.5.0-py3.6.egg\vis\visualization\activation_maximization.py", line 42, in visualize_activation_with_losses
    opt = Optimizer(input_tensor, losses, input_range, wrt_tensor=wrt_tensor)
  File "D:\softwave\anaconda\lib\site-packages\keras_vis-0.5.0-py3.6.egg\vis\optimizer.py", line 52, in __init__
    loss_fn = weight * loss.build_loss()
  File "D:\softwave\anaconda\lib\site-packages\keras_vis-0.5.0-py3.6.egg\vis\regularizers.py", line 101, in build_loss
    return normalize(self.img, value)
  File "D:\softwave\anaconda\lib\site-packages\keras_vis-0.5.0-py3.6.egg\vis\regularizers.py", line 24, in normalize
    return output_tensor / np.prod(image_dims)
  File "D:\softwave\anaconda\lib\site-packages\numpy\core\fromnumeric.py", line 2772, in prod
    initial=initial)
  File "D:\softwave\anaconda\lib\site-packages\numpy\core\fromnumeric.py", line 86, in _wrapreduction
    return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
TypeError: unsupported operand type(s) for *: 'int' and 'NoneType'

keisen commented 5 years ago

Hi, @DaQiZi. Can I ask a question about your code?

model = VGG16(weights="imagenet",include_top=False)

Is include_top=False what you intended?

keisen commented 5 years ago

When include_top=True, the model's input shape is (?, 224, 224, 3). But when include_top=False, it is (?, ?, ?, 3), i.e., it is not fixed.

visualize_activation is a function that creates an input image that maximizes the loss, so it needs a model whose input shape is fixed.
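For reference, a minimal sketch of one way around this (an assumption about the intended usage, not from the original report): pass an explicit input_shape so that include_top=False still yields a fixed input shape.

from keras.applications import VGG16
from vis.utils import utils
from vis.visualization import visualize_activation

# Assumed shape: any concrete (H, W, 3) accepted by VGG16 works; 224x224 matches the ImageNet default.
model = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
layer_idx = utils.find_layer_idx(model, 'block5_conv3')

# With the input shape fixed, the regularizer can compute np.prod over the image dimensions,
# so the TypeError above no longer occurs.
img = visualize_activation(model, layer_idx, filter_indices=20)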

DaQiZi commented 5 years ago

When include_top=True, the model's input shape is (?, 224, 224, 3). But when include_top=False, it is (?, ?, ?, 3), i.e., it is not fixed.

visualize_activation is a function that creates an input image that maximizes the loss, so it needs a model whose input shape is fixed.

Thank you, I get it now. My goal is for the output of a certain layer of the model to be as close as possible to a specified value:

[image: formula for the custom loss, with g(x_i) denoting a layer's output]

In other words, I want to define a custom loss function in which g(x_i) refers to the output of a whole convolution layer, not the output of a single filter of that layer. I didn't get very good results with activation maximization on my own, so I turned to the keras-vis package. Can keras-vis do this? I don't know it very well.
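For what it's worth, here is a rough sketch of how that could look with keras-vis's lower-level API (the loss class, its name, and target_output are assumptions about the goal, not part of the library):

import keras.backend as K
from vis.losses import Loss
from vis.optimizer import Optimizer

class LayerTargetLoss(Loss):
    # Hypothetical loss: squared distance between a whole layer's output and a target tensor,
    # i.e. g(x_i) is the layer's output, not a single filter's output.
    def __init__(self, layer, target):
        super(LayerTargetLoss, self).__init__()
        self.name = "LayerTargetLoss"
        self.layer = layer
        self.target = target

    def build_loss(self):
        return K.sum(K.square(self.layer.output - self.target))

# target_output is assumed to be a numpy array with the same shape as the layer's output.
losses = [(LayerTargetLoss(model.layers[layer_idx], K.constant(target_output)), 1.0)]
opt = Optimizer(model.input, losses)
img, grads, _ = opt.minimize(max_iter=200, verbose=True)

As in the earlier sketch, this still needs a model with a fixed input shape.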

keisen commented 5 years ago

My goal is for the output of a certain layer of the model to be as close as possible to a specified value

It seems that normal loss functions like keras.losses.* would achieve that. Did I understand you correctly? If not, could you please explain what you mean by the specified value?

LContini91 commented 4 years ago

I'm encountering the same issue. I am using a VGG16 model with include_top=False because I need a Global Average Pooling layer at the end, followed by a Dense layer with 1 output neuron for binary classification.

After training it, I want to visualise the activations with the following code:
from vis.utils import utils
from keras import activations

# Build the VGG16 network with ImageNet weights

# Utility to search for layer index by name.
# Alternatively we can specify this as -1 since it corresponds to the last layer.
layer_idx = utils.find_layer_idx(model, 'dense_4')

# Swap softmax with linear
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)

And:
from vis.visualization import visualize_activation
import matplotlib.pyplot as plt

plt.rcParams['figure.figsize'] = (18, 6)

img2 = visualize_activation(model, layer_idx)
plt.imshow(img2)

I get this error message: TypeError: unsupported operand type(s) for *: 'int' and 'NoneType'
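The cause is likely the same as above: with include_top=False and no input_shape, the model's input is (?, ?, ?, 3). A hedged sketch of how the model could be built so the input is fixed (the layer name 'dense_4' and the sigmoid output are assumptions based on the snippet above):

from keras.applications import VGG16
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

# An explicit input_shape keeps include_top=False while giving visualize_activation a defined input.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
x = GlobalAveragePooling2D()(base.output)
out = Dense(1, activation="sigmoid", name="dense_4")(x)
model = Model(inputs=base.input, outputs=out)

The activation swap and visualize_activation calls from the snippet above should then run unchanged.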

DaQiZi commented 4 years ago

My goal is for the output of a certain layer of the model to be as close as possible to a specified value

It seems that normal loss functions like keras.losses.* would achieve that. Did I understand you correctly? If not, could you please explain what you mean by the specified value?

The formula I'm talking about is actually inversion. I noticed that the author tagged inversion as ready, but I couldn't find it.
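In case it helps, inversion could be approximated with the custom-loss sketch further up by taking the target from a reference image (a hedged suggestion, not a built-in keras-vis feature; x0 is an assumed reference batch of shape (1, H, W, 3)):

import keras.backend as K

# Compute g(x0), the chosen layer's activations for the reference image.
get_layer_output = K.function([model.input], [model.layers[layer_idx].output])
target_output = get_layer_output([x0])[0]
# Use target_output as the target in LayerTargetLoss above and minimize as before.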