raghakot / keras-vis

Neural network visualization toolkit for keras
https://raghakot.github.io/keras-vis
MIT License

Multiple inbound nodes error when visualising dense softmax #37

Open dgorissen opened 7 years ago

dgorissen commented 7 years ago

I'm using the Keras VGG16 net with a custom top layer (4-class softmax).

Loaded model with layers: 
[u'input_2', u'block1_conv1', u'block1_conv2', u'block1_pool', u'block2_conv1', u'block2_conv2', u'block2_pool', u'block3_conv1', u'block3_conv2', u'block3_conv3', u'block3_pool', u'block4_conv1', u'block4_conv2', u'block4_conv3', u'block4_pool', u'block5_conv1', u'block5_conv2', u'block5_conv3', u'block5_pool', u'flatten_2', u'dense_2', u'dropout_2', u'predictions']

However, trying to visualise the filters for the top level (Dense) softmax layer ('predictions') is throwing an error on:

tot_filters = get_num_filters(layer)

 File "/usr/local/lib/python2.7/site-packages/keras_vis-0.3-py2.7.egg/vis/visualization.py", line 30, in get_num_filters
    isDense = K.ndim(layer.output) == 2
  File "/Users/dgorissen/Library/Python/2.7/lib/python/site-packages/keras/engine/topology.py", line 933, in output
    ' has multiple inbound nodes, '
AttributeError: Layer predictions has multiple inbound nodes, hence the notion of "layer output" is ill-defined. Use `get_output_at(node_index)` instead.

Hardcoding the value to 4 results in the same error later on:

   img = visualize_activation(model, layer_idx, filter_indices=[idx])
  File "/usr/local/lib/python2.7/site-packages/keras_vis-0.3-py2.7.egg/vis/visualization.py", line 108, in visualize_activation
    opt = Optimizer(model.input, losses)
  File "/usr/local/lib/python2.7/site-packages/keras_vis-0.3-py2.7.egg/vis/optimizer.py", line 35, in __init__
    loss_fn = weight * loss.build_loss()
  File "/usr/local/lib/python2.7/site-packages/keras_vis-0.3-py2.7.egg/vis/losses.py", line 76, in build_loss
    layer_output = self.layer.output
  File "/Users/dgorissen/Library/Python/2.7/lib/python/site-packages/keras/engine/topology.py", line 933, in output
    ' has multiple inbound nodes, '
AttributeError: Layer predictions has multiple inbound nodes, hence the notion of "layer output" is ill-defined. Use `get_output_at(node_index)` instead.

Looking at the docs and googling around, it would seem that this should just work. What am I missing?

Edit: My overall model is a Sequential() model stacked on top of the base VGG model. The output above is after flattening the overall model into a single Sequential() model with a flat list of layers. However, that flattening seems to break things: after flattening, trying to visualise block5_conv1 gives the same error, while without flattening it works fine.

So I guess my question is: how best to deal with nested models, or how to get the overall model into a shape where I can visualise the last dense layer?

raghakot commented 7 years ago

Can you post a gist of your code? It looks as though the output has two inbound nodes. I have to make some changes to support multiple inputs/outputs. Having a gist would help me verify as well.

dgorissen commented 7 years ago

Thanks for the response. I will try to refactor/clean this into a self-contained example, but fundamentally I'm not doing anything complicated. Essentially this, adding a new Sequential model on top of a pre-trained base: https://gist.github.com/fchollet/7eb39b44eb9e16e59632d25fb3119975

dgorissen commented 7 years ago

OK, here is an example. I'm guessing you would need to add support for visualising models that have been stacked in this way by recursing into them.

from __future__ import division, print_function
from os import path
import numpy as np
from vis.utils import utils
from vis.visualization import visualize_activation, get_num_filters
from keras.models import Model, Sequential
from keras.layers import Dropout, Flatten, Dense, Input
from keras import applications
from PIL import Image

# Base model
input_tensor = Input(shape=(100, 100, 3))
base_model = applications.VGG16(weights='imagenet',
                                include_top=False,
                                input_tensor=input_tensor)

# New top layer(s)
input_shape = base_model.output_shape[1:]
top_model = Sequential()
top_model.add(Flatten(input_shape=input_shape))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(5, activation='softmax', name='predictions'))

# Combine base and top
model = Model(inputs=[base_model.input],
              outputs=[top_model(base_model.output)])

# Print summary
model.summary()

##### This works
layer_name = 'block5_conv1'

##### How to visualise the top softmax?
# This layer exists, but it needs to be unpacked first, so this fails:
#layer_name = "sequential_1"

# Ideally one could do this:
#layer_name = "sequential_1/predictions/Softmax"

layer, layer_idx = [(l, idx) for idx, l in enumerate(model.layers) if l.name == layer_name][0]

tot_filters = get_num_filters(layer)

filters = np.arange(tot_filters)
vis_images = []
for idx in filters[:2]:
    img = visualize_activation(model, layer_idx, filter_indices=[idx])
    img = utils.draw_text(img, str(idx))
    vis_images.append(img)

fn = "filters_%s.png" % layer_name
stitched = utils.stitch_images(vis_images, cols=8)
im = Image.fromarray(stitched)
im.save(fn)

I thought I could work around this by flattening the layers:


def flatten_model(model):
    layers_flat = []
    for layer in model.layers:
        try:
            layers_flat.extend(layer.layers)
        except AttributeError:
            layers_flat.append(layer)

    model_flat = Sequential(layers_flat)
    return model_flat

But this, unfortunately, also gives errors.

dgorissen commented 7 years ago

Any suggested way forward on this?

raghakot commented 7 years ago

Sorry, I haven't looked at this yet. Will get to it this week.

dgorissen commented 7 years ago

No worries at all. I started editing to create a patch but didn't find enough time to complete it properly.

8enmann commented 7 years ago

I'm seeing the same issue with a sequential model.

8enmann commented 7 years ago

This let the code run:

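# Sketch: a patched copy of visualize_activation from vis/visualization.py.
# Drop it in place of the original definition, keeping that file's existing imports.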
def visualize_activation(model, layer_idx, filter_indices=None,
                         seed_input=None, input_range=(0, 255),
                         backprop_modifier=None, grad_modifier=None,
                         act_max_weight=1, lp_norm_weight=10, tv_weight=10,
                         **optimizer_params):
    if backprop_modifier is not None:
        modifier_fn = get(backprop_modifier)
        model = modifier_fn(model)

    if len(model.inbound_nodes) > 1:
        input_ = model.get_input_at(0)
    else:
        input_ = model.input

    losses = [
        (ActivationMaximization(model.layers[layer_idx], filter_indices), act_max_weight),
        (LPNorm(input_), lp_norm_weight),
        (TotalVariation(input_), tv_weight)
    ]

    # Add grad_filter to optimizer_params.
    optimizer_params = utils.add_defaults_to_kwargs({
        'grad_modifier': grad_modifier
    }, **optimizer_params)

    return visualize_activation_with_losses(input_, losses, seed_input, input_range, **optimizer_params)

MathiasKahlen commented 7 years ago

Any updates on this? I don't understand the answer from @8enmann - how do I implement that? I am trying to do the same with visualize_cam and visualize_saliency but get the multiple inbound nodes error. Hope you can help.

yuval-harpaz commented 6 years ago

I had a similar error for a network with two output nodes, dense_1_1/Relu:0 and sequential_2/dense_1/Relu:0. The solution for me was to go to losses.py and change layer_output = self.layer.output to layer_output = self.layer.get_output_at(-1).

JesperChristensen89 commented 6 years ago

Any updates on this @raghakot? I'm seeing the same thing trying to visualize a MobileNet with custom top.

swapb94 commented 6 years ago

I also got the same error while trying to use layer.output. I used the MobileNet pre-trained model as well.

Rubenkl commented 6 years ago

Resnet5, same problem...

JesperChristensen89 commented 6 years ago

Try building your model without using a sequential model. E.g. for VGG-16 do this:

from keras import applications
from keras.layers import Dense, Flatten
from keras.models import Model

img_width, img_height = 224, 224  # hypothetical input size; set to match your data

# get base model
base_model = applications.VGG16(weights='imagenet', include_top=False,
                                input_shape=(img_width, img_height, 3))

# build top model
x = Flatten(name='flatten')(base_model.output)
x = Dense(4096, activation='relu', name='fc1')(x)
x = Dense(4096, activation='relu', name='fc2')(x)
x = Dense(3, activation='softmax', name='predictions')(x)

# stitch together
model = Model(inputs= base_model.input, outputs=x)

# inspect
model.summary()

This builds the model as one unified network rather than two cascading ones.
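If you have already trained the cascaded version, rebuilding like this does not have to mean retraining. A minimal sketch for carrying the weights over, assuming old_model is the trained cascade and the rebuilt model ends up with the same layers in the same order:

# hypothetical: old_model is the trained base+top cascade from the earlier comments;
# get_weights() returns a flat, ordered list, so it transfers if the layer order matches
model.set_weights(old_model.get_weights())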

pcatattacks commented 6 years ago

A hacky fix I found was going into the source code in losses.py, line 76, and changing

layer_output = self.layer.output

to

layer_output = self.layer.outputs[i]

where i is a valid index into the list self.layer.outputs. I'm trying to do something similar to what you are and having trouble too - it seems like you have to refer to the self.layer.outputs list for the specific output you want when you stack models. It's redundant, since the list only has one element (for me at least), but hopefully this can serve as a temporary fix.

keisen commented 6 years ago

Hi @pcatattacks, your problem seems different from this issue. I actually modified `losses.py` as you described, but it did not work around the error.

I think the cause of this issue is the cascaded model, so the solution, as @JesperChristensen89 said, is to build the model as one unified network.

SahuH commented 5 years ago

Tried the tricks mentioned by @pcatattacks and @yuval-harpaz, but neither of them worked. I too am using a cascaded model: a Sequential() model on top of a base InceptionV3 model. I would really prefer not to re-build (as mentioned by @JesperChristensen89) and re-train the model.

itsnamgyu commented 5 years ago

@yuval-harpaz's hack seems to work for me. I'm using a cascaded model with a Sequential model on top of a base MobileNetV2 (alpha=0.25).

I had a similar error for a network with two output nodes, dense_1_1/Relu:0 and sequential_2/dense_1/Relu:0. The solution for me was to go to losses.py and change layer_output = self.layer.output to layer_output = self.layer.get_output_at(-1).

For those of you who want to try out this method, I'll reformat the code:

Line 76 of losses.py

    #layer_output = self.layer.output
    layer_output = self.layer.get_output_at(-1)

martin-etchart commented 5 years ago

I have the same issue. I managed to make it work by saving my model architecture to JSON and loading it from there instead of sequentially assembling it. I suspect the JSON save cleans up the inbound-node issue... anyhow, it worked.
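A minimal sketch of that round-trip, assuming model is the cascaded model (to_json only stores the architecture, so the weights are carried over by hand):

from keras.models import model_from_json

json_arch = model.to_json()          # architecture only, no weights
rebuilt = model_from_json(json_arch) # fresh graph with clean inbound nodes
rebuilt.set_weights(model.get_weights())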

devjaynemorais commented 5 years ago

Could someone help me solve the following problem?

Environment: Keras==1.1.0 Theano==1.0.2 numpy==1.15.1 scipy==1.3.0

I created a fine-tuned model and froze all layers except layer [2], because I want to get the activation values only from layer [2].

Network summary before freezing:

Layer (type)        | Output Shape | Param # | Connected to
dense_1 (Dense)     | (None, 512)  | 2097664 | dense_input_1[0][0]
dropout_1 (Dropout) | (None, 512)  | 0       | dense_1[0][0], dense_1[0][0]
dense_2 (Dense)     | (None, 32)   | 16416   | dropout_1[0][0], dropout_1[1][0]
dropout_2 (Dropout) | (None, 32)   | 0       | dense_2[0][0], dense_2[1][0]
dense_3 (Dense)     | (None, 1)    | 33      | dropout_2[0][0], dropout_2[1][0]

Total params: 2114113

Freezing layers:

for layer in model.layers[0:]:
    layer.trainable = False
model.layers[2].trainable = True

Network summary after freezing:

Layer (type)        | Output Shape | Param # | Connected to
dense_1 (Dense)     | (None, 512)  | 0       | dense_input_1[0][0]
dropout_1 (Dropout) | (None, 512)  | 0       | dense_1[0][0]
dense_2 (Dense)     | (None, 32)   | 16416   | dropout_1[1][0]
dropout_2 (Dropout) | (None, 32)   | 0       | dense_2[1][0]
dense_3 (Dense)     | (None, 1)    | 0       | dropout_2[1][0]

Total params: 16416

To print the output of layer [2]:

OutFunc = keras.backend.function([model2.input], [model2.layers[2].get_output_at(0)])
out_val = OutFunc([inputs])[0]
print(out_val)

This returns the following error:

MissingInputError                         Traceback (most recent call last)
<ipython-input> in <module>
      1 #OutFunc = keras.backend.function([model2.input], [model2.layers[0].output])
----> 2 OutFunc = keras.backend.function([model2.input], [model2.layers[2].get_output_at(0)])
...
MissingInputError: Input 0 of the graph (indices start from 0), used to compute
InplaceDimShuffle{x,x}(keras_learning_phase), was not provided and not given a value.
Use the Theano flag exception_verbosity='high' for more information on this error.
Backtrace when that variable is created:
  File ".../keras/backend/theano_backend.py", line 23, in <module>
    _LEARNING_PHASE = T.scalar(dtype='uint8', name='keras_learning_phase')  # 0 = test, 1 = train
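One thing worth checking: the error points at keras_learning_phase, which suggests the compiled Theano function also needs the learning phase as an explicit input (the Dropout layers make the graph depend on it). A sketch of that variant, with the same names as above and 0 = test phase:

OutFunc = keras.backend.function([model2.input, keras.backend.learning_phase()],
                                 [model2.layers[2].get_output_at(0)])
out_val = OutFunc([inputs, 0])[0]
print(out_val)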

kweweli commented 5 years ago

I get a similar error when I use Model(inputs=..., outputs=...). I usually work around this by explicitly iterating over the layers of the model I want to transfer-learn from, something like this:

# First create a dictionary to hold layer names and indices. This is not necessary,
# but I like to have two ways of getting to a layer (by index or by layer name),
# and the layers method doesn't take a string / layer name.

model_layer = {v.name: i for i, v in enumerate(model.layers)}
model_temp = Sequential(name="trash2")

# counter for getting the layer index
count = -1
# pass the layer name to the dict to get the index
for i in model.layers[:model_layer[last_layer] + 1]:
    # add layers to the new model using the old model's layers
    model_temp.add(i)
    count += 1
    # set the weights to the old model's weights
    model_temp.layers[count].set_weights(i.get_weights())

You could then add additional layers to your new model (model_temp) as usual, as in the sketch below. This always works for me without the error.
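A hypothetical continuation, just to illustrate (the layer name and the new head are made up):

from keras.layers import Dense, Flatten

last_layer = 'block5_pool'  # hypothetical: name of the last layer to copy over
# ... run the loop above, then extend the new model as usual:
model_temp.add(Flatten())
model_temp.add(Dense(5, activation='softmax', name='predictions'))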

somz22 commented 4 years ago

I had the same problem. I tried many techniques, but converting the sequential model to functional worked for me.

A workaround to convert a sequential model to functional:

from keras import layers, models

input_layer = layers.Input(batch_shape=model.layers[0].input_shape)
prev_layer = input_layer
for layer in model.layers:
    layer._inbound_nodes = []
    prev_layer = layer(prev_layer)

funcModel = models.Model([input_layer], [prev_layer])
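Hypothetical usage with keras-vis after the conversion, assuming a layer named 'predictions' as in the earlier examples and that utils.find_layer_idx is available in your keras-vis version:

from vis.utils import utils
from vis.visualization import visualize_activation

# the functional copy has fresh, single inbound nodes, so layer lookup works again
layer_idx = utils.find_layer_idx(funcModel, 'predictions')
img = visualize_activation(funcModel, layer_idx, filter_indices=[0])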

Purav-Zumkhawala commented 3 years ago

@itsnamgyu @pcatattacks I see a lot of people saying the workaround is to edit losses.py, but where can I find losses.py? If it is in the site-packages inside Keras, I cannot find the text "layer_output = self.layer.output" in that losses.py file. Is this because of Keras's version?

I am currently using Keras 2.3.1. If I can solve this issue by downgrading my Keras version, which version should I install?

Please advise

yuval-harpaz commented 3 years ago

Don't you get the path in the error message?

Purav-Zumkhawala commented 3 years ago

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-18-a12f24aa46b4> in <module>
     33 while count<8:
     34     print(count)
---> 35     model = pop(model,count)
     36     count+=1
     37 op.reverse()

<ipython-input-18-a12f24aa46b4> in pop(model, i)
     24         else:
     25             model.layers[-1].outbound_nodes = []
---> 26             model.outputs = [model.layers[-i].output]
     27             op.append(model.layers[-i].output)
     28         model.built = True

~\anaconda3\envs\vision\lib\site-packages\keras\engine\base_layer.py in output(self)
    843         if len(self._inbound_nodes) > 1:
    844             raise AttributeError('Layer ' + self.name +
--> 845                                  ' has multiple inbound nodes, '
    846                                  'hence the notion of "layer output" '
    847                                  'is ill-defined. '

AttributeError: Layer vgg16 has multiple inbound nodes, hence the notion of "layer output" is ill-defined. Use `get_output_at(node_index)` instead.

@yuval-harpaz This is my error log.

yuval-harpaz commented 3 years ago

I don't have the package now, but try digging in: go to keras\engine and try to locate losses.py.

Purav-Zumkhawala commented 3 years ago

There is no losses.py containing that text anywhere on my system. I think the code has changed and that workaround does not work for the latest version.

yuval-harpaz commented 3 years ago

Dig in further; search inside your files for self.layer.get_output_at, and don't give up. Every bug has a fix. On Linux I use grep -lr; I don't know the equivalent for you.

fengcong1992 commented 2 years ago

There is no losses.py containing that text anywhere on my system. I think the code has changed and that workaround does not work for the latest version.

I have the same issue as you. losses.py and the other files in Keras 2.3.1 do not contain the line "layer_output = self.layer.output". Any ideas?