raghakot / keras-vis

Neural network visualization toolkit for keras
https://raghakot.github.io/keras-vis
MIT License

backprop modifiers fail silently for advanced activations #177

Open eyaler opened 5 years ago

eyaler commented 5 years ago

I am using master. The code in tensorflow_backend now says:

ADD on 22 Jul 2018:
In fact, it has broken. Currently, advanced activations are not supported.
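This is the kind of call where the modifier gets silently ignored when the model contains advanced activation layers (a minimal sketch; model, layer_idx, and img are placeholders, not my actual code):

from vis.visualization import visualize_saliency

# On a model that uses LeakyReLU layers, master silently ignores the
# backprop_modifier below (model, layer_idx and img are placeholders).
grads = visualize_saliency(model, layer_idx=-1, filter_indices=0,
                           seed_input=img, backprop_modifier='guided')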

For my case with leaky ReLUs I tried changing the override map to also cover the LeakyRelu op:

with tf.get_default_graph().gradient_override_map({'Relu': backprop_modifier, 'LeakyRelu': backprop_modifier}):
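For context, here is roughly how the override mechanism is supposed to work. This is a minimal sketch in my own words, not keras-vis's actual code. Note that gradient_override_map only affects ops created inside the with block, so the graph has to be (re)built within the context; and if your TF version builds LeakyRelu out of primitive ops rather than a single fused op, the 'LeakyRelu' key never matches anything:

import tensorflow as tf

# Guided backprop: pass the gradient only where both the incoming gradient
# and the op's input are positive (standard formulation, TF 1.x graph mode).
@tf.RegisterGradient('GuidedBackProp')
def _guided_backprop(op, grad):
    dtype = op.inputs[0].dtype
    return grad * tf.cast(grad > 0., dtype) * tf.cast(op.inputs[0] > 0., dtype)

g = tf.get_default_graph()
with g.gradient_override_map({'Relu': 'GuidedBackProp',
                              'LeakyRelu': 'GuidedBackProp'}):
    # Only ops created inside this block pick up the override, so the
    # network is (re)built here. x/y/dy_dx are illustrative names.
    x = tf.placeholder(tf.float32, shape=(None, 4))
    y = tf.nn.leaky_relu(x, alpha=0.3)
    dy_dx = tf.gradients(y, x)[0]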

The extra override did not do anything for me, though. As a temporary fix I am converting my network's leaky ReLUs to plain ReLUs before calling visualize_saliency():

from keras.layers import Activation
from keras.models import Model

def replace_intermediate_layer_in_keras(model):
    # Rebuild the (sequential) graph, swapping every LeakyReLU layer for a
    # plain ReLU activation and reusing all other layers with their weights.
    layers = model.layers

    x = layers[0].output
    for i in range(1, len(layers)):
        if layers[i].name.startswith('leaky'):
            x = Activation('relu')(x)
        else:
            x = layers[i](x)

    return Model(inputs=layers[0].input, outputs=x)

Based on this: https://stackoverflow.com/a/49492256/664456
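Here is a usage sketch of the workaround (the toy model and inputs are assumptions for illustration, not my real network):

import numpy as np
from keras.layers import Dense, Input, LeakyReLU
from keras.models import Model
from vis.visualization import visualize_saliency

# Hypothetical toy model; Keras names LeakyReLU layers 'leaky_re_lu_*',
# so they match the startswith('leaky') check above.
inp = Input(shape=(8,))
h = Dense(16)(inp)
h = LeakyReLU(alpha=0.1)(h)
out = Dense(2, activation='softmax')(h)
model = Model(inp, out)

relu_model = replace_intermediate_layer_in_keras(model)
grads = visualize_saliency(relu_model, layer_idx=-1, filter_indices=0,
                           seed_input=np.random.random((8,)),
                           backprop_modifier='guided')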

@keisen @raghakot I am not sure whether my fix is a good solution, or whether I am losing something by doing the modification in advance.

thibtld commented 5 years ago

Hi, I don't think your solution is a good idea; I tested it and got very poor results. Which version of TensorFlow are you using?

I had this issue on 1.12.0 because, on that version, the LeakyRelu gradient is built from several primitive ops and looks quite different from ReluGrad (take a look at the ops in the graph):

gradients/leaky_re_lu_6/LeakyRelu_grad/Shape
gradients/leaky_re_lu_6/LeakyRelu_grad/Shape_1
gradients/leaky_re_lu_6/LeakyRelu_grad/Shape_2
gradients/leaky_re_lu_6/LeakyRelu_grad/zeros/Const
gradients/leaky_re_lu_6/LeakyRelu_grad/zeros
gradients/leaky_re_lu_6/LeakyRelu_grad/GreaterEqual
gradients/leaky_re_lu_6/LeakyRelu_grad/BroadcastGradientArgs
gradients/leaky_re_lu_6/LeakyRelu_grad/Select
gradients/leaky_re_lu_6/LeakyRelu_grad/Select_1
gradients/leaky_re_lu_6/LeakyRelu_grad/Sum
gradients/leaky_re_lu_6/LeakyRelu_grad/Reshape
gradients/leaky_re_lu_6/LeakyRelu_grad/Sum_1
gradients/leaky_re_lu_6/LeakyRelu_grad/Reshape_1
gradients/leaky_re_lu_6/LeakyRelu/mul_grad/Shape
gradients/leaky_re_lu_6/LeakyRelu/mul_grad/Shape_1
gradients/leaky_re_lu_6/LeakyRelu/mul_grad/BroadcastGradientArgs
gradients/leaky_re_lu_6/LeakyRelu/mul_grad/Mul
gradients/leaky_re_lu_6/LeakyRelu/mul_grad/Sum
gradients/leaky_re_lu_6/LeakyRelu/mul_grad/Reshape
gradients/leaky_re_lu_6/LeakyRelu/mul_grad/Mul_1
gradients/leaky_re_lu_6/LeakyRelu/mul_grad/Sum_1
gradients/leaky_re_lu_6/LeakyRelu/mul_grad/Reshape_1

instead of a single fused gradient op like ReluGrad.
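If you want to check what your own version builds, something like this lists the gradient ops (a minimal sketch for TF 1.x graph mode; the placeholder shape and alpha are arbitrary):

import tensorflow as tf

# Build a LeakyReLU and its gradient, then list the ops TF created for it.
x = tf.placeholder(tf.float32, shape=(None, 4))
y = tf.nn.leaky_relu(x, alpha=0.3)
dy = tf.gradients(y, x)

for op in tf.get_default_graph().get_operations():
    if 'LeakyRelu' in op.name and 'grad' in op.name.lower():
        print(op.name)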

Upgrading to 1.13 solved it; I get good saliency maps now (and 1.14 surely should too).

I can't show you the graph right now, but the LeakyReluGrad op exists there now.

eyaler commented 3 years ago

@thibtld Thanks. I can confirm gradient_override_map now works for me with TF 1.15.