keisen / tf-keras-vis

Neural network visualization toolkit for tf.keras
https://keisen.github.io/tf-keras-vis-docs/
MIT License

Generating more than 4 Images at once #38

Closed · schdief06 closed this issue 3 years ago

schdief06 commented 3 years ago

Hi,

I want to generate the activation maximization for all 128 neurons of my output layer. But when generating more than 4 at once, I get the following error:

Traceback (most recent call last):
  File "", line 49, in <module>
    activation = activation_maximization(loss, seed_input=seed_input, callbacks=[Print(interval=50)], steps=512)
  File "/home/stefan/anaconda3/envs/blender/lib/python3.7/site-packages/tf_keras_vis/activation_maximization.py", line 114, in __call__
    optimizer.apply_gradients(zip(grads, seed_inputs))
  File "/home/stefan/anaconda3/envs/blender/lib/python3.7/site-packages/tensorflow_core/python/keras/optimizer_v2/optimizer_v2.py", line 426, in apply_gradients
    grads_and_vars = _filter_grads(grads_and_vars)
  File "/home/stefan/anaconda3/envs/blender/lib/python3.7/site-packages/tensorflow_core/python/keras/optimizer_v2/optimizer_v2.py", line 1026, in _filter_grads
    grads_and_vars = tuple(grads_and_vars)
  File "/home/stefan/anaconda3/envs/blender/lib/python3.7/site-packages/tf_keras_vis/activation_maximization.py", line 113, in <genexpr>
    grads = (K.l2_normalize(g, axis=tuple(range(len(g))[1:])) for g in grads)
  File "/home/stefan/anaconda3/envs/blender/lib/python3.7/site-packages/tensorflow_core/python/keras/backend.py", line 4697, in l2_normalize
    return nn.l2_normalize(x, axis=axis)
  File "/home/stefan/anaconda3/envs/blender/lib/python3.7/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/home/stefan/anaconda3/envs/blender/lib/python3.7/site-packages/tensorflow_core/python/ops/nn_impl.py", line 615, in l2_normalize
    return l2_normalize_v2(x, axis, epsilon, name)
  File "/home/stefan/anaconda3/envs/blender/lib/python3.7/site-packages/tensorflow_core/python/ops/nn_impl.py", line 642, in l2_normalize_v2
    square_sum = math_ops.reduce_sum(math_ops.square(x), axis, keepdims=True)
  File "/home/stefan/anaconda3/envs/blender/lib/python3.7/site-packages/tensorflow_core/python/util/dispatch.py", line 180, in wrapper
    return target(*args, **kwargs)
  File "/home/stefan/anaconda3/envs/blender/lib/python3.7/site-packages/tensorflow_core/python/ops/math_ops.py", line 1595, in reduce_sum
    _ReductionDims(input_tensor, axis))
  File "/home/stefan/anaconda3/envs/blender/lib/python3.7/site-packages/tensorflow_core/python/ops/math_ops.py", line 1606, in reduce_sum_with_dims
    gen_math_ops._sum(input_tensor, dims, keepdims, name=name))
  File "/home/stefan/anaconda3/envs/blender/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_math_ops.py", line 10163, in _sum
    input, axis, keep_dims=keep_dims, name=name, ctx=_ctx)
  File "/home/stefan/anaconda3/envs/blender/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_math_ops.py", line 10197, in _sum_eager_fallback
    ctx=ctx, name=name)
  File "/home/stefan/anaconda3/envs/blender/lib/python3.7/site-packages/tensorflow_core/python/eager/execute.py", line 67, in quick_execute
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: Invalid reduction dimension (4 for input with 4 dimension(s) [Op:Sum]

Here is my code:

import tensorflow as tf
from tf_keras_vis.activation_maximization import ActivationMaximization
# import paths as of tf-keras-vis 0.5.x; Print may live elsewhere in other versions
from tf_keras_vis.utils.callbacks import Print

# load the model (path is the location of my saved model)
model = tf.keras.models.load_model(path)

# number of neurons to visualize at once (5 here to reproduce the error; the full layer has 128)
neurons = 5

# find the code layer and set its activation to linear
def model_modifier(m):
    code = m.get_layer('code')
    code.activation = tf.keras.activations.linear

# each seed image in the batch maximizes a different unit of the code layer
def loss(output):
    return tuple(output[x, x] for x in range(neurons))  # (output[0, 0], output[1, 1], ..., output[neurons - 1, neurons - 1])

# create instance
activation_maximization = ActivationMaximization(model, model_modifier, clone=False)

# visualize: one random seed image per neuron
seed_input = tf.random.uniform((neurons, 224, 224, 3), 0, 255)
activation = activation_maximization(loss, seed_input=seed_input, callbacks=[Print(interval=50)], steps=512)

For values of neurons <= 4 it seems to work just fine. Thanks for your help!

keisen commented 3 years ago

Hi @schdief06 , thank you for reporting the error.

I believe it's a bug involving the normalize_gradient option of the activation_maximization function. Could you please try adding the normalize_gradient=False option when calling activation_maximization?
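
Applied to the snippet you posted, that would look like this (a sketch reusing your variable names; only the extra keyword argument changes):

activation = activation_maximization(loss,
                                     seed_input=seed_input,
                                     callbacks=[Print(interval=50)],
                                     steps=512,
                                     normalize_gradient=False)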

https://github.com/keisen/tf-keras-vis/blob/b420516ac29ddd9f3f5b964d53a6f3177241d014/tf_keras_vis/activation_maximization.py#L21
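
For context, the failure appears to come from how the reduction axes are built in the normalization step shown in the traceback (line 113 of activation_maximization.py): len(g) on a tensor returns the size of its first dimension, i.e. the batch size, not the rank. A minimal sketch of that computation, assuming a (5, 224, 224, 3) seed batch as in the report:

import tensorflow as tf

g = tf.zeros((5, 224, 224, 3))       # gradient with the same shape as the seed batch
axis = tuple(range(len(g))[1:])      # len(g) == 5 (batch size), so axis == (1, 2, 3, 4)
# tf.keras.backend.l2_normalize(g, axis=axis) would then reduce over axis 4,
# which does not exist for a 4-D tensor, matching the InvalidArgumentError above.
# With a batch of 4 or fewer, every axis index stays in range, so no error is raised.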

[Note!] We are going to remove the normalize_gradient option in the next version. So I'm sorry to bother you, but please remove the normalize_gradient=False argument when you update tf-keras-vis to the next version in your environment. The next version will be released this year.

Regards.

schdief06 commented 3 years ago

Hi @keisen,

after disabling normalize_gradient it works! Thank you!

keisen commented 3 years ago

For now, I disabled the option in ActivationMaximization and GradCAM in version 0.5.4.

Thanks!