fchollet / deep-learning-with-python-notebooks

Jupyter notebooks for the code samples of the book "Deep Learning with Python"
MIT License
18.4k stars 8.57k forks

Use of K.update_add leads to NoneType in K.gradients #43

Open kechan opened 6 years ago

kechan commented 6 years ago

I tried the code related to Deep Dream and ran into this warning:

WARNING:tensorflow:Variable += will be deprecated. Use variable.assign_add if you want assignment to the variable value or 'x = x + y' if you want a new python Tensor object. In [ ]:

This warning is triggered by this line: loss += coeff * K.sum(K.square(activation[:, 2: -2, 2: -2, :])) / scaling

So I tried to use K.update_add() instead, but then "grads = K.gradients(loss, dream)[0]" returns None. From what I found online, a non-differentiable "loss" can cause this, so I suspect .update_add() is somehow the culprit. When I switch back to "+=", K.gradients() returns the correct thing.

Is this a bug in Keras?

EricLee0000 commented 6 years ago

Hey

I would suggest assigning "coeff * K.sum(K.square(activation[:, 2: -2, 2: -2, :])) / scaling" to a variable first, and then writing "loss += variable".

Best, Eric


kechan commented 6 years ago

@EricLee0000

This is good for code readability and maintenance, but does it solve the real issue? This code was given as-is from Chollet's book.

jonvanw commented 4 years ago

The += operator has since been deprecated, so using the code as-is now produces an error rather than just a warning. It works if you use the x = x + y form suggested in the error/warning message, specifically: loss = loss + coeff * K.sum(K.square(activation[:, 2: -2, 2: -2, :])) / scaling

Note that there is a subtle semantic distinction between x += y and x = x + y: the former should update the existing object assigned to x in place, whereas the latter creates a new object and assigns it to x.
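That distinction can be illustrated in plain Python, independent of Keras. A minimal sketch using lists (which implement in-place addition via __iadd__, standing in for a mutable variable):

```python
# In-place update: += mutates the existing object, so every reference
# ("alias") to that object observes the change.
a = [1, 2]
alias = a
a += [3]
print(alias)  # [1, 2, 3] -- alias points to the same, mutated list

# Rebinding: x = x + y builds a NEW object and rebinds the name x.
# The old object is untouched, so other references still see it.
b = [1, 2]
alias_b = b
b = b + [3]
print(alias_b)  # [1, 2] -- alias_b still points to the original list
print(b)        # [1, 2, 3] -- b now points to a fresh list
```

In the Deep Dream code, loss = loss + ... keeps everything as ordinary graph tensors, which is why K.gradients continues to work with that form.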