Open efournie opened 5 years ago
I can reproduce the problem both on Windows with CUDA and on Linux with CPU.
Hi @efournie, unfortunately custom losses are not supported at the moment because we can't register them with the MXNet backend. However, you can always add yours to losses.py and import it in your script. Refer to this built-in loss as an example: https://github.com/awslabs/keras-apache-mxnet/blob/master/keras/losses.py#L77
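For reference, the losses in losses.py are plain functions of `(y_true, y_pred)`, and a custom loss added there follows the same shape. The sketch below is hypothetical and uses NumPy in place of `keras.backend` (usually imported as `K`) so it runs standalone; inside losses.py you would use `K.mean` and `K.square` instead:

```python
import numpy as np

def my_custom_mse(y_true, y_pred):
    # In keras/losses.py the equivalent would be:
    #   return K.mean(K.square(y_pred - y_true), axis=-1)
    # NumPy stands in for the Keras backend here so the sketch is runnable.
    return np.mean(np.square(y_pred - y_true), axis=-1)
```

After adding a function like this to losses.py, it can be referenced in `model.compile(loss=...)` the same way as the built-in losses.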
I have the same problem. I added my custom loss function to losses.py, but I still get the following error message:
ValueError: You created Module with Module(..., fixed_param_names=['loss/activation_1_loss/variable1']) but input with name 'loss/activation_1_loss/variable1' is not found in symbol.list_arguments(). Did you mean one of: /input_11 conv3d_1/kernel1 conv3d_1/bias1 batch_normalization_1/gamma1 batch_normalization_1/beta1 conv3d_2/kernel1 conv3d_2/bias1 batch_normalization_2/gamma1 batch_normalization_2/beta1 conv3d_3/kernel1 conv3d_3/bias1 batch_normalization_3/gamma1 batch_normalization_3/beta1 conv3d_4/kernel1 conv3d_4/bias1 batch_normalization_4/gamma1 batch_normalization_4/beta1 conv3d_5/kernel1 conv3d_5/bias1 batch_normalization_5/gamma1 batch_normalization_5/beta1 conv3d_6/kernel1 conv3d_6/bias1 batch_normalization_6/gamma1 batch_normalization_6/beta1 conv3d_7/kernel1 conv3d_7/bias1 batch_normalization_7/gamma1 batch_normalization_7/beta1 conv3d_8/kernel1 conv3d_8/bias1 batch_normalization_8/gamma1 batch_normalization_8/beta1 conv3d_9/kernel1 conv3d_9/bias1 batch_normalization_9/gamma1 batch_normalization_9/beta1 conv3d_10/kernel1 conv3d_10/bias1 batch_normalization_10/gamma1 batch_normalization_10/beta1 conv3d_11/kernel1 conv3d_11/bias1 batch_normalization_11/gamma1 batch_normalization_11/beta1 conv3d_12/kernel1 conv3d_12/bias1 batch_normalization_12/gamma1 batch_normalization_12/beta1 conv3d_13/kernel1 conv3d_13/bias1 batch_normalization_13/gamma1 batch_normalization_13/beta1 conv3d_14/kernel1 conv3d_14/bias1 batch_normalization_14/gamma1 batch_normalization_14/beta1 conv3d_15/kernel1 conv3d_15/bias1 batch_normalization_15/gamma1 batch_normalization_15/beta1 conv3d_16/kernel1 conv3d_16/bias1 conv3d_17/kernel1 conv3d_17/bias1 batch_normalization_16/gamma1 batch_normalization_16/beta1 conv3d_18/kernel1 conv3d_18/bias1 batch_normalization_17/gamma1 batch_normalization_17/beta1 conv3d_19/kernel1 conv3d_19/bias1 batch_normalization_18/gamma1 batch_normalization_18/beta1 conv3d_20/kernel1 conv3d_20/bias1 conv3d_21/kernel1 conv3d_21/bias1 batch_normalization_19/gamma1 batch_normalization_19/beta1 conv3d_22/kernel1 conv3d_22/bias1 batch_normalization_20/gamma1 batch_normalization_20/beta1 conv3d_23/kernel1 conv3d_23/bias1 batch_normalization_21/gamma1 batch_normalization_21/beta1 conv3d_24/kernel1 conv3d_24/bias1
@roywei can you please elaborate on your solution here? I tried adding it to the losses.py locally and am still running into the same issue.
Hello,
I am trying to use a mask in a custom loss function, but with the MXNet backend the program fails with the following error:
Switching to the tensorflow backend removes the error and the program can run as expected. The issue can be reproduced with this minimal example:
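The reporter's original minimal example is not included in the thread. For illustration only, a hypothetical sketch of this kind of masked custom loss is below; it uses NumPy in place of `keras.backend` so it runs standalone, and it treats zeros in `y_true` as masked positions (an assumption, not necessarily the reporter's masking rule):

```python
import numpy as np

def masked_mse(y_true, y_pred):
    # Build a mask from y_true: positions equal to zero are ignored.
    # In a Keras loss, np would be replaced by keras.backend (K) ops.
    mask = (y_true != 0).astype(y_pred.dtype)
    # Zero out masked positions, then normalise by the number of
    # unmasked elements rather than the full tensor size.
    sq = np.square(y_pred - y_true) * mask
    return np.sum(sq) / np.maximum(np.sum(mask), 1.0)
```

Element-wise multiplication by the mask inside the loss is exactly the pattern that fails under the MXNet backend (the mask becomes an extra symbol, `loss/.../variable1`, that the Module cannot find), while the TensorFlow backend handles it without complaint.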
I also tried using Multiply layers in the model definition to work around the issue, without success. Unfortunately, I can't pinpoint the exact cause of the problem in the Keras backend.