ghost opened this issue 5 years ago
I have narrowed this down to being an issue with `keras.utils.multi_gpu_model`. When this is set to `True`, the error appears.
@mxdilln: https://github.com/onnx/keras-onnx/issues is a better place to open issues related to Keras-to-ONNX model conversion.
Do you have a CuDNN layer? We don't support CuDNN layers.
When using `CuDNNLSTM()` on its own, it will convert to ONNX, but when wrapped inside `Bidirectional()` it will not. When the conversion fails, the error message says that the layer is not supported within `Bidirectional()`.
The error I raised this issue for appears when `keras.utils.multi_gpu_model` is set to `True`, regardless of whether I am using CuDNN layers or a `Bidirectional` layer.
Currently, `Bidirectional` is only supported with `LSTM`; other wrapped layers are not supported.
I get the following `ValueError` when converting a Keras model (trained on a GPU with CuDNN layers) to ONNX:

```
ValueError: Node 'sequential_1/dropout_1/cond/mul/y': Unknown input node '^sequential_1/dropout_1/cond/switch_t'
```
With the exact same architecture, data and environment (trained on a CPU with only CPU enabled layers) I don't get the error and the model successfully converts.
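Since the CPU-trained copy converts cleanly, one common workaround is to rebuild the architecture with plain `LSTM` layers configured to be weight-compatible with `CuDNNLSTM`, load the GPU-trained weights into that copy, and convert it instead. The sketch below assumes `tf.keras` and `keras2onnx`; the layer sizes and file name are illustrative, not taken from this issue:

```python
# Sketch of a workaround: rebuild the GPU-trained model with plain LSTM
# layers whose settings make them weight-compatible with CuDNNLSTM, then
# load the saved weights and convert the CPU copy. Sizes are illustrative.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Bidirectional, Dense

def build_cpu_model(timesteps=10, features=8, units=16):
    # An LSTM with tanh/sigmoid activations, no recurrent dropout, and
    # use_bias=True has the same weight layout as CuDNNLSTM, so weights
    # saved from the GPU model can be loaded directly.
    model = Sequential([
        Bidirectional(
            LSTM(units,
                 activation="tanh",
                 recurrent_activation="sigmoid",
                 recurrent_dropout=0.0,
                 unroll=False,
                 use_bias=True),
            input_shape=(timesteps, features)),
        Dense(1, activation="sigmoid"),
    ])
    return model

model = build_cpu_model()
# model.load_weights("gpu_trained.h5")  # hypothetical file with CuDNN weights
# import keras2onnx
# onnx_model = keras2onnx.convert_keras(model, model.name)
```

Whether this sidesteps the `multi_gpu_model` error as well is untested here; it addresses only the CuDNN-layer incompatibility.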
This is the model:
Are there any known compatibility issues between ONNX and GPU-trained models?