onnx / keras-onnx

Convert tf.keras/Keras models to ONNX
Apache License 2.0
381 stars · 109 forks

GPU CuDNN model not converting into onnx format: ValueError: Node 'sequential_1/dropout_1/cond/mul/y': Unknown input node '^sequential_1/dropout_1/cond/switch_t' #181

Open · ghost opened 5 years ago

ghost commented 5 years ago

I get the following ValueError when converting a keras model (trained on a GPU with CuDNN layers) to onnx:

ValueError: Node 'sequential_1/dropout_1/cond/mul/y': Unknown input node '^sequential_1/dropout_1/cond/switch_t'

With the exact same architecture, data and environment (trained on a CPU with only CPU enabled layers) I don't get the error and the model successfully converts.

This is the model:

def rnn_1(multi_gpu=True):
    """
    Schema for a cnn -> lstm neural network.
    :param multi_gpu: if True, wrap the model with keras.utils.multi_gpu_model
    :return: compiled Keras model
    """

    model = Sequential()
    model.add(embedding_layer)  # embedding_layer is defined elsewhere

    model.add(Bidirectional(LSTM(64, return_sequences=True)))
    model.add(BatchNormalization())

    model.add(Bidirectional(LSTM(64, return_sequences=True)))
    model.add(GlobalMaxPool1D())
    model.add(BatchNormalization())

    model.add(Dense(32, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(16, activation='relu'))
    model.add(Dense(len(primary_codes), activation='softmax'))  # primary_codes is defined elsewhere
    if multi_gpu:
        model = multi_gpu_model(model, gpus=2)

    model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])
    model.summary()  # summary() prints directly and returns None, so no print() needed
    return model

Are there any known compatibility issues with onnx and GPU-trained models?

ghost commented 5 years ago

I have narrowed this down to keras.utils.multi_gpu_model: the error appears only when the model is wrapped with it (i.e. when multi_gpu=True above).
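A workaround that is sometimes suggested for multi_gpu_model-related conversion failures (a sketch, not verified against this exact setup): since keras.utils.multi_gpu_model shares its weights with the single-GPU template model it wraps, you can train on the wrapper but run the ONNX conversion on the untouched template, avoiding the control-flow nodes the parallel wrapper introduces. The helper name convert_single_gpu_template is hypothetical; keras2onnx.convert_keras and keras2onnx.save_model are the library's documented entry points.

```python
def convert_single_gpu_template(template_model, output_path):
    """Convert the single-GPU template of a multi_gpu_model wrapper to ONNX.

    Assumes keras2onnx is installed. The wrapper returned by
    keras.utils.multi_gpu_model shares weights with the template model,
    so after training the wrapper, the template already holds the trained
    weights and can be converted directly.
    """
    import keras2onnx  # imported lazily so the sketch stays self-contained

    onnx_model = keras2onnx.convert_keras(template_model, template_model.name)
    keras2onnx.save_model(onnx_model, output_path)
    return onnx_model
```

During training you would call something like multi_gpu_model(template, gpus=2) and fit that wrapper, then pass the original template (not the wrapper) to this helper.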

askhade commented 5 years ago

@mxdilln : https://github.com/onnx/keras-onnx/issues is a better place to open issues related to Keras to Onnx model conversion.

jiafatom commented 5 years ago

Do you have a CuDNN layer? We don't support CuDNN layers.

ghost commented 5 years ago

When using CuDNNLSTM() on its own, the model converts to ONNX, but when it is wrapped inside Bidirectional(), the conversion fails, and the error message says the layer is not supported within Bidirectional().

The error I raised this issue for appears whenever the model is wrapped with keras.utils.multi_gpu_model, regardless of whether I am using CuDNN layers or a Bidirectional layer.

jiafatom commented 5 years ago

Currently Bidirectional only supports plain LSTM, so other layers (including CuDNNLSTM) are not supported inside it.
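Given that restriction, one possible path for a CuDNN-trained model is to rebuild the same architecture with plain LSTM layers and load the trained weights into it before converting, relying on Keras's ability to load CuDNNLSTM weights into an LSTM configured with recurrent_activation='sigmoid'. This is a sketch under assumptions: the function name, the input shape, and the layer sizes mirror the model in this issue but are not verified, and the embedding layer from the original model is omitted here.

```python
def rebuild_with_plain_lstm(weights_path, n_classes, units=64):
    """Rebuild the recurrent stack with plain LSTM layers and load weights
    saved from a CuDNNLSTM-trained model, so the result uses only layers
    that the converter's Bidirectional support covers.

    Assumes the weights file was saved from an architecturally identical
    model (minus the CuDNNLSTM/LSTM swap). Layer sizes and the input
    feature size below are placeholders mirroring this issue's model.
    """
    from keras.models import Sequential
    from keras.layers import (LSTM, Bidirectional, BatchNormalization,
                              Dense, Dropout, GlobalMaxPool1D)

    def plain_lstm():
        # recurrent_activation='sigmoid' makes the weight layout
        # compatible with CuDNNLSTM when loading saved weights
        return LSTM(units, return_sequences=True,
                    recurrent_activation='sigmoid')

    model = Sequential()
    model.add(Bidirectional(plain_lstm(),
                            input_shape=(None, 128)))  # 128 = placeholder feature size
    model.add(BatchNormalization())
    model.add(Bidirectional(plain_lstm()))
    model.add(GlobalMaxPool1D())
    model.add(BatchNormalization())
    model.add(Dense(32, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(16, activation='relu'))
    model.add(Dense(n_classes, activation='softmax'))
    model.load_weights(weights_path)
    return model
```

The rebuilt model can then be passed to keras2onnx.convert_keras as usual, since it no longer contains CuDNN-specific layers.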