microsoft / MMdnn

MMdnn is a set of tools to help users interoperate among different deep learning frameworks, e.g. for model conversion and visualization. Convert models between Caffe, Keras, MXNet, TensorFlow, CNTK, PyTorch, ONNX, and CoreML.
MIT License

Handling multiple inputs in keras #923

Closed sRassmann closed 2 months ago

sRassmann commented 3 years ago

Platform (like ubuntu 16.04/win10): Ubuntu 18.04 (Google Colab)

Python version: 3.7.10

Source framework with version (like Tensorflow 1.4.1 with GPU): Keras 2.5 with Tensorflow 2.0 GPU Backend

Pre-trained model path (webpath or webdisk path): relevant model config JSON

Destination framework with version (like CNTK 2.3 with GPU): PyTorch 1.6.0 GPU

I would like to convert an existing (trained) model from Keras/TF to PyTorch. However, the model uses two inputs (an image and an additional boolean value) and is therefore implemented as a keras.engine.functional.Functional model. Apparently, MMdnn cannot handle it:

```
$ mmconvert -sf keras -iw output/model.h5 -df pytorch -om output/model.pth
$ # and also with
$ mmtoir -f keras -d output/dbam -n data/models/tf/dbam.json
Traceback (most recent call last):
  File "/usr/local/bin/mmtoir", line 8, in <module>
    sys.exit(_main())
  File "/usr/local/lib/python3.7/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 197, in _main
    ret = _convert(args)
  File "/usr/local/lib/python3.7/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 46, in _convert
    parser = Keras2Parser(model)
  File "/usr/local/lib/python3.7/dist-packages/mmdnn/conversion/keras/keras2_parser.py", line 135, in __init__
    self.keras_graph = Keras2Graph(model)
  File "/usr/local/lib/python3.7/dist-packages/mmdnn/conversion/keras/keras2_graph.py", line 37, in __init__
    raise TypeError("Keras layer of type %s is not supported." % type(model))
TypeError: Keras layer of type <class 'keras.engine.functional.Functional'> is not supported.
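For context, the traceback shows the failure comes from a type guard in `keras2_graph.py`: the parser accepts only the model classes it knows about, and TF2's `keras.engine.functional.Functional` is not among them. A minimal pure-Python sketch of that kind of guard (the class names and allow-list here are illustrative stand-ins, not MMdnn's actual code):

```python
# Stand-ins for the Keras classes involved (illustrative only).
class Model:            # the model type an older parser expects
    pass

class Functional:       # stand-in for keras.engine.functional.Functional
    pass

SUPPORTED = (Model,)    # the parser's allow-list of model types

def build_graph(model):
    """Mimics the guard raising in Keras2Graph.__init__ (line 37 above)."""
    if not isinstance(model, SUPPORTED):
        raise TypeError("Keras layer of type %s is not supported." % type(model))
    return "graph"

print(build_graph(Model()))   # graph
try:
    build_graph(Functional())
except TypeError as err:
    print(err)                # ... <class '__main__.Functional'> is not supported.
```

So the rejection happens before any layers are parsed; the model's structure (including the two inputs) is never even inspected.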

Is there an alternative way to attempt the conversion that I missed? If not, is there any workaround for this?

Here is the code to generate the keras model:

```python
from tensorflow.keras import optimizers
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import (
    Concatenate, Dense, Dropout, GlobalAveragePooling2D, Input,
)
from tensorflow.keras.models import Model

# image_shape and metrics are defined elsewhere
img = Input(shape=image_shape)
gender = Input(shape=(1,))
cnn_vec = InceptionV3(input_shape=image_shape, include_top=False, weights=None)(img)
cnn_vec = GlobalAveragePooling2D()(cnn_vec)
cnn_vec = Dropout(0.2)(cnn_vec)
gender_vec = Dense(32, activation="relu")(gender)
features = Concatenate(axis=-1)([cnn_vec, gender_vec])
dense_layer = Dense(1024, activation="relu")(features)
dense_layer = Dropout(0.2)(dense_layer)
dense_layer = Dense(1024, activation="relu")(dense_layer)
dense_layer = Dropout(0.2)(dense_layer)
dense_layer = Dense(512, activation="relu")(dense_layer)
dense_layer = Dropout(0.2)(dense_layer)
dense_layer = Dense(512, activation="relu")(dense_layer)
dense_layer = Dropout(0.2)(dense_layer)
output_layer = Dense(1, activation="linear")(dense_layer)
model = Model(inputs=[img, gender], outputs=output_layer)

adam = optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)

model.compile(optimizer=adam, loss="mse", metrics=metrics)
```

Thanks for your help.