Closed: xDuck closed this issue 3 years ago
I also tried keras2onnx but couldn't even get the model into ONNX format.
import numpy as np
import keras2onnx
import onnxruntime
import tensorflow as tf
from tensorflow_tts.inference import AutoConfig
from tensorflow_tts.inference import TFAutoModel
from tensorflow_tts.inference import AutoProcessor
processor = AutoProcessor.from_pretrained(
    pretrained_path="tensorflow_tts/processor/pretrained/ljspeech_mapper.json"
)
input_text = "hello world."
input_ids = processor.text_to_sequence(input_text)
config = AutoConfig.from_pretrained("examples/fastspeech2/conf/fastspeech2.v1.yaml")
fastspeech2 = TFAutoModel.from_pretrained(
    config=config,
    pretrained_path="models/model-150000.h5",
    is_build=True,
    name="fastspeech2",
)
# fastspeech2.load_weights("models/model-150000.h5")
mel_before, mel_after, duration_outputs, _, _ = fastspeech2.inference(
    input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
    speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32),
    speed_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
    f0_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
    energy_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
)
# convert to onnx model
onnx_model = keras2onnx.convert_keras(fastspeech2, fastspeech2.name, target_opset=11)
temp_model_file = 'keras_model.onnx'
keras2onnx.save_model(onnx_model, temp_model_file)
Errors:
...
2021-02-18 16:05:14.346891: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:818] function_optimizer: Graph size after: 2662 nodes (181), 3168 edges (282), time = 29.482ms.
2021-02-18 16:05:14.346910: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:818] function_optimizer: function_optimizer did nothing. time = 1.485ms.
WARN: No corresponding ONNX op matches the tf.op node fastspeech2/length_regulator/while/exit/_36 of type Exit
The generated ONNX model needs run with the custom op supports.
WARN: No corresponding ONNX op matches the tf.op node fastspeech2/length_regulator/while/fastspeech2/length_regulator/zeros_1_switch/_26 of type Switch
The generated ONNX model needs run with the custom op supports.
WARN: No corresponding ONNX op matches the tf.op node fastspeech2/length_regulator/while/merge/_16 of type Merge
The generated ONNX model needs run with the custom op supports.
WARN: No corresponding ONNX op matches the tf.op node fastspeech2/length_regulator/while/LoopCond/_20 of type LoopCond
The generated ONNX model needs run with the custom op supports.
WARN: No corresponding ONNX op matches the tf.op node fastspeech2/length_regulator/while/enter/_7 of type Enter
The generated ONNX model needs run with the custom op supports.
WARN: No corresponding ONNX op matches the tf.op node fastspeech2/length_regulator/while/next_iteration/_46 of type NextIteration
The generated ONNX model needs run with the custom op supports.
WARN: No corresponding ONNX op matches the tf.op node fastspeech2/length_regulator/while/body/_1/fastspeech2/length_regulator/while/Repeat/BroadcastTo of type BroadcastTo
The generated ONNX model needs run with the custom op supports.
Traceback (most recent call last):
File "fastspeech2_to_keras.py", line 38, in <module>
onnx_model = keras2onnx.convert_keras(fastspeech2, fastspeech2.name, target_opset=11)
File "/root/git/TensorFlowTTS/env/lib64/python3.6/site-packages/keras2onnx/main.py", line 83, in convert_keras
return convert_topology(topology, name, doc_string, target_opset, channel_first_inputs)
File "/root/git/TensorFlowTTS/env/lib64/python3.6/site-packages/keras2onnx/topology.py", line 322, in convert_topology
cvt(scope, operator, container)
File "/root/git/TensorFlowTTS/env/lib64/python3.6/site-packages/keras2onnx/_builtin.py", line 690, in convert_tf_expand_dims
rank = len(_cal_tensor_shape(node.inputs[0]))
File "/root/git/TensorFlowTTS/env/lib64/python3.6/site-packages/keras2onnx/_tf_utils.py", line 67, in cal_tensor_shape
if len(tensor.shape) > 0 and hasattr(tensor.shape[0], 'value'):
File "/root/git/TensorFlowTTS/env/lib64/python3.6/site-packages/tensorflow/python/framework/tensor_shape.py", line 846, in __len__
raise ValueError("Cannot take the length of shape with unknown rank.")
ValueError: Cannot take the length of shape with unknown rank.
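(For context, the final ValueError is TensorFlow refusing to take len() of a tensor shape whose rank is unknown, which is what keras2onnx's shape calculation hits on the dynamic while-loop tensors; a minimal reproduction, assuming only TensorFlow:)

```python
import tensorflow as tf

# A TensorShape built from None has unknown rank, like the tensors inside
# the length_regulator while loop; len() on it raises the same ValueError.
shape = tf.TensorShape(None)
assert shape.rank is None
try:
    len(shape)
except ValueError as err:
    message = str(err)
print(message)
```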
Building the model with the TFLite param enable_tflite_convertible=True
seems to have done the trick, sorry for the confusion.
I am trying to convert FastSpeech2 to ONNX with tf2onnx, and when I run the model I get an error on an Unsqueeze layer. Does anyone have insight on this? My steps:
1. Convert FastSpeech2 Keras -> TensorFlow
2. TensorFlow to ONNX (tried explicitly setting input shapes, which didn't seem to matter; also varied the opset)
3. Run in onnxruntime
The error I get is:
When inspecting the model in Netron, here is that Unsqueeze step to show where it sits in the model for reference (red highlight, bottom right).