keithito / tacotron

A TensorFlow implementation of Google's Tacotron speech synthesis with pre-trained model (unofficial)
MIT License
2.94k stars 965 forks

How to export the model with saved_model? #291

Open xiaoyangnihao opened 5 years ago

xiaoyangnihao commented 5 years ago

I want to deploy this model with TensorFlow Serving, but a SavedModel is needed. It seems complex and difficult for me to figure out the raw inputs and final outputs of the model. I think that for TensorFlow Serving the input should be the text and the output should be the waveform; is that right? I would really appreciate it if anyone could help me figure this out and export the model with saved_model.
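For reference, in this repo the graph does not take raw text directly: synthesizer.py first converts the text to a sequence of character IDs with `text_to_sequence`, feeds that to the model, and then inverts the predicted linear spectrogram to a waveform with Griffin-Lim. So a served model would normally receive the ID sequence (the text preprocessing stays on the client) and return either the spectrogram or, if the inversion op is added to the graph, the waveform. A minimal sketch of that client-side preprocessing, assuming the default `english_cleaners` setting from hparams:

```python
# Sketch of the client-side preprocessing (mirrors synthesizer.py in this repo);
# 'english_cleaners' is an assumption -- use whatever hparams.cleaners is set to.
import numpy as np
from text import text_to_sequence

seq = text_to_sequence('Hello, world.', ['english_cleaners'])   # list of character IDs
inputs = np.asarray([seq], dtype=np.int32)                       # shape [1, text_len]
input_lengths = np.asarray([len(seq)], dtype=np.int32)           # shape [1]
# These two arrays are what the model's placeholders expect; the graph's output
# is a linear spectrogram, which audio.inv_spectrogram turns into audio samples.
```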

xiaoyangnihao commented 4 years ago

The following is my saved_model export code. It does export a SavedModel, but when I serve it with the TensorFlow Serving REST API, errors happen. I'm not sure whether it is right or not, and I haven't deployed it successfully yet. Can anyone help? Thanks a lot.

```python
import os

import numpy as np
import tensorflow as tf

from hparams import hparams
from models import create_model
from util import audio

model_name = 'tacotron'
model_path = 'model'
model_version = 100
base_path = '/home/XXX/werther2/'
checkpoint_path = '/home/XXX/werther2/logs-tacotron/model.ckpt-2000'

with tf.get_default_graph().as_default():
    # Define the inputs, outputs and computation graph
    sequence = tf.placeholder(tf.int32, [None, None], 'sequence')
    sequence_len = tf.placeholder(tf.int32, [None], 'sequence_len')
    wave = tf.placeholder(tf.float32, [None], 'wave')  # note: overwritten by the graph output below
    with tf.variable_scope('model') as scope:
        model = create_model(model_name, hparams)
        model.initialize(sequence, sequence_len)
        wave = audio.inv_spectrogram_tensorflow(model.linear_outputs[0])

    saver = tf.train.Saver()

    # Load the trained .ckpt checkpoint
    print('Loading checkpoint: %s' % checkpoint_path)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        saver.restore(sess, checkpoint_path)

        # Define the export parameters
        export_path = os.path.join(base_path, model_path, str(model_version))
        print('Exporting trained model to', export_path)
        legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
        builder = tf.saved_model.builder.SavedModelBuilder(export_path)

        # Define the input and output tensor info
        tensor_info_input = tf.saved_model.utils.build_tensor_info(sequence)
        tensor_info_input_len = tf.saved_model.utils.build_tensor_info(sequence_len)
        tensor_info_output = tf.saved_model.utils.build_tensor_info(wave)

        # Build the prediction signature
        prediction_signature = (
            tf.saved_model.signature_def_utils.build_signature_def(
                inputs={'sequence': tensor_info_input,
                        'sequence_len': tensor_info_input_len},
                outputs={'wav_out': tensor_info_output},
                method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))

        builder.add_meta_graph_and_variables(
            sess,
            [tf.saved_model.tag_constants.SERVING],
            signature_def_map={'predict': prediction_signature},
            legacy_init_op=legacy_init_op)

        # Export the model
        builder.save(as_text=True)
        print('Done exporting!')
```
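One likely cause of the REST errors: the signature above is registered under the key `'predict'`, not `'serving_default'`, so the request body has to name it explicitly (or the export should use `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY` as the map key). A rough client sketch, assuming the model is served under the name `tacotron` on the default REST port 8501 and that the text has already been converted to character IDs as above:

```python
# Hypothetical REST client; the host, port and model name are assumptions.
import requests
from text import text_to_sequence

seq = text_to_sequence('Hello, world.', ['english_cleaners'])
payload = {
    'signature_name': 'predict',        # must match the key in signature_def_map
    'inputs': {
        'sequence': [seq],              # shape [1, text_len]
        'sequence_len': [len(seq)],     # shape [1]
    },
}
resp = requests.post('http://localhost:8501/v1/models/tacotron:predict', json=payload)
resp.raise_for_status()
wav = resp.json()['outputs']            # float samples from the 'wav_out' tensor
```

Before serving, `saved_model_cli show --dir <export_path> --all` is a quick way to confirm the signature names and tensor shapes that actually ended up in the export.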

gxmdavid commented 4 years ago

Did you have any success with this? I am attempting to do the same right now and encountering similar issues.

berkaycinci commented 3 years ago

Did you find a solution? I am trying to do the same thing.