TensorSpeech / TensorFlowTTS

:stuck_out_tongue_closed_eyes: TensorFlowTTS: Real-Time State-of-the-art Speech Synthesis for TensorFlow 2 (supports English, French, Korean, Chinese, and German; easy to adapt to other languages)
https://tensorspeech.github.io/TensorFlowTTS/
Apache License 2.0

[question] quantization option for tacotron2 #346

Closed fly2mun closed 3 years ago

fly2mun commented 3 years ago

The default seems to be 8-bit. I want to use it in 16-bit, but I am curious whether it works well. Is it possible? Do you have a support plan?

dathudeptrai commented 3 years ago

You mean TFLite, @fly2mun? If yes, you can add this code:

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]

before converter.convert()
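
Putting it together, a minimal sketch of float16 post-training quantization (the concrete-function name follows this thread; the output filename is an assumption):

import tensorflow as tf

# `tacotron2_concrete_function` is assumed to have been traced from the
# Tacotron2 model beforehand, as in the TensorFlowTTS TFLite Colab.
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [tacotron2_concrete_function])

# Enable post-training quantization and allow float16 weights; without
# supported_types, Optimize.DEFAULT quantizes weights to 8-bit instead.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]

tflite_model = converter.convert()
with open("tacotron2_fp16.tflite", "wb") as f:  # assumed filename
    f.write(tflite_model)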

fly2mun commented 3 years ago

When I changed it as follows, I ran into a problem. I used the default Tacotron2 .h5 model, and it worked fine before changing the code.

converter = tf.lite.TFLiteConverter.from_concrete_functions([tacotron2_concrete_function])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]

tflite_model = converter.convert()

--> error

During handling of the above exception, another exception occurred:

ConverterError Traceback (most recent call last)

in <module>
      8 converter.target_spec.supported_types = [tf.float16]
      9
---> 10 tflite_model = converter.convert()

~/anaconda3/envs/tts/lib/python3.7/site-packages/tensorflow/lite/python/lite.py in convert(self)
   1074         Invalid quantization parameters.
   1075     """
-> 1076     return super(TFLiteConverterV2, self).convert()
   1077
   1078

~/anaconda3/envs/tts/lib/python3.7/site-packages/tensorflow/lite/python/lite.py in convert(self)
    898
    899     return super(TFLiteFrozenGraphConverterV2,
--> 900                   self).convert(graph_def, input_tensors, output_tensors)
    901
    902

~/anaconda3/envs/tts/lib/python3.7/site-packages/tensorflow/lite/python/lite.py in convert(self, graph_def, input_tensors, output_tensors)
    631         input_tensors=input_tensors,
    632         output_tensors=output_tensors,
--> 633         **converter_kwargs)
    634
    635     calibrate_and_quantize, flags = quant_mode.quantizer_flags(

~/anaconda3/envs/tts/lib/python3.7/site-packages/tensorflow/lite/python/convert.py in toco_convert_impl(input_data, input_tensors, output_tensors, enable_mlir_converter, *args, **kwargs)
    572         input_data.SerializeToString(),
    573         debug_info_str=debug_info_str,
--> 574         enable_mlir_converter=enable_mlir_converter)
    575     return data
    576

~/anaconda3/envs/tts/lib/python3.7/site-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    200     return model_str
    201   except Exception as e:
--> 202     raise ConverterError(str(e))
    203
    204   if distutils.spawn.find_executable(_toco_from_proto_bin) is None:
fly2mun commented 3 years ago

Please see the updated question. Thanks, Moon.

dathudeptrai commented 3 years ago

> Please see the updated question. Thanks, Moon.

Can you share a Colab that reproduces the bug?

fly2mun commented 3 years ago

Hello, thanks for your cooperation. This is the log in Colab.

modified code ---- start !!

converter = tf.lite.TFLiteConverter.from_concrete_functions([tacotron2_concrete_function])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()

modified code ---- end !!

If I add only the two lines below, there is no error, but the inference result (the mel spectrogram) is nothing:

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]

During handling of the above exception, another exception occurred:

ConverterError Traceback (most recent call last)

/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    200     return model_str
    201   except Exception as e:
--> 202     raise ConverterError(str(e))
    203
    204   if distutils.spawn.find_executable(_toco_from_proto_bin) is None:

dathudeptrai commented 3 years ago

@fly2mun I need a Colab :)))

fly2mun commented 3 years ago

@dathudeptrai

This is from running the Colab shared on the TensorFlowTTS site:

> 2020/07/05 Support Convert Tacotron-2, FastSpeech to Tflite. Pls see the colab. Thank @jaeyoo from the TFlite team for his support.

If you open and run it, you can reproduce the error.

https://colab.research.google.com/drive/1HudLLpT9CQdh2k04c06bHUwLubhGTWxA?usp=sharing

Thanks, Moon.

dathudeptrai commented 3 years ago

@fly2mun Okay, I will fix it.

dathudeptrai commented 3 years ago

@fly2mun I fixed it :)), use TF 2.3.1 :D
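
For reference, a Colab cell to pin that TensorFlow version (version number from this thread; the runtime usually needs a restart after reinstalling):

!pip install tensorflow==2.3.1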

fly2mun commented 3 years ago

@dathudeptrai I got the same result (error) using the TF 2.3.1 CPU version. Could you please share the Colab you used?

Thanks. Moon.

dathudeptrai commented 3 years ago

@fly2mun Replacing tf.float16 with tf.float32 fixes your problem.
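
That is, in the snippet above only the supported_types line changes:

converter.target_spec.supported_types = [tf.float32]

This presumably gives up the size savings that float16 quantization was meant to provide, since the weights stay in 32-bit.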

dathudeptrai commented 3 years ago

FYI @jaeyoo: it seems TFLite float16 yields NaN output for all my models :))). I also checked on real Android devices.
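
For anyone hitting the same issue, a minimal smoke test for NaN outputs (the model path and zero-filled dummy inputs are assumptions for this sketch; real Tacotron2 inputs come from the TensorFlowTTS processor):

import numpy as np
import tensorflow as tf

# Load the converted model (path is an assumption).
interpreter = tf.lite.Interpreter(model_path="tacotron2_fp16.tflite")
interpreter.allocate_tensors()

# Feed zeros of each input's declared dtype/shape as a generic smoke test.
for detail in interpreter.get_input_details():
    shape = [max(dim, 1) for dim in detail["shape"]]
    interpreter.set_tensor(detail["index"], np.zeros(shape, dtype=detail["dtype"]))
interpreter.invoke()

# Check every output tensor (e.g. the mel spectrogram) for NaN values;
# cast to float32 so integer outputs do not trip np.isnan.
for detail in interpreter.get_output_details():
    out = interpreter.get_tensor(detail["index"])
    print(detail["name"], "contains NaN:", bool(np.isnan(out.astype(np.float32)).any()))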

stale[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.