Closed — fly2mun closed this issue 3 years ago
You mean TFLite, @fly2mun? If yes, you can add this code:
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
before converter.convert()
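For context, the full float16 post-training quantization setup looks like the sketch below. This is a minimal stand-alone example: the tiny Keras model is a hypothetical stand-in for the Tacotron 2 concrete function used in this thread, while the converter calls (`optimizations`, `target_spec.supported_types`, `convert`) are the TF 2.x TFLite API:

```python
import tensorflow as tf

# Tiny stand-in model (hypothetical); in the thread this would be the
# Tacotron 2 concrete function instead.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(4),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Both lines must be set before calling convert():
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # enable post-training quantization
converter.target_spec.supported_types = [tf.float16]   # quantize weights to float16
tflite_model = converter.convert()                     # serialized .tflite flatbuffer (bytes)
print(type(tflite_model).__name__)
```

With `Optimize.DEFAULT` alone you get 8-bit dynamic-range quantization of the weights; adding `tf.float16` to `supported_types` switches the weight storage to float16 instead.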
When I changed it as follows, a problem occurred. I used the default Tacotron 2 h5, and it worked fine before changing the code.
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [tacotron2_concrete_function]
)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()
--> error
During handling of the above exception, another exception occurred:
ConverterError Traceback (most recent call last)
Please update the question. Thanks, Moon.
Can you share a Colab that reproduces the bug?
Hello, and thanks for your cooperation.
This is the log in Colab.
modify code ---- start !!
converter = tf.lite.TFLiteConverter.from_concrete_functions(
[tacotron2_concrete_function]
)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()
modify code ---- end !!
If I add only the two lines below, there is no error, but the inference result (the mel-spectrogram) is empty:
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
During handling of the above exception, another exception occurred:
ConverterError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    200       return model_str
    201     except Exception as e:
--> 202       raise ConverterError(str(e))
    203
    204   if distutils.spawn.find_executable(_toco_from_proto_bin) is None:
@fly2mun I need the Colab :)))
@dathudeptrai
This is a run of the Colab shared on the TensorFlowTTS site:
2020/07/05 Support Convert Tacotron-2, FastSpeech to Tflite. Pls see the colab. Thank @jaeyoo from the TFlite team for his support.
If you open and run it, you can reproduce the issue.
https://colab.research.google.com/drive/1HudLLpT9CQdh2k04c06bHUwLubhGTWxA?usp=sharing
thanks. Moon.
@fly2mun okay, i will fix it
@fly2mun I fixed it :)), use tf 2.3.1 :D
@dathudeptrai I got the same result (error) using the tf 2.3.1 CPU version. Could you please tell me which Colab you used?
Thanks. Moon.
@fly2mun replacing tf.float16 -> tf.float32 fixes your problem.
FYI @jaeyoo: TFLite float16 seems to yield NaN output for all my models :))). I also checked on real Android devices.
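One way to check for the NaN issue described above is to run the converted model through the TFLite `Interpreter` and verify every output value is finite. A minimal sketch, again using a hypothetical tiny model in place of Tacotron 2:

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model (hypothetical); the real model in this thread
# is Tacotron 2.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(4),
])
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()

# Run the converted model once and check the output for NaN/Inf.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.ones(inp["shape"], dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
print("all finite:", bool(np.isfinite(result).all()))
```

If this prints `all finite: False` for your model, the float16 weights are producing NaN/Inf on-device, matching the behavior reported above.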
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
The default seems to be 8-bit. I want to use 16-bit, but I am curious whether it works well. Is it possible? Do you have a plan to support it?