PINTO0309 / tflite2tensorflow

Generate saved_model, tfjs, tf-trt, EdgeTPU, CoreML, quantized tflite, ONNX, OpenVINO, Myriad Inference Engine blob and .pb from .tflite. Supports building environments with Docker, with direct access to the host PC's GUI and camera to verify operation. NVIDIA GPU (dGPU) and Intel iHD GPU (iGPU) support. Supports dequantization of INT8-quantized models.
https://qiita.com/PINTO
MIT License

The UNIDIRECTIONAL_SEQUENCE_LSTM layer is not yet implemented. #14

Open peiwenhuang27 opened 3 years ago

peiwenhuang27 commented 3 years ago

OS you are using: macOS 11.4

Version of TensorFlow: v2.5.0

Environment: Docker

Under TF 2.5.0, I converted my pre-trained model from saved_model to tflite.

Afterwards, inside the Docker container, when I was converting this tflite model to pb format using tflite2tensorflow, the following error occurred:

ERROR: The UNIDIRECTIONAL_SEQUENCE_LSTM layer is not yet implemented.

(In this experiment I did not apply quantization/optimization, but I later plan to use tflite to quantize the model and save it as .tflite, which is why I did not convert saved_model directly to pb.)
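For reference, the saved_model → INT8-quantized tflite step described above can be sketched as follows. This is a minimal self-contained example using a toy stand-in model with made-up shapes and file names, not the reporter's actual model; a full-integer post-training quantization needs a representative dataset for calibration:

```python
import numpy as np
import tensorflow as tf

# Toy stand-in model (placeholder for the reporter's pre-trained model).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])
tf.saved_model.save(model, "toy_saved_model")

# Representative dataset: the converter runs these samples through the
# model to calibrate activation ranges for quantization.
def representative_dataset():
    for _ in range(10):
        yield [np.random.rand(1, 28, 28).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("toy_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
tflite_model = converter.convert()

with open("toy_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

This produces a quantized .tflite that inserts QUANTIZE/DEQUANTIZE ops around the graph, which is the flavor of model the rest of this thread discusses.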

PINTO0309 commented 3 years ago

In fact, I tried to implement that operation a month ago, but there were not enough sample models to create a good conversion program. To the extent possible, could you provide the following resources? The minimum amount of information you are willing to disclose is fine.

  1. Source code for building the LSTM model.
  2. saved_model
  3. tflite file converted from saved_model
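For anyone else hitting this error, a minimal Keras model that emits the fused UNIDIRECTIONAL_SEQUENCE_LSTM op when converted can be sketched as below. This is an illustrative toy model with arbitrary shapes and file names (not the reporter's attached model); `tf.keras.layers.LSTM` is lowered to the fused TFLite op by the converter in TF 2.3+:

```python
import tensorflow as tf

# Minimal LSTM model; the Keras LSTM layer is converted into the fused
# UNIDIRECTIONAL_SEQUENCE_LSTM TFLite builtin op.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28), batch_size=1),
    tf.keras.layers.LSTM(20),
    tf.keras.layers.Dense(10, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("lstm.tflite", "wb") as f:
    f.write(tflite_model)

# Sanity-check that the flatbuffer loads.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
```

Feeding the resulting lstm.tflite to tflite2tensorflow reproduces the "not yet implemented" error reported above.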

I'm having trouble with TFLite's UNIDIRECTIONAL_SEQUENCE_LSTM because it is very difficult to connect it to TensorFlow's standard operations.

(Screenshot of the operator graph, 2021-07-14)

Thank you for your help.

peiwenhuang27 commented 3 years ago

Hi, sorry for the late reply. I have attached a zip file of my models (only initialized, not trained) and the source code; let me know if there is any problem with it! By the way, I noticed that the Quantize layer from tflite is also not yet implemented. Should I provide some samples for that as well?

Thank you!

resources.zip

PINTO0309 commented 3 years ago

Thank you! I'm very busy with my day job, so I'll examine it carefully when I have time.

> By the way, I noticed that Quantize layer from tflite is also not yet implemented. Should I also provide some samples for that as well?

I am aware of this point as well. You do not need to provide resources, as I already have a large number of samples and know that I can technically handle it. If you are in a hurry to convert your Quantize layer, you can try the following tool: https://github.com/onnx/tensorflow-onnx

```shell
$ python -m tf2onnx.convert \
    --opset 11 \
    --tflite int8_quantized_tflite_xxxx.tflite \
    --output model.onnx \
    --dequantize
```