peiwenhuang27 opened 3 years ago
In fact, I tried to implement that operation a month ago, but I did not have enough sample models to build a good conversion routine. To the extent possible, could you provide the following resources? The minimum amount of information you are willing to disclose is fine.
I'm having trouble with TFLite's UNIDIRECTIONAL_SEQUENCE_LSTM because it is very difficult to connect it to TensorFlow's standard operations.
Thank you for your help.
Hi, sorry for the late reply. I have attached a zip file of my models (only initialized, without training) and source code; let me know if there is any problem with it! By the way, I noticed that the Quantize layer from tflite is also not yet implemented. Should I provide some samples for that as well?
Thank you!
Thank you! I'm very busy with my day job, so I'll examine it carefully when I have time.
> By the way, I noticed that the Quantize layer from tflite is also not yet implemented. Should I provide some samples for that as well?
I am aware of this point as well. You do not need to provide resources, as I already have a large number of samples and know that I can handle it technically. If you are in a hurry to convert your Quantize layer, you can try the following tool: https://github.com/onnx/tensorflow-onnx
```
$ python -m tf2onnx.convert \
  --opset 11 \
  --tflite int8_quantized_tflite_xxxx.tflite \
  --output model.onnx \
  --dequantize
```
OS you are using: macOS 11.4
Version of TensorFlow: v2.5.0
Environment: Docker
Under tf 2.5.0, I converted my pre-trained model from saved_model to tflite. Afterwards, in a Docker container, when I was converting this tflite model to pb format using tflite2tensorflow, the following error occurred. (In this experiment, I did not perform quantization/optimization, but later on I plan to use tflite to quantize my model and save it as .tflite, which is why I did not directly convert saved_model to pb.)
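For reference, the saved_model-to-tflite step described above can be sketched as follows. The paths and the tiny Keras model are placeholders standing in for the actual pre-trained model, and no quantization/optimization flags are set, matching the experiment described here:

```python
import tensorflow as tf

# Placeholder model standing in for the pre-trained one.
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])
model.save("saved_model")  # exports in the SavedModel format

# saved_model -> tflite with no quantization/optimization settings.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```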