google-ai-edge / mediapipe

Cross-platform, customizable ML solutions for live and streaming media.
https://ai.google.dev/edge/mediapipe
Apache License 2.0
27.83k stars 5.18k forks

Could you provide the original tflite model before quantization? #2016

Closed jizhu1023 closed 3 years ago

jizhu1023 commented 3 years ago

Hi @jiuqiant @chuoling ,

MediaPipe is really efficient and powerful for developing mobile AI applications. Many thanks for your great work! We recently tested the pose detection and landmark models released just a few days ago, and they performed very well for our application. The problem is that the released models are quantized, so it is not easy to refine the network structure and retrain them for our application. Could you send me the original tflite models (before quantization) for pose detection and landmarks? Thanks in advance : ) My email: jizhu1023@gmail.com

jizhu1023 commented 3 years ago

Hi @ivan-grishchenko , any comment?

sgowroji commented 3 years ago

Hi @jizhu1023, please take a look at the GitHub code for reference. @chuoling Do we have anything like this that we can share?

PINTO0309 commented 3 years ago

There are two ways to convert a Float16 or INT8 quantization model back to Float32.

  1. tensorflow-onnx https://github.com/onnx/tensorflow-onnx
     Supports: Float16 / INT8; tflite -> ONNX
  2. tflite2tensorflow https://github.com/PINTO0309/tflite2tensorflow
     Supports: Float16; tflite -> ONNX, tflite Float32/16/INT8, EdgeTPU, TFJS, TF-TRT, OpenVINO, Myriad Blob, CoreML

However, the problem is that even though both methods can generate a saved_model and an ONNX file, they cannot restore the model to a state in which it can be retrained. It is just a model format conversion.
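For reference, the tensorflow-onnx route above can be invoked from the command line roughly as follows. This is a sketch, not an endorsed recipe: the model file names are placeholders (no model ships with this issue), and the exact flags and supported opsets may vary between tf2onnx versions.

```shell
# Install the converter (assumption: a tf2onnx release that accepts --tflite input)
pip install tf2onnx

# Convert a quantized .tflite model to ONNX.
# "pose_detection.tflite" and "pose_detection.onnx" are placeholder paths.
python -m tf2onnx.convert \
  --tflite pose_detection.tflite \
  --output pose_detection.onnx \
  --opset 13
```

Note that, as stated above, this only changes the serialization format; it does not recover a trainable Float32 checkpoint.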

I'm a little concerned that this work might draw the ire of all the great developers at MediaPipe. :crying_cat_face:

google-ml-butler[bot] commented 3 years ago

Are you satisfied with the resolution of your issue?