Closed jizhu1023 closed 3 years ago
Hi @ivan-grishchenko , any comment?
Hi @jizhu1023, please refer to the code on GitHub for reference. @chuoling, do we have any such models we can share?
There are two ways to convert a Float16 or INT8 quantized model back to Float32.
However, even though both methods can produce a saved_model or an ONNX file, neither restores the model to a state where it can be retrained. It is only a model format conversion.
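The comment above does not name the two conversion routes, but the core step in recovering Float32 values from an INT8 model is per-tensor dequantization using the standard affine quantization formula, `real = scale * (q - zero_point)`. The sketch below is an assumption-level illustration of that formula in plain NumPy (the `scale` and `zero_point` values are made up for the example; in practice they are read from each tensor's quantization parameters in the TFLite flatbuffer):

```python
import numpy as np

def dequantize(q, scale, zero_point):
    """Map INT8 quantized values back to Float32: real = scale * (q - zero_point)."""
    return scale * (q.astype(np.float32) - zero_point)

# Hypothetical quantized weights with example quantization parameters.
q = np.array([-128, 0, 127], dtype=np.int8)
print(dequantize(q, scale=0.05, zero_point=0))  # ≈ [-6.4, 0.0, 6.35]
```

This recovers the weight values, but, as noted above, it does not recover the original training graph (optimizer state, loss, layer definitions), which is why the result is not directly retrainable.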
I'm a little concerned that this work might draw the ire of all the great developers at MediaPipe. :crying_cat_face:
Hi @jiuqiant @chuoling ,
MediaPipe is really efficient and powerful for developing mobile AI applications. Many thanks for your great work! We recently tested the pose detection and landmark models released a few days ago, and they performed very well for our application. The problem is that the released models are quantized, so it is not flexible to refine the network structure and retrain them for our use case. Could you send me the original (pre-quantization) TFLite models for pose detection and landmarks? Thanks in advance : ) My email: jizhu1023@gmail.com