We tried running MoveNet Lightning on Android (https://tfhub.dev/google/lite-model/movenet/singlepose/lightning/3) and the results are not what we expected. We measured an average inference latency of 70 ms on a Sony Xperia 10 Plus with the GPU, while the ML Kit lite model averaged around 35 ms running on the CPU. Is this the expected performance of MoveNet on Android?
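For context, a minimal sketch of the kind of interpreter setup these numbers assume (the asset name is a placeholder; `FileUtil` is from the TFLite support library):

```kotlin
import android.content.Context
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.GpuDelegate
import org.tensorflow.lite.support.common.FileUtil

fun createInterpreter(context: Context): Interpreter {
    // Memory-map the model from assets; "movenet_lightning.tflite" is a placeholder name.
    val model = FileUtil.loadMappedFile(context, "movenet_lightning.tflite")
    val options = Interpreter.Options().apply {
        addDelegate(GpuDelegate())  // ops the GPU delegate can't handle fall back to CPU
    }
    return Interpreter(model, options)
}
```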
@ameygrytfit, please check the newer version 4, which might provide faster inference.
+1 to @arghyaganguly. There was a bug in the TFLite version 3 model that prevented it from running on the GPU. Could you please try version 4 (https://tfhub.dev/google/lite-model/movenet/singlepose/lightning/tflite/float16/4)? FYI, based on our own benchmark results, the Lightning model runs at 25 ms latency on a Pixel 5 with the GPU.
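For reference, the kind of loop typically used to get a latency number like that (a sketch; the warm-up and run counts are arbitrary, and `input`/`output` are placeholders for the model's buffers):

```kotlin
import org.tensorflow.lite.Interpreter

fun averageLatencyMs(interpreter: Interpreter, input: Any, output: Any, runs: Int = 50): Double {
    repeat(5) { interpreter.run(input, output) }  // warm-up: delegate init, shader compilation
    val start = System.nanoTime()
    repeat(runs) { interpreter.run(input, output) }
    return (System.nanoTime() - start) / 1e6 / runs  // mean ms per inference
}
```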
@yuhuichen1015 @akhorlin @arghyaganguly I tried running the https://tfhub.dev/google/lite-model/movenet/singlepose/lightning/tflite/float16/4 model on Android, as you suggested, with the pose estimation example at https://github.com/tensorflow/examples/tree/master/lite/examples/pose_estimation, and got the following error: `java.lang.IllegalArgumentException: ByteBuffer is not a valid flatbuffer model`. Can you kindly let me know what I must be missing here?
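In case it helps to compare, this is roughly the standard way of handing a model file to the `Interpreter` (a sketch; that exception usually means the bytes in the buffer are not a valid `.tflite` flatbuffer at all, e.g. a corrupted download or a non-model file):

```kotlin
import android.content.res.AssetManager
import java.io.FileInputStream
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

fun loadModel(assets: AssetManager, path: String): MappedByteBuffer {
    // openFd fails up front if the asset was compressed at build time.
    val fd = assets.openFd(path)
    FileInputStream(fd.fileDescriptor).use { stream ->
        return stream.channel.map(
            FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength
        )
    }
}
```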
@ameygrytfit I cannot access your "pose_estimation" example code, so it is difficult for me to understand what is going on with the error. Could you take a look at the tutorial https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/movenet.ipynb and check whether you run the model in a similar way?
I'd first look at the input tensor type: from v3 to v4, we changed the input tensor type from float32 to uint8. Other than that, the two versions should share pretty much the same usage.
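To make the v3-to-v4 difference concrete, here is a minimal sketch of checking the input type and running the v4 model (assumptions: Lightning v4's documented 192x192x3 uint8 input and [1, 1, 17, 3] float32 output; `pixels` is a hypothetical pre-resized RGB byte array):

```kotlin
import org.tensorflow.lite.DataType
import org.tensorflow.lite.Interpreter
import java.nio.ByteBuffer
import java.nio.ByteOrder

fun runPose(interpreter: Interpreter, pixels: ByteArray): Array<Array<Array<FloatArray>>> {
    // v4 takes uint8 input; v3 took float32 — verify before feeding data.
    val inputType = interpreter.getInputTensor(0).dataType()
    check(inputType == DataType.UINT8) { "Expected uint8 input, got $inputType" }

    val input = ByteBuffer.allocateDirect(pixels.size).order(ByteOrder.nativeOrder())
    input.put(pixels)  // raw RGB bytes, pre-resized to the model's 192x192 input
    input.rewind()

    // Assumed output layout: [1, 1, 17, 3] float32 — 17 keypoints as (y, x, score).
    val output = Array(1) { Array(1) { Array(17) { FloatArray(3) } } }
    interpreter.run(input, output)
    return output
}
```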
@yuhuichen1015 Sorry for the typo in the links. The solution you provided worked fine on Android! Thank you!
Closing this based on the above comments. Thanks.