google-ai-edge / mediapipe

Cross-platform, customizable ML solutions for live and streaming media.
https://ai.google.dev/edge/mediapipe
Apache License 2.0

Batch inference for Iris Model #1347

Closed: cmfcm closed this issue 3 years ago

cmfcm commented 4 years ago

Greetings! Firstly, thanks for this wonderful project! I am currently using the Iris model, and I would like to ask whether it is possible to run batch inference (e.g., batch size = 2) with the Iris TFLite model. I found its inference speed is relatively slow compared with the FaceMesh model, so I want to improve the overall speed when running the two models together.
I also found an option options.compile_options.dynamic_batch_enabled in inference_calculator.cc (https://github.com/google/mediapipe/blob/f96eadd6df64a5f9a31918d6319e51847497641a/mediapipe/calculators/tensor/inference_calculator.cc), so does the Iris model support dynamic batching?
Thanks very much!
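
For anyone who wants to experiment before an official answer: one way to test whether a TFLite model tolerates a batch dimension is to resize its input tensor before allocation, using the standard tf.lite.Interpreter API. The sketch below is not MediaPipe code; the model filename and the leading-dimension assumption are placeholders for whatever Iris model file you are using, and the resize will simply raise an error if any op in the graph has a fixed batch size.

```python
# Minimal sketch: probe a TFLite model for batch-size-2 inference.
# Assumption: "iris_landmark.tflite" is your local model file whose
# first input dimension is the batch dimension.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="iris_landmark.tflite")
input_detail = interpreter.get_input_details()[0]

# Replace the leading (batch) dimension with 2. If the graph does not
# support batching, resize_tensor_input/allocate_tensors will raise.
batched_shape = [2] + list(input_detail["shape"][1:])
interpreter.resize_tensor_input(input_detail["index"], batched_shape)
interpreter.allocate_tensors()

# Run one batched inference on dummy data to confirm it works.
dummy = np.zeros(batched_shape, dtype=input_detail["dtype"])
interpreter.set_tensor(input_detail["index"], dummy)
interpreter.invoke()
output_detail = interpreter.get_output_details()[0]
print(interpreter.get_tensor(output_detail["index"]).shape)
```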

sgowroji commented 3 years ago

Hi @cmfcm, could you please test the same with the latest release and let us know if you still see slower inference compared to the previous version? We are continuously improving our solutions across all platforms to give the community better products.
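
When comparing releases, a simple wall-clock benchmark of interpreter.invoke() makes the comparison concrete. The helper below is a hypothetical sketch, not MediaPipe tooling; it assumes an already-allocated tf.lite.Interpreter (e.g., from the snippet above) and times repeated inferences after a warm-up pass.

```python
# Hypothetical timing helper for comparing TFLite inference speed
# across releases. Assumes `interpreter` is already allocated.
import time
import numpy as np

def benchmark(interpreter, runs=100):
    detail = interpreter.get_input_details()[0]
    dummy = np.zeros(detail["shape"], dtype=detail["dtype"])
    interpreter.set_tensor(detail["index"], dummy)
    interpreter.invoke()  # warm-up run, excluded from timing
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.set_tensor(detail["index"], dummy)
        interpreter.invoke()
    return (time.perf_counter() - start) / runs * 1000.0  # ms per inference

print(f"avg latency: {benchmark(interpreter):.2f} ms")
```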

google-ml-butler[bot] commented 3 years ago

Are you satisfied with the resolution of your issue?