Hi @cmfcm, could you please test the same with the latest release and let us know if you still see lower inference speed compared to previous versions? We are continuously improving our solutions on all platforms to give the community better products.
Greetings! Firstly, thanks for this wonderful project! I am currently using the Iris models, and I would like to ask whether it's possible to do batch inference (e.g., batch size = 2) with the Iris TFLite model. Since I found its inference speed is relatively slow compared with the FaceMesh model, I want to improve the inference speed when using them together.
I found there is an option `options.compile_options.dynamic_batch_enabled` in `inference_calculator.cc` (https://github.com/google/mediapipe/blob/f96eadd6df64a5f9a31918d6319e51847497641a/mediapipe/calculators/tensor/inference_calculator.cc), so does the Iris model support dynamic batching? Thanks very much!
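For reference, one quick way to check empirically whether a TFLite model tolerates a batch dimension greater than 1 is to resize the input tensor with the TensorFlow Lite Python interpreter, outside of MediaPipe entirely. The sketch below is not an official MediaPipe or Iris API; the model filename and batch size are assumptions, and the actual input shape should be read from `get_input_details()`:

```python
# Minimal sketch: probe whether a TFLite model accepts batch size > 1
# by resizing its input tensor. Assumes "iris_landmark.tflite" is the
# model file on disk (hypothetical path) and a desired batch of 2.
import numpy as np
import tensorflow as tf

MODEL_PATH = "iris_landmark.tflite"  # assumed path to the Iris model
BATCH = 2                            # desired batch size

interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
input_details = interpreter.get_input_details()

# Replace the leading (batch) dimension and try to re-allocate tensors.
shape = list(input_details[0]["shape"])
shape[0] = BATCH
try:
    interpreter.resize_tensor_input(input_details[0]["index"], shape)
    interpreter.allocate_tensors()
    dummy = np.zeros(shape, dtype=input_details[0]["dtype"])
    interpreter.set_tensor(input_details[0]["index"], dummy)
    interpreter.invoke()
    print("Batched inference succeeded with input shape", shape)
except (ValueError, RuntimeError) as e:
    # Models built with fixed batch-1 ops will typically fail here.
    print("Model does not appear to support batch size", BATCH, ":", e)
```

If the resize fails, the model graph itself is fixed to batch 1, and no calculator option (including `dynamic_batch_enabled`) would change that; if it succeeds, batching may still depend on whether the delegate in use supports it.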