Hi, I'd like to be able to run some TFLite models on smartphones with Qualcomm chips. I'm trying to get close to the ~1 ms inference speed of MediaPipe pose estimation, and I'm getting something close to it by mimicking the approach here:
https://github.com/quic/ai-hub-apps/tree/main/apps/android/SemanticSegmentation
It uses Java and libraries imported from Maven. That doesn't work for my case, because I'm going to run the processing on videos, decoding and processing them directly with OpenCV in my C++ code. I want the workflow to be as efficient as possible.
What should I start from in order to add support for the QNN delegate?
Thanks