mbenencase opened this issue 3 months ago
Qualcomm has a new AI Engine Direct SDK that can run models on the DSP/HTP via the QNN TFLite Delegate. Please follow the steps below to set up and run TFLite models on the QCS6490:
1. Download the AI Engine Direct SDK from QPM: https://qpm.qualcomm.com/#/main/tools/details/qualcomm_ai_engine_direct
2. Push `<QNN_SDK>/libs/<target device>` onto the device. Let's call this `libs_path`.
3. Push `<QNN_SDK>/libs/hexagon-v<VERSION>/` onto the device. Let's call this `skel_libs_path`.
4. `export LD_LIBRARY_PATH=<libs_path from point 2>`
5. `export ADSP_LIBRARY_PATH=<skel_libs_path from point 3>`
6. Pass `backend_type` in the options and `<libs_path>/libQnnTFLiteDelegate.so` as the library path to `load_delegate`:

   ```python
   tf.lite.experimental.load_delegate(<libs_path> + "libQnnTFLiteDelegate.so", options={"backend_type": "htp"})
   ```
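Putting the steps above together, a minimal end-to-end sketch might look like the following. The directory paths and the model filename are placeholders I chose for illustration, not values from the SDK; substitute the locations you actually pushed to in steps 2 and 3. The delegate load is guarded by an existence check so the script only attempts it on a device where the SDK libraries are present.

```python
import os

# Assumed push locations from steps 2 and 3 -- substitute your actual paths.
libs_path = "/data/qnn/aarch64-ubuntu-gcc9.4"
skel_libs_path = "/data/qnn/hexagon-v68"

# Steps 4-5: expose the QNN libraries to the dynamic loader and the aDSP.
os.environ["LD_LIBRARY_PATH"] = libs_path
os.environ["ADSP_LIBRARY_PATH"] = skel_libs_path

# Step 6: load the delegate, selecting the HTP backend via options.
delegate_path = os.path.join(libs_path, "libQnnTFLiteDelegate.so")
if os.path.exists(delegate_path):  # only on a device with the SDK installed
    import tensorflow as tf
    qnn_delegate = tf.lite.experimental.load_delegate(
        delegate_path, options={"backend_type": "htp"})
    interpreter = tf.lite.Interpreter(
        model_path="model.tflite",  # placeholder model filename
        experimental_delegates=[qnn_delegate])
    interpreter.allocate_tensors()
```

Setting the environment variables from inside Python only works if they are set before the delegate library is loaded; exporting them in the shell before launching the script, as in steps 4 and 5, is the safer route.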
Attaching a sample Python script and QNN 2.20 libs to try on RB3 Gen2: aarch64-ubuntu-gcc9.4.zip, hexagon-v68.zip, Model-and-scripts.zip
We will update our docs with these instructions soon to make it easy to deploy on IoT platforms.
Hi all, I have an AI-BOX with Ubuntu 20.04 from a Qualcomm OEM/ODM with the QCS6490 chipset.
I used the AI Hub website to quantize a YoloV7 model to a .tflite model, and I'd like to run inference with it on the QCS6490 device mentioned above.
This is the code that I'm using:
And my question is: where can I find, or where do I download, the `libhexagon_delegate.so` library?
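For context, `libhexagon_delegate.so` is the older TensorFlow Lite Hexagon delegate library, which is loaded through the same `load_delegate` API as the QNN delegate described above. A hedged sketch of that loading pattern is below; the model filename is a placeholder, and the helper falls back to CPU execution when the delegate library is not present on the system.

```python
import os

# Placeholder names chosen for illustration -- adjust for your setup.
MODEL_PATH = "yolov7_quantized.tflite"       # assumed name of the AI Hub export
HEXAGON_DELEGATE = "libhexagon_delegate.so"  # the library asked about

def load_interpreter(model_path, delegate_path):
    """Return a TFLite interpreter, attaching the delegate only if its .so exists."""
    import tensorflow as tf  # lazy import: the helper can be defined without TF
    delegates = []
    if os.path.exists(delegate_path):
        delegates.append(tf.lite.experimental.load_delegate(delegate_path))
    return tf.lite.Interpreter(model_path=model_path,
                               experimental_delegates=delegates)
```

Note that on the QCS6490 the reply above points to `libQnnTFLiteDelegate.so` from the AI Engine Direct SDK rather than the Hexagon delegate, so the same helper would be called with that path and the `{"backend_type": "htp"}` options instead.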