Closed: yuguolong closed this issue 5 days ago

This is a fantastic project, thank you for your hard work! I'm working with a Qualcomm Snapdragon QCS6490 chip running Android and have a few questions regarding its compatibility:
Whisper model GPU support: Is the Whisper model capable of running on the QCS6490 GPU?
Yes, you just need to select GPU (the GPUv2 delegate) when setting up TF Lite: https://github.com/quic/ai-hub-apps/blob/ab1b5a7673803f3a6c99cc7a54f5c0c03af41624/apps/android/tflite_helpers/TFLiteHelpers.java#L49. ONNX does not have a supported GPU path on the 6490.
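For anyone looking for a concrete starting point, here is a minimal sketch of that GPU selection using the stock TF Lite Java API. It is not the code from the linked TFLiteHelpers.java (which handles this selection for you); the class name, file handling, thread count, and CPU fallback below are assumptions made purely for illustration.

```java
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.CompatibilityList;
import org.tensorflow.lite.gpu.GpuDelegate;

import java.io.File;

public class GpuInterpreterSketch {
    // Sketch: build a TF Lite interpreter that prefers the GPU (GPUv2) delegate
    // and falls back to CPU when the GPU delegate is not usable on this device.
    static Interpreter createInterpreter(File tfliteModel) {
        Interpreter.Options options = new Interpreter.Options();

        CompatibilityList compatList = new CompatibilityList();
        if (compatList.isDelegateSupportedOnThisDevice()) {
            // GpuDelegate is the "GPUv2" delegate mentioned above.
            GpuDelegate gpuDelegate =
                    new GpuDelegate(compatList.getBestOptionsForThisDevice());
            options.addDelegate(gpuDelegate);
        } else {
            // CPU fallback; XNNPACK speeds up float inference on the CPU.
            options.setUseXNNPACK(true);
            options.setNumThreads(4); // assumed thread count, tune per device
        }

        return new Interpreter(tfliteModel, options);
    }
}
```

The CompatibilityList check is a standard way to avoid creating the GPU delegate on devices where it is not usable, so the app degrades to CPU instead of failing at interpreter creation.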
LLM GPU support: Is it possible to run large language models on the QCS6490 GPU?
You'd likely run out of memory. We don't support LLMs on devices earlier than the 8 Gen 3 & 8 Elite.
Future Android package: Are there any plans to provide official Android packages for easier integration and deployment?
We offer QNN and the TFLite QNN delegate via Maven (see the dependency sketch after this answer): https://github.com/quic/ai-hub-apps/blob/main/apps/android/ImageClassification/build.gradle#L58
We do not plan on distributing models themselves via an Android package, though.
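For question 3, the dependency block in an app's build.gradle looks roughly like the sketch below. The artifact coordinates are the ones I understand ai-hub-apps to use, and `<version>` is a placeholder; copy the exact coordinates and pinned versions from the linked build.gradle rather than from this sketch.

```groovy
dependencies {
    // TensorFlow Lite runtime plus the standard GPU (GPUv2) delegate
    implementation "org.tensorflow:tensorflow-lite:<version>"
    implementation "org.tensorflow:tensorflow-lite-gpu:<version>"

    // Qualcomm QNN runtime and its TF Lite delegate, pulled from Maven;
    // <version> is a placeholder, use the version pinned in the linked build.gradle
    implementation "com.qualcomm.qti:qnn-runtime:<version>"
    implementation "com.qualcomm.qti:qnn-tflite-delegate:<version>"
}
```

The models themselves are downloaded or exported separately (e.g. from AI Hub); only the runtimes and delegates come from Maven.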
Thank you for your time and assistance.

@kory thank you for your reply.