Closed: aurelienpre closed this issue 11 months ago
Same bug on Snapdragon 865.
@aurelienpre any solution? I am facing the same issue.
Yeah, I worked around the issue by using the bilinear resize mode rather than nearest; works on my end.
@aurelienpre thanks! It works for me as well
@HectorSVC Should the QNN EP be checking for the mode? Possibly it doesn't support nearest.
I tried that again with ONNX Runtime 1.16.0 built with QNN 2.13.4, and I didn't run into the same error. Closing the issue.
I'm running into this now with both onnxruntime 1.18 and ort-nightly, when quantizing and running a YOLOv8 model converted to ONNX.
Describe the issue
Hi, I am getting the following exception when creating an `Ort::Session` with the QNN execution provider (DSP backend) and an ONNX model that makes use of Resize:

This runs in the native (C++) part of an Android app, on a Samsung Tab S8 (Android 13) with a Snapdragon 8 Gen 1 (Hexagon architecture v69). QNN is supported for the DSP backend on this device according to the qnn-platform-validator tool from the QNN SDK. The error was also reproduced on various other Android devices with different Snapdragon chips and Hexagon architectures. Other models that do not use the Resize operator work fine.

ONNX Runtime 1.15.0 was cross-built on Linux for Android using QNN SDK v2.10.0.
ONNX Model to reproduce: dummy_net.zip
To reproduce
Python: create an ONNX model with the Resize operator, for instance.
See full script to generate the model: dummy_net_onnx.py.txt
C++
ONNX Model to reproduce: dummy_net.zip
Urgency
Medium urgency; currently blocking QNN DSP backend execution for ONNX models that use the Resize operator.
Platform
Android
OS Version
13
ONNX Runtime Installation
Built from Source
Compiler Version (if 'Built from Source')
clang 9
Package Name (if 'Released Package')
None
ONNX Runtime Version or Commit ID
1.15.0
ONNX Runtime API
C++/C
Architecture
ARM64
Execution Provider
Other / Unknown
Execution Provider Library Version
QNN SDK v2.10.0