microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

[Mobile] #22439

Open sushuChina opened 3 weeks ago

sushuChina commented 3 weeks ago

Describe the issue

I encountered the following error when loading a model with dynamic input shapes using the QNN execution provider as the acceleration backend.

(screenshot of the error message attached)

Also, the inference speed is the same as with the CPU execution provider, which suggests the model is falling back to CPU.

However, after fixing the input shapes, the model loads normally. Does the current QNN execution provider not support dynamic input shapes?

To reproduce

After I fixed the input shapes to static values, the model loaded normally.

Urgency

No response

Platform

Android

OS Version

11/14

ONNX Runtime Installation

Built from Source

Compiler Version (if 'Built from Source')

Built from Source

Package Name (if 'Released Package')

None

ONNX Runtime Version or Commit ID

QNN 2.26.0.240828

ONNX Runtime API

C++/C

Architecture

ARM64

Execution Provider

SNPE

Execution Provider Library Version

No response

HectorSVC commented 3 weeks ago

That's right, the QNN NPU backend doesn't support dynamic shapes.

sushuChina commented 3 weeks ago

The latest QNN SDK documentation seems to indicate that some operators support dynamic inputs. Are there plans to support dynamic input shapes in ONNX Runtime in the future?