microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

[Mobile] #22439

Open sushuChina opened 1 month ago

sushuChina commented 1 month ago

Describe the issue

I encountered the following error when loading a model with dynamic input shapes using the QNN execution provider as the acceleration backend.

(screenshot of the error log attached)

Also, the inference speed is the same as with the CPU provider.

However, after fixing the input shapes, the model loads normally. Does the current QNN EP not support dynamic input shapes?

To reproduce

Load a model with dynamic input shapes using the QNN execution provider; it fails with the error above. After I fixed the input shapes, the model loaded normally.

Urgency

No response

Platform

Android

OS Version

11/14

ONNX Runtime Installation

Built from Source

Compiler Version (if 'Built from Source')

Built from Source

Package Name (if 'Released Package')

None

ONNX Runtime Version or Commit ID

QNN 2.26.0.240828

ONNX Runtime API

C++/C

Architecture

ARM64

Execution Provider

SNPE

Execution Provider Library Version

No response

HectorSVC commented 1 month ago

That's right, the QNN NPU doesn't support dynamic shapes.

sushuChina commented 1 month ago

The latest QNN backend documentation seems to indicate that some operators do support dynamic inputs. Are there plans for ONNX Runtime to support dynamic inputs with the QNN EP in the future?

github-actions[bot] commented 6 days ago

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.