microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

[Performance] Failed to run Whisper inference after optimization with Dml EP #21156

Open · XciciciX opened this issue 3 months ago

XciciciX commented 3 months ago

Describe the issue

I exported my medium Whisper model correctly, and inference produced the correct output. I then optimized the model with this command line:

`python -m onnxruntime.transformers.optimizer --input ./whisper-medium-onnx/decoder_with_past_model.onnx --output ./whisper-medium-onnx-test/decoder_with_past_model.onnx --float16 --model_type bart --num_heads 16 --hidden_size 1024 --use_multi_head_attention`

I exported and optimized all the related models the same way. When I ran inference again, it worked with the CPU EP but failed with the DML EP, with the following error:

(error screenshot attached in the original issue; not reproduced here)

I have debugged a little and found that the problem is the --use_multi_head_attention flag in one model: if I do not fuse MHA, it can run; if I add the MHA fusion, the error occurs.
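For reference, here is a minimal sketch of the same optimization through the Python API rather than the CLI. The paths are taken from the command above; the `FusionOptions` field for MHA fusion is assumed to mirror the `--use_multi_head_attention` CLI flag and may vary across onnxruntime versions.

```python
# Hedged sketch: rough Python-API equivalent of the CLI command above.
# Paths come from the issue; FusionOptions.use_multi_head_attention is assumed
# to correspond to the --use_multi_head_attention CLI flag.
from onnxruntime.transformers.fusion_options import FusionOptions
from onnxruntime.transformers.optimizer import optimize_model

fusion_options = FusionOptions("bart")
fusion_options.use_multi_head_attention = True  # the flag that appears to trigger the DML EP failure

opt_model = optimize_model(
    "./whisper-medium-onnx/decoder_with_past_model.onnx",
    model_type="bart",
    num_heads=16,
    hidden_size=1024,
    optimization_options=fusion_options,
)
opt_model.convert_float_to_float16()  # equivalent of --float16
opt_model.save_model_to_file("./whisper-medium-onnx-test/decoder_with_past_model.onnx")
```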

To reproduce

Run the optimization command line above, then run Whisper inference with the DML EP (a hedged repro sketch follows).
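A minimal repro sketch, assuming the optimized decoder path from the command above. The actual Whisper decoder inputs depend on the export and are omitted; the reported error may surface at session creation or at run time, and only session creation is exercised here.

```python
# Hedged sketch: create sessions for the optimized decoder with the CPU EP and
# the DirectML EP to compare behavior. The model path is from the issue; real
# decoder inputs are omitted because they depend on the export.
import onnxruntime as ort

model_path = "./whisper-medium-onnx-test/decoder_with_past_model.onnx"

for providers in (["CPUExecutionProvider"], ["DmlExecutionProvider"]):
    try:
        sess = ort.InferenceSession(model_path, providers=providers)
        print(providers[0], "- session created, inputs:", [i.name for i in sess.get_inputs()])
    except Exception as exc:  # per the report, only the DML EP fails
        print(providers[0], "- failed:", exc)
```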

Urgency

Yes

Platform

Windows

OS Version

11

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

1.17.0

ONNX Runtime API

Python

Architecture

X64

Execution Provider

DirectML

Execution Provider Library Version

No response

Model File

No response

Is this a quantized model?

Yes

WA225 commented 3 months ago

I am having a similar issue to the one described in https://github.com/microsoft/Olive/issues/1221. Any idea how to fix it?

github-actions[bot] commented 2 months ago

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.