cassiebreviu / StableDiffusion

Inference Stable Diffusion with C# and ONNX Runtime
MIT License

Microsoft Olive Optimized Model throws exception MultiHeadAttention on Inference using DirectML FP16 branch #24

Closed: AshD closed this issue 10 months ago

AshD commented 11 months ago

I am using the DirectML FP16 branch on an RTX 3070 with a Microsoft Olive-optimized base SD 1.5 model that I downloaded from https://huggingface.co/axodoxian/stable_diffusion_onnx.

I get an error at `var output = unetSession.Run(input);` in `StableDiffusion.ML.OnnxRuntime.dll!StableDiffusion.ML.OnnxRuntime.UNet.Inference(string prompt, StableDiffusion.ML.OnnxRuntime.StableDiffusionConfig config)`, line 110:

```
Microsoft.ML.OnnxRuntime.OnnxRuntimeException: '[ErrorCode:RuntimeException] Non-zero status code returned while running MultiHeadAttention node. Name:'MultiHeadAttention_0' Status Message: D:\a_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(2448)\onnxruntime.DLL!00007FFCD3CC5645: (caller: 00007FFCD3CC4CDA) Exception(3) tid(7d60) 80070057 The parameter is incorrect.
```

All inputs passed to `Run` are FP16 tensors. Any suggestions?
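For context, the call is set up roughly like this. This is a minimal sketch, not the exact repo code: the model path is a placeholder, and the input names and shapes are the usual SD 1.5 UNet ones (batch of 2 for classifier-free guidance), which should be verified against the actual graph, e.g. in Netron:

```csharp
using System.Collections.Generic;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

// DirectML EP on GPU 0; the model path below is a placeholder.
var options = new SessionOptions();
options.AppendExecutionProvider_DML(0);
using var unetSession = new InferenceSession(@"stable_diffusion_onnx\unet\model.onnx", options);

// All-FP16 inputs. Shapes are assumptions based on standard SD 1.5 UNet I/O.
var sample = new DenseTensor<Float16>(new[] { 2, 4, 64, 64 });
var timestep = new DenseTensor<Float16>(new[] { 1 });
var encoderHiddenStates = new DenseTensor<Float16>(new[] { 2, 77, 768 });

var input = new List<NamedOnnxValue>
{
    NamedOnnxValue.CreateFromTensor("sample", sample),
    NamedOnnxValue.CreateFromTensor("timestep", timestep),
    NamedOnnxValue.CreateFromTensor("encoder_hidden_states", encoderHiddenStates),
};

// This is the call that throws the MultiHeadAttention RuntimeException.
using var output = unetSession.Run(input);
```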

LeonNerd commented 10 months ago

Hi, I have the same error.

cassiebreviu commented 10 months ago

Check out this sample code for an Olive-optimized FP16 DirectML implementation: https://github.com/onnxruntime/StableDiffusion-v1.5-Onnx-Demo/tree/main
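One thing worth double-checking against that sample: the DirectML execution provider does not support memory pattern optimization or parallel execution, so the session options need those disabled. A minimal sketch of the session setup (the model path is a placeholder):

```csharp
using Microsoft.ML.OnnxRuntime;

// Settings the DirectML EP requires per the ONNX Runtime docs.
var options = new SessionOptions
{
    EnableMemoryPattern = false,
    ExecutionMode = ExecutionMode.ORT_SEQUENTIAL,
};
options.AppendExecutionProvider_DML(0); // deviceId 0

// Placeholder path to the Olive-optimized FP16 UNet.
using var unetSession = new InferenceSession(@"models\unet\model.onnx", options);
```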