microsoft / DirectML

DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers, including all DirectX 12-capable GPUs from vendors such as AMD, Intel, NVIDIA, and Qualcomm.

Replace model in DirectMLNpuInference sample: The specified device interface or feature level is not supported on this system #611

Closed · WTian-Yu closed this issue 1 month ago

WTian-Yu commented 1 month ago

I could successfully run the old version of the DirectMLNpuInference project (da3abe62d6e22084d24a291389011991af3444b8).

But once I replaced the sample model mobilenetv2-7-fp16.onnx with my own, creating the ORT session failed with this error log: [E:onnxruntime:, inference_session.cc:1935 onnxruntime::InferenceSession::Initialize::<lambda_e09ac8a7667b49d27dc77a8205e26aa5>::operator ()] Exception during initialization: C:\__w\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\DmlGraphFusionHelper.cpp(451)\onnxruntime.dll!00007FFD7B82B069: (caller: 00007FFD7B7AEA84) Exception(1) tid(5610) 887A0004 The specified device interface or feature level is not supported on this system.

Here is the model and my device information:

Processor: Intel(R) Core(TM) Ultra 7 155U, 2.10 GHz
Installed RAM: 16.0 GB (15.6 GB usable)
System type: 64-bit operating system, x64-based processor
Edition: Windows 11 Pro
Version: 23H2
Installed on: 12/5/2023
OS build: 22631.3880
Experience: Windows Feature Experience Pack 1000.22700.1020.0

(Screenshots attached: model properties and device information.)

I've also checked the new version of the DirectMLNpuInference project (72ad224f0b8dc2d80a954c4fe1eb141a3fd44b11). I updated DirectML to 1.15.0 but did not change the code from 46f3e141a0937ca4d17c1af9ed19b0c210c566e1, since I didn't update the Windows OS (DXCORE_ADAPTER_ATTRIBUTE_D3D12_GENERIC_ML can't be found), but I still get the same error.

I also checked some simple models, like a one-layer FC and a one-layer 1D conv; they all failed, but a model that directly outputs its input does not fail. Is there anything I can do to fix this?

Nagico2 commented 1 month ago

Have you tried re-installing the latest Windows SDK?

My project also could not find the definition of DXCORE_ADAPTER_ATTRIBUTE_D3D12_GENERIC_ML before, but after I installed the latest Windows SDK, it seems to compile normally now.


WTian-Yu commented 1 month ago

@Nagico2 Hi, thanks for your response, but I can compile and run the original project normally (on both Windows 23H2 and 24H2).

My original question was that once I replaced the model with a simple ONNX model (e.g. just one global average pooling layer, one FC layer, or one 1D conv layer), creating the ORT session failed at runtime.

After a few days of investigation, it seems the NPU path doesn't support the operators I tried yet, and I also need to convert my model to FP16 before the session can be created successfully. So at the moment, converting my model to FP16 and using only 2D convolutions compiles and runs normally.

Thanks for your help anyway.