chaiNNer-org / chaiNNer

A node-based image processing GUI aimed at making chaining image processing tasks easy and customizable. Born as an AI upscaling application, chaiNNer has grown into an extremely flexible and powerful programmatic image processing application.
https://chaiNNer.app
GNU General Public License v3.0

DirectML Execution Provider for ONNX Runtime #2923

Open Artoriuz opened 4 months ago

Artoriuz commented 4 months ago

Motivation After installing ChaiNNer and ONNX Runtime, I don't see an option to run it on AMD/Intel GPUs, which I assume is because the DirectML Execution Provider isn't available.

Description It would be nice to have the option of using AMD/Intel GPUs with ONNX Runtime on ChaiNNer.

Alternatives Currently, the best alternative would be using NCNN, but support isn't 1:1 and some operations are unsupported. You can also run ORT on the CPU, but that's much slower depending on the model.
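[Editor's note, not part of the original report] As context for the requested feature: ONNX Runtime selects an execution provider from an ordered preference list passed to the session. A minimal sketch of how an application could prefer the DirectML EP when available — `pick_providers` is a hypothetical helper, but `DmlExecutionProvider` and `CPUExecutionProvider` are the real ONNX Runtime provider identifiers:

```python
def pick_providers(available):
    """Order ONNX Runtime execution providers by preference.

    "DmlExecutionProvider" is the DirectML EP (AMD/Intel/NVIDIA GPUs
    on Windows via DX12); the CPU EP is kept as a universal fallback.
    """
    preferred = ["DmlExecutionProvider", "CPUExecutionProvider"]
    chosen = [p for p in preferred if p in available]
    if "CPUExecutionProvider" not in chosen:
        # The CPU EP ships with every ORT build, so it is always
        # safe to fall back to it.
        chosen.append("CPUExecutionProvider")
    return chosen


# With onnxruntime installed, the result would be passed to a session:
#   import onnxruntime as ort
#   session = ort.InferenceSession(
#       "model.onnx",
#       providers=pick_providers(ort.get_available_providers()),
#   )
print(pick_providers(["DmlExecutionProvider", "CPUExecutionProvider"]))
```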

joeyballentine commented 4 months ago

I'll look into how easy it would be to add this

Disonantemus commented 2 months ago

> Motivation After installing ChaiNNer and ONNX Runtime, I don't see an option to run it on AMD/Intel GPUs, which I assume is because the DirectML Execution Provider isn't available.
>
> Description It would be nice to have the option of using AMD/Intel GPUs with ONNX Runtime on ChaiNNer.
>
> Alternatives Currently, the best alternative would be using NCNN, but support isn't 1:1 and some operations are unsupported. You can also run ORT on the CPU, but that's much slower depending on the model.

Would this help with older AMD GPUs that don't have ROCm support (mine is an RX 580)? Right now ROCm only supports a few newer GPUs.

Artoriuz commented 2 months ago

> Would this help with older AMD GPUs that don't have ROCm support (mine is an RX 580)? Right now ROCm only supports a few newer GPUs.

DirectML works without ROCm; it's Microsoft's solution for making ML on Windows better supported. It should work on any GPU with DX12 support.
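[Editor's note] For anyone wanting to try this manually: DirectML support ships as a separate wheel, `onnxruntime-directml` (a real PyPI package that replaces the CPU-only `onnxruntime` build). Whether chaiNNer's own integrated Python environment would pick this up is untested here, but a quick way to verify the provider is registered in a given environment:

```shell
# Replace the CPU-only wheel with the DirectML build (Windows, DX12 GPU)
pip uninstall -y onnxruntime
pip install onnxruntime-directml

# "DmlExecutionProvider" should appear in the reported providers
python -c "import onnxruntime; print(onnxruntime.get_available_providers())"
```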

arinanto commented 3 weeks ago

Is this effort still ongoing? Is there any way to add it manually?

nicastel commented 3 weeks ago

There is a PyTorch DirectML provider as well (torch-directml) that could provide PyTorch hardware acceleration on any GPU on Windows.
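[Editor's note] A hedged sketch of what using torch-directml looks like: `torch_directml.device()` is the package's documented entry point, and the try/except keeps the snippet runnable even when the package (Windows-only) isn't installed.

```python
def pick_torch_device():
    """Prefer a DirectML device when torch-directml is installed
    (Windows, any DX12-capable GPU); otherwise fall back to CPU.
    """
    try:
        import torch_directml  # pip install torch-directml
        return torch_directml.device()  # DirectML-backed torch device
    except ImportError:
        # The plain string "cpu" is accepted anywhere torch expects
        # a device specifier.
        return "cpu"


# A tensor would then be moved with `tensor.to(pick_torch_device())`.
print(pick_torch_device())
```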