CuriosAI / sai

SAI: a fork of Leela Zero with variable komi.
GNU General Public License v3.0

Hope to support ONNX Runtime (Training version & Inferencing version) and DirectML #149

Open Looong01 opened 1 year ago

Looong01 commented 1 year ago

Hope to support ONNX Runtime (Training version & Inferencing version) and DirectML.

Used as a back end, they could speed up both the training and the inference process.

ONNX Runtime supports every major OS and kind of GPU, including CUDA (NVIDIA), ROCm (AMD), oneDNN (Intel), CoreML/Metal (Apple Silicon), and other execution providers. Its performance should also be much better than OpenCL's. https://github.com/microsoft/onnxruntime
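To make the request concrete, here is a minimal sketch of how a back end might pick an ONNX Runtime execution provider. This is illustrative only, not SAI code (SAI is C++); the model filename is hypothetical, and only the provider names are real ONNX Runtime identifiers.

```python
# Hypothetical sketch: choose ONNX Runtime execution providers in
# preference order, GPU back ends first, CPU as the fallback.
def pick_providers(available):
    """Return the intersection of a GPU-first preference list
    with the providers actually available on this machine."""
    preference = [
        "TensorrtExecutionProvider",  # NVIDIA TensorRT
        "CUDAExecutionProvider",      # NVIDIA CUDA
        "ROCMExecutionProvider",      # AMD ROCm
        "DmlExecutionProvider",       # DirectML (Windows)
        "CoreMLExecutionProvider",    # Apple
        "CPUExecutionProvider",       # always-available fallback
    ]
    return [p for p in preference if p in available]


# With the onnxruntime package installed, creating a session would
# look roughly like this ("sai_net.onnx" is a hypothetical file):
#   import onnxruntime as ort
#   sess = ort.InferenceSession(
#       "sai_net.onnx",
#       providers=pick_providers(ort.get_available_providers()))
```

The point of the preference list is that one binary can run on any of these platforms and silently degrade to CPU when no GPU provider is present.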

DirectML supports any kind of GPU on Windows, and its code-migration cost is much lower than ONNX Runtime's. Its performance should also be better than OpenCL's. https://github.com/microsoft/DirectML

Looong01 commented 1 year ago

Hope to add a DirectML back end. It's easy to use, and it supports any GPU that supports DirectX 12 on Windows. It's far better than OpenCL, and it doesn't require users to install any extra software or computing platforms (CUDA, ROCm, TensorRT, cuDNN, etc.): only a Windows 10/11 OS and the GPU drivers are needed. The only drawback is that its performance is slightly lower than CUDA + cuDNN + TensorRT.
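The OS and driver requirements above can be sketched as a small eligibility check. This is a hypothetical helper, not SAI or DirectML code; `can_use_directml` and its arguments are made up for illustration, with `"DmlExecutionProvider"` being the real name DirectML uses when exposed through ONNX Runtime.

```python
import platform


def can_use_directml(available_providers, os_name=None):
    """Hypothetical check: DirectML requires Windows 10/11 with a
    DirectX 12 GPU driver; no CUDA/ROCm/cuDNN installation is needed.
    `available_providers` would come from the runtime's provider list."""
    os_name = os_name or platform.system()
    return os_name == "Windows" and "DmlExecutionProvider" in available_providers
```

This captures why the migration cost is low for end users: the only gate is the OS plus a stock DirectX 12 driver, not a vendor compute stack.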