For Windows, ONNX Runtime with the CUDA Execution Provider is faster than the DirectML EP, so I plan to release with the CUDA EP.
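For reference, here is a minimal sketch of how one might select the CUDA EP in ONNX Runtime's Python API, with CPU as a fallback (the model path is just a placeholder, not this project's actual model):

```python
import onnxruntime as ort

# ONNX Runtime tries the providers in order, so CUDA is preferred and
# CPU is the fallback. On Windows, "DmlExecutionProvider" could be
# listed instead to use DirectML.
session = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())  # shows which providers were actually enabled
```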
For Linux, there is no GPU-supported inference yet; it is also planned.
For Mac, there is no GPU support yet. M1 Macs have CoreML support, and I plan to support that, but I don't have an M1 Mac. Intel Macs have no useful inference devices.