This adds support for onnxruntime-rocm to the onnx backend. Note that not all AMD GPUs are supported; see the supported GPUs list. Notably, tests with a 7900XTX failed, but it is likely those GPUs will be supported soon (https://github.com/ROCmSoftwarePlatform/MIOpen/issues/2097#issuecomment-1542364392).
During testing we also found that the onnxruntime Run() call requires locking (like DirectML), even though this is undocumented.
Finally, there is support for building lc0 with a locally built onnxruntime, as its header files are in different locations.