Thanks! I am not entirely sure how to best handle this issue. On Linux this explicit setting is not necessary (yet?). Your fixed version emits a

```
UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names.Available providers: 'CPUExecutionProvider'
```
on my machine, which does not have CUDA installed. It runs fine aside from this warning, but in general it would be better to make the provider user-configurable, since automatically choosing the "best" one is apparently deliberately not supported by onnxruntime. CUDAExecutionProvider and CPUExecutionProvider are not always the best choices: for example, on Intel CPUs the OneDNN provider could be better, and on CUDA-capable systems the TensorRT provider could be better.
I'll merge this now but in the long term we should introduce this choice as an additional CLI arg and/or envvar or find another solution.
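A minimal sketch of the envvar approach could look like the following. The variable name `ONNX_PROVIDER` and the helper function are hypothetical, not part of this repo; `onnxruntime.get_available_providers()` and the `providers` argument are real onnxruntime APIs.

```python
import os
import onnxruntime

def create_session(model_path: str) -> onnxruntime.InferenceSession:
    # ONNX_PROVIDER is a hypothetical envvar name for this sketch.
    requested = os.environ.get("ONNX_PROVIDER")
    available = onnxruntime.get_available_providers()
    # Use the requested provider only if this build actually offers it;
    # otherwise fall back to everything that is available.
    providers = [requested] if requested in available else available
    return onnxruntime.InferenceSession(model_path, providers=providers)
```

A CLI arg could feed the same logic by overriding the envvar lookup.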
Fix for #21
I simply added the providers `CUDAExecutionProvider` and `CPUExecutionProvider` as arguments for the `onnxruntime.InferenceSession` class instantiation.
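For reference, the change amounts to something like this (the model path is illustrative, not the repo's actual path):

```python
import onnxruntime

# Prefer CUDA when available; onnxruntime falls back to the next
# provider in the list (here, CPU) if one cannot be loaded.
session = onnxruntime.InferenceSession(
    "model.onnx",  # illustrative path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
```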