Closed ekaterinatretyak closed 2 years ago
Hi,
After obtaining ONNX models (not quantized), I would like to run inference on a GPU by configuring the ONNX Runtime session:
model_sessions = get_onnx_runtime_sessions(model_paths, default=False, provider=['CUDAExecutionProvider'])
However, I get the following error:
Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
I checked that all dependencies are installed. How can I fix this? Thanks in advance for your answer.
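One way to check the dependency situation directly is to ask the installed runtime which execution providers it actually supports. This is a diagnostic sketch: `CUDAExecutionProvider` only appears in the list when the GPU build of ONNX Runtime is installed, so an empty or CPU-only list points at a packaging problem rather than a code problem.

```python
# Diagnostic sketch: list the execution providers the installed
# ONNX Runtime build supports. The CPU-only 'onnxruntime' package
# will not report 'CUDAExecutionProvider'; only 'onnxruntime-gpu' does.
try:
    import onnxruntime as ort
    providers = ort.get_available_providers()
except ImportError:
    providers = []  # onnxruntime is not installed at all

print(providers)
if "CUDAExecutionProvider" not in providers:
    print("CPU-only build (or no build) installed; CUDA inference will fail.")
```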
Make sure to uninstall onnxruntime and install the right version of onnxruntime-gpu for the CUDA and cuDNN versions available on your device.
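The swap above can be sketched as the following shell commands (a sketch, assuming a pip-managed environment; the onnxruntime-gpu version you pin must match the CUDA/cuDNN versions on the machine, per the compatibility table on the requirements page linked in the error message):

```shell
# Remove the CPU-only package first; having both packages installed
# can shadow the GPU build and reproduce this exact error.
pip uninstall -y onnxruntime

# Install the GPU build; pin a release matching your CUDA/cuDNN install.
pip install onnxruntime-gpu

# Verify that the CUDA provider is now visible to the runtime.
python -c "import onnxruntime; print(onnxruntime.get_available_providers())"
```

If `CUDAExecutionProvider` still does not appear in the printed list, the remaining suspects are the CUDA toolkit and cuDNN shared libraries not being on the loader path.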