What GPU is installed? (You can check by running nvidia-smi -L on the command line.) Usually this happens because no GPU is available or the GPU is too old.
Description

I built onnxruntime (v1.8.2) from source following these instructions, and used the following command to build and install:
I'm using the linear-regression.onnx model, downloaded from here, to perform inference in C++ on a GPU.

System information
To Reproduce

When I compile the following piece of code with

g++ -o run1 inference.cpp -I/usr/local/include/onnxruntime/core/session/ -lonnxruntime

I don't get any compilation errors.
But when I execute ./run1 after successful compilation, I get

The code runs fine on the CPU, i.e. without the lines

OrtCUDAProviderOptions cuda_options{0};
sessionOptions.AppendExecutionProvider_CUDA(cuda_options);

The issue only occurs when running inference on the GPU. Any suggestion would be helpful!