Open liuhao123t opened 1 year ago
After building, it runs successfully on CPU, but not on GPU.
It generates the following errors:
root@4dbec2b03d4e:/ssd/liuhao/yolov5-onnxruntime/build# ./yolo_ort --model_path ../models/yolov5m.onnx --image ../images/bus.jpg --class_names ../models/coco.names --gpu
Inference device: GPU
/onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:121 bool onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*) [with ERRTYPE = cudaError; bool THRW = true]
/onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:115 bool onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*) [with ERRTYPE = cudaError; bool THRW = true]
CUDA failure 101: invalid device ordinal ; GPU=0 ; hostname=4dbec2b03d4e ; expr=cudaSetDevice(info_.device_id);
My environment is: Ubuntu 18.04, CUDA 11.03, onnxruntime x64-gpu-1.8.0.
I solved this problem by changing the onnxruntime version to 1.10.0.