cyrusbehr / tensorrt-cpp-api

TensorRT C++ API Tutorial

OpenCV 4.8 compatibility issue? #35

Closed: tonydavis629 closed this issue 7 months ago

tonydavis629 commented 7 months ago

I've been having a very difficult time setting up the proper environment to execute run_inference_benchmark. It seems to have an issue with my OpenCV build.

 davis@tony2:~/tensorrt-cpp-api/build$ ./run_inference_benchmark ../models/yolov8n.onnx 
Searching for engine file with name: yolov8n.engine.TeslaT4.fp16.1.1
Engine found, not regenerating...
terminate called after throwing an instance of 'cv::Exception'
  what():  OpenCV(4.8.0) /home/davisac1/tensorrt-cpp-api/scripts/opencv_contrib-4.8.0/modules/cudev/include/opencv2/cudev/grid/detail/transform.hpp:264: error: (-217:Gpu API call) no kernel image is available for execution on the device in function 'call'

Aborted (core dumped)

Searching for this error turns up a lot of people suggesting that the CUDA_ARCH_BIN value for the OpenCV build should be 8.0, 8.7, or something else. Changing that value doesn't seem to help. Any suggestions?

cyrusbehr commented 7 months ago

You can't set CUDA_ARCH_BIN to some arbitrary value. It needs to be set to the compute capability of your GPU, which you can look up here: https://developer.nvidia.com/cuda-gpus
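
If you're not sure what that value is for your machine, one way to check is to query the CUDA runtime directly. This is just a minimal sketch (not part of this repo); compile it with nvcc and run it on the target machine. A Tesla T4, for example, reports compute capability 7.5, so CUDA_ARCH_BIN would be 7.5.

```cpp
// check_compute_capability.cpp (illustrative sketch, not part of this repo)
// Build: nvcc check_compute_capability.cpp -o check_compute_capability
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaError_t err = cudaGetDeviceCount(&deviceCount);
    if (err != cudaSuccess || deviceCount == 0) {
        std::printf("No CUDA device found: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < deviceCount; ++i) {
        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, i);
        // e.g. a Tesla T4 reports major=7, minor=5 -> CUDA_ARCH_BIN=7.5
        std::printf("Device %d: %s, compute capability %d.%d\n",
                    i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```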

tonydavis629 commented 7 months ago

Also noting that I made it work by simply removing the CUDA_ARCH_BIN argument, as the build has a built-in compatibility check.