cyrusbehr / tensorrt-cpp-api

TensorRT C++ API Tutorial
MIT License
543 stars 66 forks

opencv compilation error #32

Closed · all-for-code closed this 7 months ago

all-for-code commented 7 months ago
I downloaded the Docker image from nvidia/cuda with the command: sudo docker pull nvidia/cuda:12.0.0-cudnn8-devel-ubuntu22.04

After creating a container from this image, I can't find the file '/usr/local/cuda/lib64/libcudnn.so' inside the container. How can I solve this problem?
cyrusbehr commented 7 months ago

Hi @all-for-code

You need to modify the CUDNN_INCLUDE_DIR and CUDNN_LIBRARY parameters in your script to point to the correct directory.

[screenshot: the CUDNN_INCLUDE_DIR and CUDNN_LIBRARY settings in the build script]

Here's a tip for the future. If you don't know where a library is located, you can search your entire file system to find it: find / | grep libcudnn.so

In the Docker image you provided, this returns /usr/lib/x86_64-linux-gnu/libcudnn.so.

If you search for the header with find / | grep cudnn.h, it returns /usr/include/cudnn.h.

You therefore need to set CUDNN_INCLUDE_DIR=/usr/include/ and CUDNN_LIBRARY=/usr/lib/x86_64-linux-gnu/libcudnn.so.
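The two steps above (locating the files, then pointing the build at them) can be sketched as follows. The exact CMake variable names match the snippets in this thread, but the invocation itself is an assumption about how your build script passes them through:

```shell
# Locate the cuDNN library and header anywhere on the file system.
# 2>/dev/null hides permission-denied noise from /proc and similar.
find / -name 'libcudnn.so*' 2>/dev/null
find / -name 'cudnn.h' 2>/dev/null

# Pass the discovered paths to CMake as cache variables
# (run from your build directory, pointing at the source tree).
cmake -DCUDNN_INCLUDE_DIR=/usr/include/ \
      -DCUDNN_LIBRARY=/usr/lib/x86_64-linux-gnu/libcudnn.so ..
```

Using find with -name is a bit faster and less noisy than piping the whole listing through grep, but both approaches work.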

all-for-code commented 7 months ago

Hi @cyrusbehr, thanks for your reply! I used another method to solve this problem. Although I did not try the method you suggested, it should also be effective.

[screenshot of the alternative fix]

cyrusbehr commented 7 months ago

Yeah either one works. At the end of the day, you need to point CUDNN_INCLUDE_DIR and CUDNN_LIBRARY to the correct locations, that's all.

all-for-code commented 7 months ago

Hi @cyrusbehr, I have another question. I compiled OpenCV 4.8 using build_open.sh, but I ran into problems when using the C++ inference code provided by ultralytics. CPU inference works fine, but the CUDA inference output is null. So I suspect there is a compilation problem with OpenCV 4.8 or an OpenCV version mismatch. I have submitted an issue to ultralytics; do you have any good suggestions on this issue? Thank you very much! Code link: https://github.com/ultralytics/ultralytics/tree/main/examples/YOLOv8-ONNXRuntime-CPP