cyrusbehr / tensorrt-cpp-api

TensorRT C++ API Tutorial

NvInfer.h: No such file or directory #14

Closed · pva22 closed this 1 year ago

pva22 commented 1 year ago

I try to run: g++ /home/ubuntu/tensorrt-cpp-api/src/main.cpp

and get the error:

In file included from /home/ubuntu/tensorrt-cpp-api/src/main.cpp:1:
/home/ubuntu/tensorrt-cpp-api/src/engine.h:7:10: fatal error: NvInfer.h: No such file or directory
    7 | #include "NvInfer.h"
      |          ^~~~~~~~~~~
compilation terminated.

NvInfer.h is located in /home/ubuntu/TensorRT-8.6.1.6/include.

pva22 commented 1 year ago

I run: g++ /home/ubuntu/tensorrt-cpp-api/src/main.cpp -I/home/ubuntu/TensorRT-8.6.1.6/include

and get a new error:

/home/ubuntu/TensorRT-8.6.1.6/include/NvInferRuntimeBase.h:19:10: fatal error: cuda_runtime_api.h: No such file or directory
   19 | #include <cuda_runtime_api.h>
      |          ^~~~~~~~~~~~~~~~~~~~
compilation terminated.

What am I doing wrong?

pva22 commented 1 year ago

This works for me:

sudo g++ /home/ubuntu/tensorrt-cpp-api/src/main.cpp `pkg-config opencv --cflags --libs` -I/home/ubuntu/TensorRT-8.6.1.6/include -I/usr/local/cuda-11.8/include

cyrusbehr commented 1 year ago

@pva22 use the provided CMakeLists.txt file and follow the build instructions in the readme file

pva22 commented 1 year ago

> @pva22 use the provided CMakeLists.txt file and follow the build instructions in the readme file

I have done all the steps, but I don't know how to run main.cpp. I run:

sudo g++ /home/ubuntu/tensorrt-cpp-api/src/main.cpp `pkg-config opencv --cflags --libs` -I/home/ubuntu/TensorRT-8.6.1.6/include -I/usr/local/cuda-11.8/include

And get:

In file included from /home/ubuntu/TensorRT-8.6.1.6/include/NvInferRuntimeCommon.h:27,
                 from /home/ubuntu/TensorRT-8.6.1.6/include/NvInferLegacyDims.h:16,
                 from /home/ubuntu/TensorRT-8.6.1.6/include/NvInfer.h:16,
                 from /home/ubuntu/tensorrt-cpp-api/src/engine.h:7,
                 from /home/ubuntu/tensorrt-cpp-api/src/main.cpp:1:
/home/ubuntu/TensorRT-8.6.1.6/include/NvInferRuntimePlugin.h:97:22: note: declared here
   97 | class TRT_DEPRECATED IPluginV2
      |                      ^~~~~~~~~
/usr/bin/ld: /tmp/ccVp3p1Q.o: in function `main':
main.cpp:(.text+0xb7): undefined reference to `Engine::Engine(Options const&)'
/usr/bin/ld: main.cpp:(.text+0x124): undefined reference to `Engine::build(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'
/usr/bin/ld: main.cpp:(.text+0x18b): undefined reference to `Engine::loadNetwork()'
/usr/bin/ld: main.cpp:(.text+0x63c): undefined reference to `Engine::runInference(std::vector<std::vector<cv::cuda::GpuMat, std::allocator<cv::cuda::GpuMat> >, std::allocator<std::vector<cv::cuda::GpuMat, std::allocator<cv::cuda::GpuMat> > > > const&, std::vector<std::vector<std::vector<float, std::allocator<float> >, std::allocator<std::vector<float, std::allocator<float> > > >, std::allocator<std::vector<std::vector<float, std::allocator<float> >, std::allocator<std::vector<float, std::allocator<float> > > > > >&, std::array<float, 3ul> const&, std::array<float, 3ul> const&, bool)'
/usr/bin/ld: main.cpp:(.text+0x6ff): undefined reference to `Engine::runInference(std::vector<std::vector<cv::cuda::GpuMat, std::allocator<cv::cuda::GpuMat> >, std::allocator<std::vector<cv::cuda::GpuMat, std::allocator<cv::cuda::GpuMat> > > > const&, std::vector<std::vector<std::vector<float, std::allocator<float> >, std::allocator<std::vector<float, std::allocator<float> > > >, std::allocator<std::vector<std::vector<float, std::allocator<float> >, std::allocator<std::vector<float, std::allocator<float> > > > > >&, std::array<float, 3ul> const&, std::array<float, 3ul> const&, bool)'
/usr/bin/ld: main.cpp:(.text+0xaee): undefined reference to `Engine::~Engine()'
/usr/bin/ld: main.cpp:(.text+0xcf1): undefined reference to `Engine::~Engine()'
collect2: error: ld returned 1 exit status

cyrusbehr commented 1 year ago

@pva22 I would first brush up on your understanding of compiling vs. running an executable (it seems that knowledge is lacking; the command you are trying to run manually compiles an executable, it does not run one).
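
For context, the undefined references in your output are a linker problem: your command compiles only main.cpp, so the Engine methods defined in engine.cpp never make it into the link. A manual invocation would have to compile both sources and link the TensorRT and CUDA libraries, roughly like the sketch below (the library list and -L paths here are assumptions, not the project's exact settings; the provided CMakeLists.txt handles all of this for you):

# Hedged sketch of a manual build; library names and paths are assumptions.
# The supported route is the provided CMakeLists.txt.
g++ src/main.cpp src/engine.cpp `pkg-config opencv --cflags --libs` \
    -I/home/ubuntu/TensorRT-8.6.1.6/include -I/usr/local/cuda-11.8/include \
    -L/home/ubuntu/TensorRT-8.6.1.6/lib -L/usr/local/cuda-11.8/lib64 \
    -lnvinfer -lnvonnxparser -lcudart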

After running these commands: (screenshot of the build commands)
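
For reference, a typical out-of-source CMake build looks like the following (the README's instructions are authoritative; this is a sketch assuming you start from the repository root):

# Standard out-of-source CMake workflow, run from the repository root.
mkdir build
cd build
cmake ..
make -j$(nproc)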

You will see it generates the following artifacts: (screenshot of the generated build artifacts)

If you look at the CMakeLists.txt file here you will see that we name our executable driver.

To run it, execute ./driver.

pva22 commented 1 year ago

@cyrusbehr thanks for the explanation

I run ./driver and get:

CUDA Module Loading Mode is eager
Searching for engine file with name: arcfaceresnet100-8.engine.NVIDIAA2.fp16.1.1.4000000000
Engine not found, generating. This could take a while...
CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
terminate called after throwing an instance of 'std::runtime_error'
  what():  Unable to build TRT engine.
Aborted (core dumped)

I am using TensorRT 8.6.1.6 on a Tesla A2 with CUDA 11.8. Can you please tell me which version of TensorRT you are using?

cyrusbehr commented 1 year ago

Your TensorRT and CUDA versions are correct. Please change the verbosity level, recompile, and re-run, then paste the full output: https://github.com/cyrusbehr/tensorrt-cpp-api#how-to-debug

all-for-code commented 8 months ago

> Your TensorRT and CUDA versions are correct. Please change the verbosity level, recompile, and re-run, then paste the full output: https://github.com/cyrusbehr/tensorrt-cpp-api#how-to-debug

@cyrusbehr Hi, my CUDA lazy loading is not enabled: (screenshot). I am using TensorRT 8.6.0.12 on an RTX 3090 with CUDA 12.0. How do I solve this problem?

cyrusbehr commented 8 months ago

This is not a problem, only a warning about a potential speed-up. That being said, just click the link to the right of the warning and it will tell you how to enable it. You just need to export an environment variable: (screenshot)
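
Per the "Lazy Loading" section of the CUDA documentation linked in the warning, the variable shown in the screenshot is presumably:

# Enable CUDA lazy loading (available since CUDA 11.7); see the "Lazy Loading"
# section of the CUDA C++ Programming Guide linked in the warning.
export CUDA_MODULE_LOADING=LAZY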