NVIDIA-AI-IOT / deepstream_tao_apps

Sample apps to demonstrate how to deploy models trained with TAO on DeepStream
MIT License

Build TensorRT Plugin (libnvinfer_plugin.so.7.1.3) error: No CMAKE_CUDA_COMPILER could be found #31

Closed Ryan-ZL-Lin closed 3 years ago

Ryan-ZL-Lin commented 3 years ago

Hi, I tried to follow the instructions HERE to build the plugin on Jetson. Here is my environment:

Device: Jetson Nano
JetPack: 4.4
TensorRT: 7.1.3
CUDA: 10.2

I upgraded CMake to 3.13.5 and got an error while running the following command under "~/TensorRT/build":

```shell
/usr/local/bin/cmake .. -DGPU_ARCHS="53 62 72" -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
```

Here is the error:

```
Building for TensorRT version: 7.1.3, library version: 7
-- The CUDA compiler identification is unknown
CMake Error at CMakeLists.txt:46 (project):
  No CMAKE_CUDA_COMPILER could be found.

  Tell CMake where to find the compiler by setting either the environment
  variable "CUDACXX" or the CMake cache entry CMAKE_CUDA_COMPILER to the full
  path to the compiler, or to the compiler name if it is in the PATH.

-- Configuring incomplete, errors occurred!
See also "/home/jetbot/TensorRT/build/CMakeFiles/CMakeOutput.log".
See also "/home/jetbot/TensorRT/build/CMakeFiles/CMakeError.log".
```

Does anyone know the possible root causes? Thanks in advance.
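[Editor's note] The CMake error itself points at the usual fix: CMake cannot locate `nvcc`, so tell it where the CUDA compiler is. A minimal sketch, assuming the JetPack-default CUDA install path `/usr/local/cuda` (a symlink to the versioned directory, e.g. `/usr/local/cuda-10.2`) — adjust the path to your setup:

```shell
# Point CMake at nvcc via the environment variable the error message names.
export CUDACXX=/usr/local/cuda/bin/nvcc

# Re-run the configure step from ~/TensorRT/build, passing the compiler
# explicitly as a cache entry as well (either mechanism alone should work):
/usr/local/bin/cmake .. \
  -DGPU_ARCHS="53 62 72" \
  -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ \
  -DCMAKE_C_COMPILER=/usr/bin/gcc \
  -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc \
  -DTRT_BIN_DIR=`pwd`/out
```

If `nvcc` lives elsewhere, `which nvcc` (after adding the CUDA `bin` directory to `PATH`) will show the correct path to use.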

Ryan-ZL-Lin commented 3 years ago

Problem is solved. Instead of building the plugin myself, I downloaded the precompiled .so file from https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/tree/master/TRT-OSS/Jetson/TRT7.1
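[Editor's note] For readers taking the same route: the prebuilt library has to replace the stock `libnvinfer_plugin.so.7.1.3` shipped with JetPack. A sketch of the swap, assuming the JetPack-default library location `/usr/lib/aarch64-linux-gnu/` and that the downloaded file sits in the current directory:

```shell
# Back up the stock TensorRT plugin library before overwriting it.
sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3 \
        /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3.bak

# Install the downloaded prebuilt plugin and refresh the linker cache.
sudo cp libnvinfer_plugin.so.7.1.3 /usr/lib/aarch64-linux-gnu/
sudo ldconfig
```

Keeping the `.bak` copy makes it easy to revert if the replacement library misbehaves.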