NVIDIA / TensorRT

NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
https://developer.nvidia.com/tensorrt
Apache License 2.0

Installation error (on multiple sites but specifically Colab this time) #2243

Open frankvp11 opened 2 years ago

frankvp11 commented 2 years ago

Description

I am trying to build TensorRT from source on Google Colab, and I've been running into some errors when I run `cmake ..`.

Environment

TensorRT Version: None yet
NVIDIA GPU: Colab GPU?
NVIDIA Driver Version: ?
CUDA Version: 11.1
CUDNN Version: ?
Operating System: Colab
Python Version (if applicable): N/A
Tensorflow Version (if applicable): N/A
PyTorch Version (if applicable): N/A
Baremetal or Container (if so, version): Attempting baremetal

Relevant Files

The TensorRT repo, and the GA package from the NVIDIA Developer Zone (as per the instructions).

Steps To Reproduce

Traceback:

```
/content/TensorRT/build
Building for TensorRT version: 8.4.1, library version: 8
-- Targeting TRT Platform: x86_64
-- CUDA version set to 11.3.1
-- cuDNN version set to 8.2
-- Protobuf version set to 3.0.0
-- Using libprotobuf /content/TensorRT/build/third_party.protobuf/lib/libprotobuf.a
CMake Error at CMakeLists.txt:128 (find_library_create_target):
  find_library_create_target Macro invoked with incorrect arguments for macro
  named: find_library_create_target

CMake Error at CMakeLists.txt:129 (find_library_create_target):
  find_library_create_target Macro invoked with incorrect arguments for macro
  named: find_library_create_target

-- GPU_ARCHS is not defined. Generating CUDA code for default SMs: 53;60;61;70;75;80;86
-- Protobuf proto/trtcaffe.proto -> proto/trtcaffe.pb.cc proto/trtcaffe.pb.h
-- /content/TensorRT/build/parsers/caffe
Generated: /content/TensorRT/build/parsers/onnx/third_party/onnx/onnx/onnx_onnx2trt_onnx-ml.proto
Generated: /content/TensorRT/build/parsers/onnx/third_party/onnx/onnx/onnx-operators_onnx2trt_onnx-ml.proto
Generated: /content/TensorRT/build/parsers/onnx/third_party/onnx/onnx/onnx-data_onnx2trt_onnx.proto
--
-- **** Summary ****
-- CMake version         : 3.22.6
-- CMake command         : /usr/local/lib/python3.7/dist-packages/cmake/data/bin/cmake
-- System                : Linux
-- C++ compiler          : /usr/bin/g++
-- C++ compiler version  : 7.5.0
-- CXX flags             : -Wno-deprecated-declarations -DBUILD_SYSTEM=cmake_oss -Wall -Wno-deprecated-declarations -Wno-unused-function -Wnon-virtual-dtor
-- Build type            : Release
-- Compile definitions   : _PROTOBUF_INSTALL_DIR=/content/TensorRT/build;SOURCE_LENGTH=18;ONNX_NAMESPACE=onnx2trt_onnx
-- CMAKE_PREFIX_PATH     :
-- CMAKE_INSTALL_PREFIX  : /content/TensorRT/build/..
-- CMAKE_MODULE_PATH     :
--
-- ONNX version          : 1.8.0
-- ONNX NAMESPACE        : onnx2trt_onnx
-- ONNX_BUILD_TESTS      : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNX_USE_LITE_PROTO   : OFF
-- ONNXIFI_DUMMY_BACKEND : OFF
-- ONNXIFI_ENABLE_EXT    : OFF
--
-- Protobuf compiler     :
-- Protobuf includes     :
-- Protobuf libraries    :
-- BUILD_ONNX_PYTHON     : OFF
-- Found CUDA headers at /usr/local/cuda/include
-- Found TensorRT headers at /content/TensorRT/include
-- Find TensorRT libs at TENSORRT_LIBRARY_INFER-NOTFOUND;TENSORRT_LIBRARY_INFER_PLUGIN-NOTFOUND
-- Could NOT find TENSORRT (missing: TENSORRT_LIBRARY)
ERROR: Cannot find TensorRT library.
ONNX_INCLUDE_DIR -- Adding new sample: sample_algorithm_selector -- - Parsers Used: caffe -- - InferPlugin Used: OFF -- - Licensing: samples
ONNX_INCLUDE_DIR -- Adding new sample: sample_char_rnn -- - Parsers Used: uff;caffe;onnx -- - InferPlugin Used: OFF -- - Licensing: samples
ONNX_INCLUDE_DIR -- Adding new sample: sample_dynamic_reshape -- - Parsers Used: onnx -- - InferPlugin Used: OFF -- - Licensing: samples
ONNX_INCLUDE_DIR -- Adding new sample: sample_fasterRCNN -- - Parsers Used: caffe -- - InferPlugin Used: ON -- - Licensing: samples
ONNX_INCLUDE_DIR -- Adding new sample: sample_googlenet -- - Parsers Used: caffe -- - InferPlugin Used: OFF -- - Licensing: samples
ONNX_INCLUDE_DIR -- Adding new sample: sample_int8 -- - Parsers Used: caffe -- - InferPlugin Used: ON -- - Licensing: samples
ONNX_INCLUDE_DIR -- Adding new sample: sample_int8_api -- - Parsers Used: onnx -- - InferPlugin Used: OFF -- - Licensing: samples
ONNX_INCLUDE_DIR -- Adding new sample: sample_mnist -- - Parsers Used: caffe -- - InferPlugin Used: OFF -- - Licensing: samples
ONNX_INCLUDE_DIR -- Adding new sample: sample_mnist_api -- - Parsers Used: caffe -- - InferPlugin Used: OFF -- - Licensing: samples
ONNX_INCLUDE_DIR -- Adding new sample: sample_onnx_mnist -- - Parsers Used: onnx -- - InferPlugin Used: OFF -- - Licensing: samples
ONNX_INCLUDE_DIR -- Adding new sample: sample_io_formats -- - Parsers Used: caffe -- - InferPlugin Used: OFF -- - Licensing: samples
ONNX_INCLUDE_DIR -- Adding new sample: sample_ssd -- - Parsers Used: caffe -- - InferPlugin Used: ON -- - Licensing: samples
ONNX_INCLUDE_DIR -- Adding new sample: sample_uff_fasterRCNN -- - Parsers Used: uff -- - InferPlugin Used: ON -- - Licensing: samples
ONNX_INCLUDE_DIR -- Adding new sample: sample_uff_maskRCNN -- - Parsers Used: uff -- - InferPlugin Used: ON -- - Licensing: samples
ONNX_INCLUDE_DIR -- Adding new sample: sample_uff_mnist -- - Parsers Used: uff -- - InferPlugin Used: OFF -- - Licensing: samples
ONNX_INCLUDE_DIR -- Adding new sample: sample_uff_plugin_v2_ext -- - Parsers Used: uff -- - InferPlugin Used: OFF -- - Licensing: samples
ONNX_INCLUDE_DIR -- Adding new sample: sample_uff_ssd -- - Parsers Used: uff -- - InferPlugin Used: ON -- - Licensing: samples
ONNX_INCLUDE_DIR -- Adding new sample: sample_onnx_mnist_coord_conv_ac -- - Parsers Used: onnx -- - InferPlugin Used: ON -- - Licensing: samples
ONNX_INCLUDE_DIR -- Adding new sample: trtexec -- - Parsers Used: caffe;uff;onnx -- - InferPlugin Used: ON -- - Licensing: samples
-- Configuring incomplete, errors occurred!
See also "/content/TensorRT/build/CMakeFiles/CMakeOutput.log".
See also "/content/TensorRT/build/CMakeFiles/CMakeError.log".
[... the same configure output is printed a second time when make re-runs cmake ...]
Makefile:783: recipe for target 'cmake_check_build_system' failed
make: *** [cmake_check_build_system] Error 1
```

Command:

```
!cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out
!make -j$(nproc)
```

Other Relevant Info: I got similar (if not the same) errors on other cloud platforms such as Kaggle. I'm focusing on Colab, though, since it's free and simple (for me). If you don't know how to fix this, advice on how to get TensorRT onto Colab would also be welcome. I don't want just "read the instructions on the home page", as that hasn't worked so far.

frankvp11 commented 2 years ago

CMakeError.txt CMakeOutput.txt — there are the two logs. Knock yourself out (figuratively, of course).

zerollzeng commented 2 years ago

From your CMakeError.txt:

```
/usr/local/lib/python3.7/dist-packages/cmake/data/bin/cmake -E cmake_link_script CMakeFiles/cmTC_64595.dir/link.txt --verbose=1
/usr/bin/g++ -Wno-deprecated-declarations -DBUILD_SYSTEM=cmake_oss -DCHECK_FUNCTION_EXISTS=pthread_create CMakeFiles/cmTC_64595.dir/CheckFunctionExists.cxx.o -o cmTC_64595 -lpthreads
/usr/bin/ld: cannot find -lpthreads
```

It's an env issue, please try to solve it on your own.
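
One way to sanity-check this on your own (a hedged sketch, not from the thread): the failing link line asks for `-lpthreads` with a trailing "s", which is how CMake's FindThreads module probes for threading support; the real Linux library is `libpthread`, so confirming that it loads tells you the toolchain itself is fine.

```python
import ctypes

# Illustrative check: on a glibc Linux system the pthread runtime is
# libpthread.so.0. If this loads, the "-lpthreads" failure in the CMake
# trial link is the probe misfiring, not a missing dependency.
libpthread = ctypes.CDLL("libpthread.so.0")
print("libpthread loads:", libpthread is not None)
```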

frankvp11 commented 2 years ago

Alright, thanks for letting me know. I'll look into it more.

frankvp11 commented 2 years ago

Hey @zerollzeng, I got something working in Colab. Sorry for the @, but I was just wondering: I got it installed (I think), but when I do import tensorrt I get a ModuleNotFoundError. However, import TensorRT works. Is that correct, and will it work? Subject to more testing.
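
A quick way to see which spelling is actually importable (illustrative sketch; note that Python module names are case-sensitive, and the official bindings install under lowercase "tensorrt"):

```python
import importlib.util

# find_spec reports whether each spelling resolves to an installed module
# without actually importing it. Which entries are True depends on what is
# installed in the current environment.
results = {name: importlib.util.find_spec(name) is not None
           for name in ("tensorrt", "TensorRT")}
print(results)
```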

frankvp11 commented 2 years ago

Also, !dpkg -l | grep TensorRT gives:

```
ii  graphsurgeon-tf          8.4.2-1+cuda11.6    amd64  GraphSurgeon for TensorRT package
ii  libnvinfer-bin           8.4.2-1+cuda11.6    amd64  TensorRT binaries
ii  libnvinfer-dev           8.4.2-1+cuda11.6    amd64  TensorRT development libraries and headers
ii  libnvinfer-plugin-dev    8.4.2-1+cuda11.6    amd64  TensorRT plugin libraries
ii  libnvinfer-plugin8       8.4.2-1+cuda11.6    amd64  TensorRT plugin libraries
ii  libnvinfer-samples       8.4.2-1+cuda11.6    all    TensorRT samples
ii  libnvinfer5              5.1.2-1+cuda10.0    amd64  TensorRT runtime libraries
ii  libnvinfer8              8.4.2-1+cuda11.6    amd64  TensorRT runtime libraries
ii  libnvonnxparsers-dev     8.4.2-1+cuda11.6    amd64  TensorRT ONNX libraries
ii  libnvonnxparsers8        8.4.2-1+cuda11.6    amd64  TensorRT ONNX libraries
ii  libnvparsers-dev         8.4.2-1+cuda11.6    amd64  TensorRT parsers libraries
ii  libnvparsers8            8.4.2-1+cuda11.6    amd64  TensorRT parsers libraries
ii  python3-libnvinfer       8.4.2-1+cuda11.6    amd64  Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev   8.4.2-1+cuda11.6    amd64  Python 3 development package for TensorRT
ii  tensorrt                 8.4.2.4-1+cuda11.6  amd64  Meta package for TensorRT
ii  uff-converter-tf         8.4.2-1+cuda11.6    amd64  UFF converter for TensorRT package
```

frankvp11 commented 2 years ago

^Also, does that mean I have the most recent version? The 8.4.2.4 that's in front of the +cuda11.6?
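
For what it's worth, the upstream TensorRT version is the part before the first dash in the Debian package version; a small hypothetical helper (not part of the thread) to pick the string apart:

```python
# Split a Debian package version like "8.4.2.4-1+cuda11.6" into the upstream
# TensorRT version, the package revision, and the CUDA suffix.
def parse_trt_version(pkg_version):
    upstream, _, rest = pkg_version.partition("-")
    revision, _, cuda = rest.partition("+")
    return upstream, revision, cuda

print(parse_trt_version("8.4.2.4-1+cuda11.6"))
# → ('8.4.2.4', '1', 'cuda11.6')
```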

frankvp11 commented 2 years ago

@zerollzeng, can I make a new post, something along the lines of "Importing on Colab", and share the script that I used? It might be useful for others. I can share it with you beforehand as well, if you'd like.

elvinagam commented 2 years ago

Similar issue.