If I run the commands below, will a directory named `pwd` be created?
```shell
git clone -b release/6.0 https://github.com/nvidia/TensorRT
cd TensorRT/
git submodule update --init --recursive
```
After running the commands, a directory named `pwd` was created. I then continued with the commands below.
```shell
export TRT_SOURCE=`pwd`
cd $TRT_SOURCE
mkdir -p build && cd build
cmake .. -DGPU_ARCHS=61 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out -DCUDA_VERSION=10.0
```
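To illustrate what I mean about `pwd`: the backticks matter in shell. A minimal sketch (not specific to TensorRT) of the difference between the literal word `pwd` and command substitution:

```shell
#!/bin/sh
# Literal assignment: the variable holds the three-letter word "pwd".
TRT_SOURCE=pwd
echo "$TRT_SOURCE"          # prints: pwd

# Command substitution: the shell runs pwd and captures its output.
TRT_SOURCE=$(pwd)           # equivalent to the backtick form `pwd`
echo "$TRT_SOURCE"          # prints the current directory path

# So a literal -DTRT_BIN_DIR=pwd/out would make the build write into a
# directory literally named "pwd" under the build tree.
```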
It shows the following output:
Building for TensorRT version: 6.0.1.0, library version: 6.0.1
-- Targeting TRT Platform: x86_64
-- CUDA version set to 10.0
-- cuDNN version set to 7.5
-- Protobuf version set to 3.0.0
-- Found CUDA: /usr/local/cuda-10.0 (found suitable version "10.0", minimum required is "10.0")
-- Using libprotobuf /DATA/hhyang/tools/TensorRT/build/third_party.protobuf/lib/libprotobuf.a
-- ========================= Importing and creating target nvinfer ==========================
-- Looking for library nvinfer
-- Library that was found nvinfer_LIB_PATH-NOTFOUND
-- ========================= Importing and creating target nvuffparser ==========================
-- Looking for library nvparsers
-- Library that was found nvparsers_LIB_PATH-NOTFOUND
-- Protobuf proto/trtcaffe.proto -> proto/trtcaffe.pb.cc proto/trtcaffe.pb.h
-- /DATA/hhyang/tools/TensorRT/build/parsers/caffe
Summary
-- CMake version : 3.13.5
-- CMake command : /home/hhyang/cmake/bin/cmake
-- System : Linux
-- C++ compiler : /usr/bin/g++
-- C++ compiler version : 5.4.0
-- CXX flags : -Wno-deprecated-declarations -DBUILD_SYSTEM=cmake_oss -Wall -Wno-deprecated-declarations -Wno-unused-function -Wno-unused-but-set-variable -Wnon-virtual-dtor
-- Build type : Release
-- Compile definitions : _PROTOBUF_INSTALL_DIR=/DATA/hhyang/tools/TensorRT/build;ONNX_NAMESPACE=onnx2trt_onnx
-- CMAKE_PREFIX_PATH :
-- CMAKE_INSTALL_PREFIX : /usr/lib/aarch64-linux-gnu/..
-- CMAKE_MODULE_PATH :
-- ONNX version : 1.3.0
-- ONNX NAMESPACE : onnx2trt_onnx
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNX_USE_LITE_PROTO : OFF
-- ONNXIFI_DUMMY_BACKEND : OFF
-- Protobuf compiler :
-- Protobuf includes :
-- Protobuf libraries :
-- BUILD_ONNX_PYTHON : OFF
-- GPU_ARCH defined as 61. Generating CUDA code for SM 61
-- Found TensorRT headers at /DATA/hhyang/tools/TensorRT/include
-- Find TensorRT libs at /DATA/hhyang/tools/TensorRT-6.0.1.5/lib/libnvinfer.so;/DATA/hhyang/tools/TensorRT-6.0.1.5/lib/libnvinfer_plugin.so
-- Adding new sample: sample_char_rnn
-- - Parsers Used: none
-- - InferPlugin Used: OFF
-- - Licensing: opensource
-- Adding new sample: sample_dynamic_reshape
-- - Parsers Used: onnx
-- - InferPlugin Used: OFF
-- - Licensing: opensource
-- Adding new sample: sample_fasterRCNN
-- - Parsers Used: caffe
-- - InferPlugin Used: ON
-- - Licensing: opensource
-- Adding new sample: sample_googlenet
-- - Parsers Used: caffe
-- - InferPlugin Used: OFF
-- - Licensing: opensource
-- Adding new sample: sample_int8
-- - Parsers Used: caffe
-- - InferPlugin Used: ON
-- - Licensing: opensource
-- Adding new sample: sample_int8_api
-- - Parsers Used: onnx
-- - InferPlugin Used: OFF
-- - Licensing: opensource
-- Adding new sample: sample_mlp
-- - Parsers Used: caffe
-- - InferPlugin Used: OFF
-- - Licensing: opensource
-- Adding new sample: sample_mnist
-- - Parsers Used: caffe
-- - InferPlugin Used: OFF
-- - Licensing: opensource
-- Adding new sample: sample_mnist_api
-- - Parsers Used: caffe
-- - InferPlugin Used: OFF
-- - Licensing: opensource
-- Adding new sample: sample_movielens
-- - Parsers Used: uff
-- - InferPlugin Used: OFF
-- - Licensing: opensource
-- Adding new sample: sample_movielens_mps
-- - Parsers Used: uff
-- - InferPlugin Used: OFF
-- - Licensing: opensource
-- Adding new sample: sample_nmt
-- - Parsers Used: none
-- - InferPlugin Used: OFF
-- - Licensing: opensource
-- Adding new sample: sample_onnx_mnist
-- - Parsers Used: onnx
-- - InferPlugin Used: OFF
-- - Licensing: opensource
-- Adding new sample: sample_plugin
-- - Parsers Used: caffe
-- - InferPlugin Used: ON
-- - Licensing: opensource
-- Adding new sample: sample_reformat_free_io
-- - Parsers Used: caffe
-- - InferPlugin Used: OFF
-- - Licensing: opensource
-- Adding new sample: sample_ssd
-- - Parsers Used: caffe
-- - InferPlugin Used: ON
-- - Licensing: opensource
-- Adding new sample: sample_uff_fasterRCNN
-- - Parsers Used: uff
-- - InferPlugin Used: ON
-- - Licensing: opensource
-- Adding new sample: sample_uff_maskRCNN
-- - Parsers Used: uff
-- - InferPlugin Used: ON
-- - Licensing: opensource
-- Adding new sample: sample_uff_mnist
-- - Parsers Used: uff
-- - InferPlugin Used: OFF
-- - Licensing: opensource
-- Adding new sample: sample_uff_plugin_v2_ext
-- - Parsers Used: uff
-- - InferPlugin Used: OFF
-- - Licensing: opensource
-- Adding new sample: sample_uff_ssd
-- - Parsers Used: uff
-- - InferPlugin Used: ON
-- - Licensing: opensource
-- Adding new sample: trtexec
-- - Parsers Used: caffe;uff;onnx
-- - InferPlugin Used: ON
-- - Licensing: opensource
-- Configuring done
-- Generating done
-- Build files have been written to: /DATA/hhyang/tools/TensorRT/build
Things seemed to be going in the right direction, but when I ran the command below, an error occurred:
```shell
make nvinfer_plugin -j$(nproc)
```
make[3]: No rule to make target 'nvinfer_LIB_PATH-NOTFOUND', needed by 'plugin/CMakeFiles/nvinfer_plugin.dir/cmake_device_link.o'. Stop.
CMakeFiles/Makefile2:283: recipe for target 'plugin/CMakeFiles/nvinfer_plugin.dir/all' failed
make[2]: [plugin/CMakeFiles/nvinfer_plugin.dir/all] Error 2
CMakeFiles/Makefile2:295: recipe for target 'plugin/CMakeFiles/nvinfer_plugin.dir/rule' failed
make[1]: [plugin/CMakeFiles/nvinfer_plugin.dir/rule] Error 2
Makefile:238: recipe for target 'nvinfer_plugin' failed
make: [nvinfer_plugin] Error 2
I can't figure out the reason.
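If it helps, one sanity check I could run (just a sketch, reusing the `-DTRT_LIB_DIR` value from my cmake invocation above) is whether `libnvinfer.so` actually exists in the directory cmake was told to search, since the log shows `nvinfer_LIB_PATH-NOTFOUND`:

```shell
#!/bin/sh
# Check whether the directory passed as -DTRT_LIB_DIR really contains
# libnvinfer; the NOTFOUND entries in the cmake log suggest it does not.
TRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/
if ls "$TRT_LIB_DIR"/libnvinfer.so* >/dev/null 2>&1; then
    echo "libnvinfer found in $TRT_LIB_DIR"
else
    echo "libnvinfer NOT found in $TRT_LIB_DIR"
fi
```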
Any suggestion will be appreciated!
Thank you for your patience.