Closed yagelgen closed 2 years ago
I'm trying to run YOLOv4 (Demo 5) from the TensorRT demos repo on AWS EC2.
I created an EC2 VM with an NVIDIA GPU (AMI: Amazon Linux 2 AMI with NVIDIA TESLA GPU Driver), which has:
NVIDIA-SMI 450.119.01   Driver Version: 450.119.01   CUDA Version: 11.0
On this EC2 instance I pulled and entered the official TensorRT container with:
sudo docker run --gpus all -it -v /home/ec2-user/player-detection:/home nvcr.io/nvidia/tensorrt:20.02-py3 bash
I did the following steps:

python3 -m pip install --upgrade setuptools pip
python3 -m pip install nvidia-pyindex
pip install nvidia-tensorrt

In the yolo/ folder, I ran:

pip3 install -r requirements.txt
pip3 install onnx==1.9.0

In the plugins/ folder, I ran:

make

Back in the yolo/ folder, I ran:

./download_yolo.sh
python3 yolo_to_onnx.py -m yolov4
python3 onnx_to_tensorrt.py -m yolov4
I got the following error for the python3 onnx_to_tensorrt.py -m yolov4 command:
"RuntimeError: cannot get YoloLayer_TRT plugin creator"
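For context, the failing step is a lookup of the plugin creator by name. The following is a hypothetical, stdlib-only sketch of that pattern (not the repo's exact code): the converter scans the names of TensorRT's registered plugin creators and raises when the YOLO plugin was never registered.

```python
# Hypothetical sketch of the failing lookup (not the repo's exact code):
# scan registered plugin-creator names and raise RuntimeError when
# "YoloLayer_TRT" is not among them.
def get_plugin_creator(registered_names, plugin_name="YoloLayer_TRT"):
    for name in registered_names:
        if name == plugin_name:
            return name
    raise RuntimeError("cannot get %s plugin creator" % plugin_name)

# With the plugin registered, the lookup succeeds:
print(get_plugin_creator(["GridAnchor_TRT", "YoloLayer_TRT"]))  # YoloLayer_TRT
# Without it (as in this container), the RuntimeError above is raised.
```

So the error means the compiled libyolo_layer.so never registered its creator with the TensorRT plugin registry in this environment.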
From reading https://github.com/jkjung-avt/tensorrt_demos/issues/476 it seems the problem is related to dynamic libraries.
I tried to view the libraries that I have, and got:
$ ldd libyolo_layer.so
    linux-vdso.so.1 (0x00007fff142a4000)
    libnvinfer.so.7 => /usr/lib/x86_64-linux-gnu/libnvinfer.so.7 (0x00007f9673734000)
    libcudart.so.11.0 => /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudart.so.11.0 (0x00007f96734af000)
    libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f9673126000)
    libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f9672f0e000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f9672b1d000)
    libcudnn.so.8 => /usr/lib/x86_64-linux-gnu/libcudnn.so.8 (0x00007f96728f4000)
    libmyelin.so.1 => /usr/lib/x86_64-linux-gnu/libmyelin.so.1 (0x00007f9672074000)
    libnvrtc.so.11.1 => /usr/local/cuda-11.1/targets/x86_64-linux/lib/libnvrtc.so.11.1 (0x00007f966feac000)
    librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f966fca4000)
    libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f966faa0000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f966f702000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f9699135000)
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f966f4e3000)
    libcublas.so.11 => /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcublas.so.11 (0x00007f9668008000)
    libcublasLt.so.11 => /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcublasLt.so.11 (0x00007f965a23e000)
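One way to sanity-check output like this is to scan it for the CUDA toolkit versions the shared object actually links against and compare them with what nvidia-smi reports. A stdlib-only sketch (the version strings below are taken from the output above; the helper name is made up for illustration):

```python
import re

def cuda_versions_in_ldd(ldd_output):
    """Collect CUDA toolkit versions referenced by resolved library paths."""
    return set(re.findall(r"/usr/local/cuda-(\d+\.\d+)/", ldd_output))

# Two representative lines from the ldd output above:
ldd_output = """
libcudart.so.11.0 => /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudart.so.11.0
libnvrtc.so.11.1 => /usr/local/cuda-11.1/targets/x86_64-linux/lib/libnvrtc.so.11.1
"""

driver_cuda = "11.0"  # CUDA version nvidia-smi reports on this host
linked = cuda_versions_in_ldd(ldd_output)
print(sorted(linked))         # ['11.1'] -> the .so was built against CUDA 11.1
print(driver_cuda in linked)  # False -> version mismatch with the driver
```

Here the plugin resolves against CUDA 11.1 paths while the driver advertises CUDA 11.0, which is exactly the kind of build/runtime mismatch that can leave a plugin unregistered.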
It seems that some are missing, and when I printed all the registered plugins, I didn't see YoloLayer_TRT.
Any idea how to solve it?
The solution was: use the newer :21.10-py3 tag of the TensorRT container, and point TENSORRT_INCS at /usr/include/x86_64-linux-gnu/NvInfer* and TENSORRT_LIBS at /usr/lib/x86_64-linux-gnu/libnvinfer* (where that image keeps the TensorRT headers and libraries) before rebuilding the plugin.
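Reconstructed from the fragments above, the fix amounts to building inside the newer image and pointing the plugin Makefile's variables at where that image actually keeps the TensorRT headers and libraries. The exact Makefile lines are an assumption, shown only to illustrate the shape of the change:

```makefile
# plugins/Makefile (sketch; exact lines are an assumption):
# point the TensorRT variables at the paths shipped in tensorrt:21.10-py3
TENSORRT_INCS = -I"/usr/include/x86_64-linux-gnu"   # NvInfer* headers
TENSORRT_LIBS = -L"/usr/lib/x86_64-linux-gnu"       # libnvinfer* libraries
```

After adjusting the paths, a clean rebuild of the plugin (make clean && make in plugins/) should produce a libyolo_layer.so that registers YoloLayer_TRT.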
Thanks for sharing. I think this would benefit others.