isarsoft / yolov4-triton-tensorrt

This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server
http://www.isarsoft.com

docker run error #16

Closed chiyukunpeng closed 3 years ago

chiyukunpeng commented 3 years ago

root@ubuntu:/home/cp/project/yolov5# docker run --gpus all --rm --shm-size=1g --ipc=host \
    --ulimit memlock=-1 --ulimit stack=67108864 \
    -p8000:8000 -p8001:8001 -p8002:8002 \
    -v$(pwd)/triton_deploy/models:/models \
    -v$(pwd)/triton_deploy/plugins:/plugins \
    --env LD_PRELOAD=/plugins/libmyplugins.so \
    nvcr.io/nvidia/tritonserver:20.09-py3 \
    tritonserver --model-repository=/models --strict-model-config=false \
    --grpc-infer-allocation-pool-size=16 --log-verbose 1

/bin/bash: error while loading shared libraries: libcudart.so.10.1: cannot open shared object file: No such file or directory
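One way to confirm the mismatch (a sketch reusing the plugin mount from the command above) is to check which CUDA runtime the preloaded plugin links against inside the 20.09 container:

docker run --gpus all --rm -v$(pwd)/triton_deploy/plugins:/plugins \
    nvcr.io/nvidia/tritonserver:20.09-py3 ldd /plugins/libmyplugins.so | grep cudart
# "libcudart.so.10.1 => not found" would mean the plugin was built against a
# CUDA 10.1 toolchain that this container does not ship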

philipp-schmidt commented 3 years ago

Make sure you built the engine files with the exact same TensorRT version as the one used by Triton Inference Server.

So in this case, use the TensorRT 20.09 container from the NGC registry.
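A minimal sketch of that workflow, assuming the repo's documented CMake build (the mount path is illustrative):

docker run --gpus all -it --rm -v$(pwd):/yolov4-triton-tensorrt \
    nvcr.io/nvidia/tensorrt:20.09-py3
# inside the container: build the plugin library and serialize the engine
cd /yolov4-triton-tensorrt
mkdir build && cd build
cmake .. && make
./main    # writes the engine and the plugin .so into the build directory

Because the plugin and engine now come out of the same 20.09 toolchain that Triton 20.09 ships, the CUDA runtime the plugin links against is present in the server container.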

Also make sure you meet the CUDA version requirements for the Triton version you are using; your host PC needs to be able to run CUDA 10.1.
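For example, a quick check of what the host driver supports:

# the "CUDA Version" reported here is the highest CUDA runtime the host
# driver supports; it must cover the CUDA version inside the container
nvidia-smi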

chiyukunpeng commented 3 years ago

Thanks!