JaheimLee opened this issue 2 years ago
Have you installed TensorRT from Nvidia repo first? (not the Python wrapper) https://developer.nvidia.com/nvidia-tensorrt-download
No. Here is a note from the docs:
Note: While the TensorRT packages also contain pip wheel files, those wheel files require the rest of the .deb or .rpm packages to be installed and will not work alone. The standalone pip-installable TensorRT wheel files differ in that they are fully self-contained and installable without any prior TensorRT installation or use of .deb or .rpm files.
That means the pip wheel alone should be enough.
is this Linux or Windows? Are the tensorrt libraries in your LD_LIBRARY_PATH (on Linux) or PATH (on Windows) ?
It's Ubuntu 18.04, and no TensorRT environment variable is set. My LD_LIBRARY_PATH was only set as:
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
According to the doc, pip wheel file installation doesn't need to set LD_LIBRARY_PATH.
Can you also add the TensorRT and cuDNN libraries to your LD_LIBRARY_PATH and try again?
I tried the tar file installation and set LD_LIBRARY_PATH. It worked, but now TensorRT raises other errors:
getPluginCreator could not find plugin: NonZero version: 1
Error Code 9: Internal Error (Gather_1276: index to gather must be non-negative
Error Code 2: Internal Error (Builder failed while analyzing shapes.)
I noticed your documentation says: "If some operators in the model are not supported by TensorRT, ONNX Runtime will partition the graph and only send supported subgraphs to TensorRT execution provider." I set both the TensorRT EP and the CUDA EP, but the builder still failed. Am I missing something?
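The fallback behavior described in that quote depends on the order of the provider list passed to the session. A minimal sketch of that idea, where `preferred_order` is my own helper (not an onnxruntime API) and the provider name strings are the real ones:

```python
# Order the providers ONNX Runtime should try, falling back from
# TensorRT to CUDA to CPU; `preferred_order` is a hypothetical helper.
def preferred_order(available):
    """Keep only the providers that are available, in fallback order."""
    ranking = [
        "TensorrtExecutionProvider",
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ]
    return [p for p in ranking if p in available]

# Hedged usage (requires onnxruntime-gpu, so not executed here):
# import onnxruntime as ort
# providers = preferred_order(ort.get_available_providers())
# session = ort.InferenceSession("model.onnx", providers=providers)
# session.get_providers() then shows which providers were actually created.
```

If a provider fails to initialize (e.g. the TensorRT libraries cannot be loaded), ONNX Runtime logs the "Failed to create ...ExecutionProvider" warning and continues down this list.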
@JaheimLee Hi, I'm facing exactly the same problem. How did you set your LD_LIBRARY_PATH and where are the installed tensorrt and cudnn libraries?
Just follow the official CUDA and TensorRT installation guides. On Linux, CUDA usually lives at '/usr/local/{YOUR CUDA FILE}' if you don't change the default in CUDA's .run file. TensorRT is more flexible; I installed it at /data/xxx/TensorRT-x.x.x.x, so my LD_LIBRARY_PATH is:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/data/xxx/TensorRT-x.x.x.x/lib
I just added that line to my ~/.bashrc file.
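A typo in a ~/.bashrc export is easy to miss, so it can help to check that every directory listed in LD_LIBRARY_PATH actually exists. A stdlib-only sketch (`missing_entries` is my own helper; the example paths are the ones from this thread):

```python
import os

# Hypothetical helper: report LD_LIBRARY_PATH entries that do not exist
# on disk, so a mistyped export in ~/.bashrc is caught early.
def missing_entries(value: str) -> list:
    """Return the colon-separated entries that are not directories."""
    return [p for p in value.split(":") if p and not os.path.isdir(p)]

# Example (paths from this thread; results depend on your machine):
# missing_entries("/usr/local/cuda/lib64:/data/xxx/TensorRT-x.x.x.x/lib")
```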
Hi, have you figured out the reason? I have set the CUDA, cuDNN, and TensorRT paths in PATH and LD_LIBRARY_PATH as well, but I still get "Failed to create TensorrtExecutionProvider" and "Failed to create CUDAExecutionProvider" errors, just like yours.
I have the same errors
This worked for me:

- Find out where your `tensorrt` pip wheel was installed with `pip show nvidia-tensorrt`:

  Name: nvidia-tensorrt
  Version: 8.0.3.4
  Summary: A high performance deep learning inference library
  Home-page: UNKNOWN
  Author: NVIDIA
  Author-email: None
  License: Proprietary
  Location: /usr/local/lib/python3.8/dist-packages <<HERE>>
  Requires: nvidia-cudnn, nvidia-cuda-nvrtc, nvidia-cuda-runtime, nvidia-cublas
  Required-by:

- Add that path to `LD_LIBRARY_PATH`:

  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/python3.8/dist-packages/tensorrt/
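The `Location:` that `pip show` prints can also be found from Python without shelling out. A sketch using stdlib importlib, where `package_dir` is my own helper and `tensorrt` is the module name the wheel installs in this thread:

```python
import importlib.util
import os

# Hypothetical helper: locate the on-disk directory of an importable
# package, i.e. the path you would append to LD_LIBRARY_PATH here.
def package_dir(module_name: str):
    """Return the package's directory, or None if it is not installed."""
    spec = importlib.util.find_spec(module_name)
    if spec is None or spec.origin is None:
        return None
    return os.path.dirname(spec.origin)

# The export line could then be generated as:
# print(f'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:{package_dir("tensorrt")}/')
```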
Yeah, but why not use the tar file installation if you need to add LD_LIBRARY_PATH anyway?
I'm having the same issue as OP and would be very interested in advice on this.
You may use export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/python3.10/dist-packages/tensorrt_libs/ instead. The *.so files are inside that folder for tensorrt==10.
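Before editing LD_LIBRARY_PATH it is worth confirming the shared objects are actually in that folder. A stdlib sketch (`shared_objects` is my own helper; the tensorrt_libs path is the one from this thread and will differ per install):

```python
import glob
import os

# Hypothetical helper: list the shared-object file names directly
# inside a directory, e.g. the tensorrt_libs folder of the pip wheel.
def shared_objects(directory: str) -> list:
    """Return sorted *.so* file names found in `directory`."""
    return sorted(os.path.basename(p)
                  for p in glob.glob(os.path.join(directory, "*.so*")))

# Example (path from this thread):
# shared_objects("/usr/local/lib/python3.10/dist-packages/tensorrt_libs/")
# should list the libnvinfer libraries for tensorrt==10.
```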
Is your feature request related to a problem? Please describe. I installed the pip-wheel version of TensorRT in my conda env following this doc:. The installation command is:
I also verified the Python test command and there were no errors. But when I create a session and set TensorrtExecutionProvider, it raises:
I'm not sure whether I did something wrong or whether using TensorRT this way isn't supported. If it isn't supported yet, would it be easy to add? System information