microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

Support for pip wheel tensorrt #9986

Open JaheimLee opened 2 years ago

JaheimLee commented 2 years ago

Is your feature request related to a problem? Please describe. I installed the pip-wheel version of TensorRT in my conda env following this doc. The installation command is:

python -m pip install nvidia-tensorrt==8.0.3.4

I also ran the Python verification command and saw no error. But when I create a session with TensorrtExecutionProvider set, it raises:

2021-12-09 17:05:30.699233317 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:509 CreateExecutionProviderInstance] Failed to create TensorrtExecutionProvider. Please reference https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements to ensure all dependencies are met.

I'm not sure whether I did something wrong or whether using TensorRT this way just isn't supported. And if it isn't supported yet, would it be easy to add?
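That warning usually means the TensorRT EP's shared libraries could not be dlopen()'d. A quick diagnostic sketch (an editorial addition, not from the thread; assumes Linux and TensorRT 8 / cuDNN 8 sonames, adjust versions to your install) to check whether the dynamic linker can resolve them from LD_LIBRARY_PATH:

```python
import ctypes

def can_load(soname: str) -> bool:
    """Return True if the dynamic linker can resolve `soname`."""
    try:
        ctypes.CDLL(soname)
        return True
    except OSError:
        return False

# Soname versions here are assumptions for TensorRT 8.x / cuDNN 8.x.
for lib in ("libnvinfer.so.8", "libnvonnxparser.so.8", "libcudnn.so.8"):
    print(lib, "->", "found" if can_load(lib) else "NOT FOUND")
```

If any of these print NOT FOUND, the EP will fail to initialize exactly as in the log above.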

pommedeterresautee commented 2 years ago

Have you installed TensorRT from Nvidia repo first? (not the Python wrapper) https://developer.nvidia.com/nvidia-tensorrt-download

JaheimLee commented 2 years ago

> Have you installed TensorRT from Nvidia repo first? (not the Python wrapper) https://developer.nvidia.com/nvidia-tensorrt-download

No. Here is a note in doc:

Note: While the TensorRT packages also contain pip wheel files, those wheel files require the rest of the .deb or .rpm packages to be installed and will not work alone. The standalone pip-installable TensorRT wheel files differ in that they are fully self-contained and installable without any prior TensorRT installation or use of .deb or .rpm files.

It means the pip wheel alone should be enough.

jywu-msft commented 2 years ago

is this Linux or Windows? Are the tensorrt libraries in your LD_LIBRARY_PATH (on Linux) or PATH (on Windows) ?

JaheimLee commented 2 years ago

> is this Linux or Windows? Are the tensorrt libraries in your LD_LIBRARY_PATH (on Linux) or PATH (on Windows) ?

It's Ubuntu 18.04, and there is no TensorRT entry in my environment variables. My LD_LIBRARY_PATH was only set as:

export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

According to the doc, pip wheel file installation doesn't need to set LD_LIBRARY_PATH.

jywu-msft commented 2 years ago

can you add tensorrt libraries and cudnn libraries also to your LD_LIBRARY_PATH and try it?
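The suggestion above can be sketched as a shell snippet (the install locations here are hypothetical placeholders; substitute wherever TensorRT and cuDNN actually live on your machine):

```shell
# Hypothetical install locations -- adjust both paths to your system.
TRT_LIB=/opt/tensorrt/lib
CUDNN_LIB=/usr/local/cuda/lib64
# Prepend both, preserving any existing LD_LIBRARY_PATH.
export LD_LIBRARY_PATH="$TRT_LIB:$CUDNN_LIB${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```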

JaheimLee commented 2 years ago

I tried the tar-file installation and set LD_LIBRARY_PATH. It worked. But then other errors, caused by TensorRT, appeared:

getPluginCreator could not find plugin: NonZero version: 1
Error Code 9: Internal Error (Gather_1276: index to gather must be non-negative
Error Code 2: Internal Error (Builder failed while analyzing shapes.)

I noticed in your docs you said "If some operators in the model are not supported by TensorRT, ONNX Runtime will partition the graph and only send supported subgraphs to TensorRT execution provider." I set both the TensorRT EP and the CUDA EP, but the builder failed. Am I missing something?
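The partitioning sentence quoted above can be illustrated with a toy model (an editorial sketch, not ONNX Runtime's actual algorithm): consecutive runs of TRT-supported nodes become TRT subgraphs, and the rest fall back to the next provider (CUDA/CPU). The op names and the "supported" set below are illustrative only.

```python
def partition(nodes, supported):
    """Split a node sequence into (on_trt, nodes) runs by TRT support."""
    groups, current, current_on_trt = [], [], None
    for op in nodes:
        on_trt = op in supported
        if on_trt != current_on_trt and current:
            groups.append((current_on_trt, current))
            current = []
        current_on_trt = on_trt
        current.append(op)
    if current:
        groups.append((current_on_trt, current))
    return groups

# NonZero is unsupported in this hypothetical TRT build, so it splits the graph.
print(partition(["Conv", "Relu", "NonZero", "Gather", "Conv"],
                supported={"Conv", "Relu", "Gather"}))
# -> [(True, ['Conv', 'Relu']), (False, ['NonZero']), (True, ['Gather', 'Conv'])]
```

The errors in the log above are different: they come from the TensorRT builder failing *inside* a subgraph it was handed, not from the partitioning step itself.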

LeeJiangWei commented 2 years ago

@JaheimLee Hi, I'm facing exactly the same problem. How did you set your LD_LIBRARY_PATH and where are the installed tensorrt and cudnn libraries?

JaheimLee commented 2 years ago

> @JaheimLee Hi, I'm facing exactly the same problem. How did you set your LD_LIBRARY_PATH and where are the installed tensorrt and cudnn libraries?

Just follow the official CUDA and TensorRT installation guides. On Linux, CUDA is usually at '/usr/local/{YOUR CUDA DIR}' if you don't change the default setting in CUDA's .run file. TensorRT is more flexible; I installed it at /data/xxx/TensorRT-x.x.x.x. So my LD_LIBRARY_PATH is:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/data/xxx/TensorRT-x.x.x.x/lib

I just added that command to my ~/.bashrc file.

symphonylyh commented 2 years ago

Hi, have you figured out the reason? I have set the CUDA, cuDNN, and TensorRT paths in PATH and LD_LIBRARY_PATH as well, but it still gives me "Failed to create TensorrtExecutionProvider" and "Failed to create CUDAExecutionProvider" errors, just like yours.

OmniscienceAcademy commented 2 years ago

I have the same errors

panchgonzalez commented 2 years ago

This worked for me

  1. Find out where your tensorrt pip wheel was installed with pip show nvidia-tensorrt
    Name: nvidia-tensorrt
    Version: 8.0.3.4
    Summary: A high performance deep learning inference library
    Home-page: UNKNOWN
    Author: NVIDIA
    Author-email: None
    License: Proprietary
    Location: /usr/local/lib/python3.8/dist-packages  <<HERE>>
    Requires: nvidia-cudnn, nvidia-cuda-nvrtc, nvidia-cuda-runtime, nvidia-cublas
    Required-by: 
  2. Add path to LD_LIBRARY_PATH
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/python3.8/dist-packages/tensorrt/
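Step 1 above can also be done programmatically instead of parsing `pip show` output. A hedged helper (an editorial sketch, not part of onnxruntime or the TensorRT wheel; demonstrated on a stdlib package since `tensorrt` may not be installed):

```python
import importlib.util
from typing import Optional

def package_dir(pkg: str) -> Optional[str]:
    """Return the directory a pip-installed package unpacked to, or None."""
    spec = importlib.util.find_spec(pkg)
    if spec is None or not spec.submodule_search_locations:
        return None
    return list(spec.submodule_search_locations)[0]

# On a machine with the wheel installed, package_dir("tensorrt") would give
# the .../dist-packages/tensorrt directory to append to LD_LIBRARY_PATH.
print(package_dir("json"))
```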

JaheimLee commented 2 years ago

> This worked for me
>
> 1. Find out where your tensorrt pip wheel was installed with pip show nvidia-tensorrt
>     Name: nvidia-tensorrt
>     Version: 8.0.3.4
>     Summary: A high performance deep learning inference library
>     Home-page: UNKNOWN
>     Author: NVIDIA
>     Author-email: None
>     License: Proprietary
>     Location: /usr/local/lib/python3.8/dist-packages  <<HERE>>
>     Requires: nvidia-cudnn, nvidia-cuda-nvrtc, nvidia-cuda-runtime, nvidia-cublas
>     Required-by:
> 2. Add path to LD_LIBRARY_PATH
>     export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/python3.8/dist-packages/tensorrt/

Yeah, but why not use the tar-file installation if you need to add LD_LIBRARY_PATH anyway?

stentll commented 2 years ago

I'm having the same issue as OP and I'm very interested in some advice regarding this.

oza75 commented 3 days ago

> This worked for me
>
> 1. Find out where your tensorrt pip wheel was installed with pip show nvidia-tensorrt
>     Name: nvidia-tensorrt
>     Version: 8.0.3.4
>     Summary: A high performance deep learning inference library
>     Home-page: UNKNOWN
>     Author: NVIDIA
>     Author-email: None
>     License: Proprietary
>     Location: /usr/local/lib/python3.8/dist-packages  <<HERE>>
>     Requires: nvidia-cudnn, nvidia-cuda-nvrtc, nvidia-cuda-runtime, nvidia-cublas
>     Required-by:
> 2. Add path to LD_LIBRARY_PATH
>     export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/python3.8/dist-packages/tensorrt/

You may use export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/python3.10/dist-packages/tensorrt_libs/ instead. For tensorrt==10, the *.so files are inside that folder.
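To confirm which folder actually holds the shared objects (the layout moved from `tensorrt/` to `tensorrt_libs/` per the comment above), a small sketch (an editorial addition; the directory name is whatever your wheel version uses) that lists the lib*.so* files LD_LIBRARY_PATH needs to reach:

```python
from pathlib import Path

def shared_objects(libdir: str) -> list:
    """Names of lib*.so* files under `libdir`."""
    return sorted(p.name for p in Path(libdir).glob("lib*.so*"))

# e.g. shared_objects("/usr/local/lib/python3.10/dist-packages/tensorrt_libs")
# prints the libnvinfer* sonames if the directory is the right one; an empty
# list means LD_LIBRARY_PATH is pointing at the wrong folder.
```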