AUTOMATIC1111 / stable-diffusion-webui-tensorrt

MIT License
310 stars · 20 forks

Anyone managed to setup/use on linux? #43

Closed yuvraj108c closed 1 year ago

yuvraj108c commented 1 year ago

Getting error while trying to convert onnx model to trt:

Unable to open library: libnvinfer_plugin.so.8 due to libcublas.so.11: cannot open shared object file: No such file or directory

Full trace:

&&&& FAILED TensorRT.trtexec [TensorRT v8601] # /home/user/stable-diffusion-webui/extensions/stable-diffusion-webui-tensorrt/TensorRT-8.6.1.6/bin/trtexec --onnx=models/Unet-onnx/v1-5-pruned-emaonly.onnx --saveEngine=/home/user/stable-diffusion-webui/models/Unet-trt/v1-5-pruned-emaonly.trt --minShapes=x:2x4x64x64,context:2x77x768,timesteps:2 --maxShapes=x:2x4x64x64,context:2x77x768,timesteps:2 --fp16
Error completing request
Arguments: ('', 'models/Unet-onnx/v1-5-pruned-emaonly.onnx', 1, 1, 75, 75, 512, 512, 512, 512, True, '') {}
Traceback (most recent call last):
  File "/home/user/stable-diffusion-webui/modules/call_queue.py", line 55, in f
    res = list(func(*args, **kwargs))
  File "/home/user/stable-diffusion-webui/modules/call_queue.py", line 35, in f
    res = func(*args, **kwargs)
  File "/home/user/stable-diffusion-webui/extensions/stable-diffusion-webui-tensorrt/ui_trt.py", line 69, in convert_onnx_to_trt
    launch.run(command, live=True)
  File "/home/user/stable-diffusion-webui/modules/launch_utils.py", line 107, in run
    raise RuntimeError("\n".join(error_bits))
RuntimeError: Error running command.
Command: "/home/user/stable-diffusion-webui/extensions/stable-diffusion-webui-tensorrt/TensorRT-8.6.1.6/bin/trtexec" --onnx="models/Unet-onnx/v1-5-pruned-emaonly.onnx" --saveEngine="/home/user/stable-diffusion-webui/models/Unet-trt/v1-5-pruned-emaonly.trt" --minShapes=x:2x4x64x64,context:2x77x768,timesteps:2 --maxShapes=x:2x4x64x64,context:2x77x768,timesteps:2 --fp16
Error code: 1
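When the loader reports "cannot open shared object file", ldd can list which of the plugin's dependencies are actually unresolved. A minimal sketch, assuming the extension unpacked TensorRT to the path shown in the trace above (adjust TRT_LIB to your install):

```shell
# Hypothetical location of the bundled TensorRT libraries; adjust to your setup.
TRT_LIB="$HOME/stable-diffusion-webui/extensions/stable-diffusion-webui-tensorrt/TensorRT-8.6.1.6/lib"

# ldd resolves each NEEDED entry of the shared object; lines containing
# "not found" name exactly the libraries the dynamic loader cannot locate.
if [ -f "$TRT_LIB/libnvinfer_plugin.so.8" ]; then
    ldd "$TRT_LIB/libnvinfer_plugin.so.8" | grep "not found" || echo "all dependencies resolved"
else
    echo "TensorRT lib dir not present at $TRT_LIB"
fi
```

Each "not found" line points at a library that must be installed or added to the loader search path.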

CyberTimon commented 1 year ago

I get the exact same issue. I'm on Ubuntu 22.04.

CyberTimon commented 1 year ago

I fixed the issue: run sudo apt install nvidia-cudnn. Hope it helps!

yuvraj108c commented 1 year ago

When I run sudo apt install nvidia-cudnn, I get E: Unable to locate package nvidia-cudnn. I used pip install nvidia-cudnn instead, and it installed properly.

However, my issue still persists when converting the model from ONNX to TRT.

WudiJoey commented 1 year ago

Run vi ~/.bashrc, add the line export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/user/stable-diffusion-webui/extensions/stable-diffusion-webui-tensorrt/TensorRT-8.6.1.6/lib to ~/.bashrc, then run source ~/.bashrc.
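For the current shell, the same effect can be had without editing ~/.bashrc; a minimal sketch, assuming the TensorRT path from the comment above (substitute your own username and install path):

```shell
# Prepend the bundled TensorRT lib dir to the dynamic loader search path
# for this shell session only (hypothetical path; adjust to your install).
export LD_LIBRARY_PATH="$HOME/stable-diffusion-webui/extensions/stable-diffusion-webui-tensorrt/TensorRT-8.6.1.6/lib:$LD_LIBRARY_PATH"

# The loader searches LD_LIBRARY_PATH entries left to right, so confirm
# the TensorRT dir is now first:
echo "$LD_LIBRARY_PATH" | cut -d: -f1
```

Note this only affects processes started from that shell, so the webui must be launched from the same terminal afterwards.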

yuvraj108c commented 1 year ago

@WudiJoey still getting same error 😔

WudiJoey commented 1 year ago

[Auto-reply, translated from Chinese] Hello, I have received your message and will reply as soon as possible.

yuvraj108c commented 1 year ago

Managed to get it to work using these commands:
sudo apt install nvidia-cudnn
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/user/stable-diffusion-webui/extensions/stable-diffusion-webui-tensorrt/TensorRT-8.6.1.6/lib

ruler88 commented 11 months ago

Still not working for me :( I have LD_LIBRARY_PATH set to the directory where TensorRT is extracted, and the second command below shows that the directory contains the libnvinfer_plugin.so.8 file.

I also ran sudo apt install nvidia-cudnn successfully. I'm on Ubuntu 22.04.

user@ee696f21-2097-470e-9ab2-ce6cef01eaff:~/stable-diffusion-webui$ echo $LD_LIBRARY_PATH
/home/user/stable-diffusion-webui/extensions/stable-diffusion-webui-tensorrt/TensorRT-8.6.1.6/lib:/usr/local/cuda/lib64:
user@ee696f21-2097-470e-9ab2-ce6cef01eaff:~/stable-diffusion-webui$ ls /home/user/stable-diffusion-webui/extensions/stable-diffusion-webui-tensorrt/TensorRT-8.6.1.6/lib
libnvcaffe_parser.a         libnvinfer.so.8.6.1                   libnvinfer_lean.so        libnvinfer_plugin.so.8.6.1     libnvinfer_vc_plugin_static.a  libnvparsers.so.8
libnvcaffe_parser.so        libnvinfer_builder_resource.so.8.6.1  libnvinfer_lean.so.8      libnvinfer_plugin_static.a     libnvonnxparser.so             libnvparsers.so.8.6.1
libnvcaffe_parser.so.8      libnvinfer_dispatch.so                libnvinfer_lean.so.8.6.1  libnvinfer_static.a            libnvonnxparser.so.8           libnvparsers_static.a
libnvcaffe_parser.so.8.6.1  libnvinfer_dispatch.so.8              libnvinfer_lean_static.a  libnvinfer_vc_plugin.so        libnvonnxparser.so.8.6.1       libonnx_proto.a
libnvinfer.so               libnvinfer_dispatch.so.8.6.1          libnvinfer_plugin.so      libnvinfer_vc_plugin.so.8      libnvonnxparser_static.a       stubs
libnvinfer.so.8             libnvinfer_dispatch_static.a          libnvinfer_plugin.so.8    libnvinfer_vc_plugin.so.8.6.1  libnvparsers.so

The full error message I'm getting is this:

Uncaught exception detected: Unable to open library: libnvinfer_plugin.so.8 due to libcublas.so.12: cannot open shared object file: No such file or directory
&&&& FAILED TensorRT.trtexec [TensorRT v8601] # /home/user/stable-diffusion-webui/extensions/stable-diffusion-webui-tensorrt/TensorRT-8.6.1.6/bin/trtexec --onnx=models/Unet-onnx/realisticVisionV51_v51VAE.onnx --saveEngine=/home/user/stable-diffusion-webui/models/Unet-trt/realisticVisionV51_v51VAE.trt --minShapes=x:2x4x64x64,context:2x77x768,timesteps:2 --maxShapes=x:2x4x64x64,context:2x77x768,timesteps:2 --fp16
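Note that this trace asks for libcublas.so.12 (a CUDA 12 library), not the libcublas.so.11 from the original report, so the apt cudnn package alone may not supply it. If CUDA libraries were installed via pip, the wheels land under site-packages rather than a system path; a sketch for locating them, assuming the nvidia-cublas-cu12 wheel is installed (package name and layout are assumptions, not verified against this setup):

```shell
# Search pip's site-packages for the cublas lib dir shipped by NVIDIA wheels
# (assumed layout: site-packages/nvidia/cublas/lib/libcublas.so.12).
CUBLAS_DIR=$(python3 -c "import site, glob, os; \
hits=[p for r in site.getsitepackages() for p in glob.glob(os.path.join(r,'nvidia','cublas','lib'))]; \
print(hits[0] if hits else '')")

if [ -n "$CUBLAS_DIR" ]; then
    # Put the wheel's lib dir on the loader search path for this shell.
    export LD_LIBRARY_PATH="$CUBLAS_DIR:$LD_LIBRARY_PATH"
    echo "added $CUBLAS_DIR to LD_LIBRARY_PATH"
else
    echo "no pip-installed cublas found; try: pip install nvidia-cublas-cu12"
fi
```

If found, launch the webui from the same shell so trtexec inherits the updated path.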