Closed deepkyu closed 9 months ago
@deepkyu that plugin is outdated, there is a newer plugin here: https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT
but it requires TRT 8.6 minimum (which will be coming in JetPack 6 - so stay tuned)
Yeah, I used your links from the commits to install cuDNN 8.9 and TensorRT 8.6. The problem is that the .so needs an updated glibc, which is very important, so I need Ubuntu 22.04 to run it… I'm very hyped for the launch of JetPack 6.
@johnnynunez did you use the ARM SBSA installers, and did they actually work to run TensorRT?
It doesn't work: importing tensorrt fails because it requires a newer glibc. I tried to update glibc, but it corrupts the system. I understand that partners already have access under NDA. cuDNN, however, works fine.
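A quick way to confirm the mismatch is to print the glibc the system actually provides and compare it against the "GLIBC_x.yy not found" version named in the import error. This is a minimal sketch; the specific version numbers in the comments are assumptions based on the Ubuntu releases underlying JetPack 5 and 6:

```python
# Print the glibc version of the running system. A tensorrt wheel built
# against a newer glibc fails at import with "GLIBC_x.yy not found".
# Ubuntu 20.04 (JetPack 5) ships glibc 2.31; Ubuntu 22.04 (JetPack 6)
# ships glibc 2.35.
import ctypes

libc = ctypes.CDLL("libc.so.6")
libc.gnu_get_libc_version.restype = ctypes.c_char_p
print(libc.gnu_get_libc_version().decode())  # e.g. "2.31" on Ubuntu 20.04
```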
@dusty-nv I use Miniconda for ARM with Python 3.11 to compile and use all the libraries. Whisper with the new CUDA and cuDNN works fine.
Hi @dusty-nv ,
Thank you for your response. I expected there would be some way to run TensorRT Stable Diffusion inference with the current JetPack, but I'll wait for JetPack 6 :)
As you mentioned, I have tested converting models with both TensorRT extensions on my personal GPU (RTX 3090):
- TensorRT 9.0.1.post11.dev4
- TensorRT 8.6.1.6
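A small sketch of the version gate implied above (the 8.6 floor comes from the earlier comment; the try/except probe is just one way you might check an environment, not the extension's actual code):

```python
# Probe the installed TensorRT and compare it against the 8.6 minimum
# that the NVIDIA webui extension requires (per the thread above).
def meets_trt_minimum(version: str, minimum=(8, 6)) -> bool:
    # "9.0.1.post11.dev4" -> (9, 0); "8.5.2" -> (8, 5)
    return tuple(int(x) for x in version.split(".")[:2]) >= minimum

try:
    import tensorrt
    ok = meets_trt_minimum(tensorrt.__version__)
    print(tensorrt.__version__, "OK" if ok else "too old for the extension")
except ImportError:
    print("tensorrt is not installed")
```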
Thanks to you and the Jetson Generative AI Lab, I also finished testing PyTorch + xFormers inference on the Jetson AGX Orin (32GB) with the compressed Stable Diffusion model from my team.
I respect your work a lot, and I'll wait for the next JetPack release with a more recent TensorRT.
I'll close this issue.
Hi,
First of all, thank you for this incredible repository.
Following the docs (https://github.com/dusty-nv/jetson-containers/blob/7f2a9dcc116ccf14e5b95776019e0a10cddf9336/packages/diffusion/stable-diffusion-webui/README.md?plain=1#L9), I tried to deploy and run stable-diffusion-webui on my AGX Orin device. I succeeded in running inference with the torch checkpoint using xFormers, but I got an error when I converted the safetensors checkpoint with the TensorRT extension, automatic1111/stable-diffusion-webui-tensorrt.
I found out that the current version of TensorRT in JetPack 5.1.2 is [8.5.2](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#rel-8-5-2), and this version doesn't support converting the LayerNormalization operator. I tested the same version of TensorRT on my desktop GPU, and it failed to convert to a .trt checkpoint for the same reason. Could you let me know how you convert the Stable Diffusion checkpoint into TRT and run inference on the Jetson AGX Orin?
Thanks in advance.