Open Nightnightlight opened 10 months ago
I think this is the same for SD1.5 ONNX/TRT models as of the 0.2.0 release of the NVIDIA TensorRT extension in Automatic1111. I'm not sure what breaking bugs were introduced, but it's been a pretty painful release.
See these two issues with similar results: https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT/issues/230 and https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT/issues/248
I'm using an SD1.5 model and having the same problem. No matter what image resolution I set, like static 768x768 or dynamic 512~1024 with opt=768, it always says "Enabling PyTorch fallback as no engine was found" and generates the normal way. However, default mode works, which is really strange. My picture output setting stays unchanged at 768x768.
I've tried a couple of arguments and found it may be caused by max batch size 4 (which I had changed to 1) and opt token count 75 (which I had changed to 150). After keeping those two arguments at their defaults, I can change the resolution to 768 and 1024 and TensorRT works fine.
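For anyone wondering why mismatched settings would trigger the fallback: below is a hypothetical Python sketch of how an engine-profile lookup could work. It is illustrative only, not the extension's actual implementation, and all names and fields in it are assumptions. The idea is that an engine is only usable when every runtime dimension falls inside the min/max bounds it was built with, so changing batch size or token count at build time can silently make the lookup fail at generation time.

```python
# Hypothetical engine-profile lookup -- illustrative only, not the
# extension's real code. Names and fields here are assumptions.
from dataclasses import dataclass

@dataclass
class Profile:
    min_bs: int      # batch-size bounds baked in at engine build time
    max_bs: int
    min_hw: int      # height/width bounds
    max_hw: int
    min_tokens: int  # prompt-token bounds
    max_tokens: int

def find_engine(profiles: dict[str, Profile], batch: int,
                height: int, width: int, tokens: int) -> str | None:
    """Return the first engine whose build-time profile covers the request."""
    for name, p in profiles.items():
        if (p.min_bs <= batch <= p.max_bs
                and p.min_hw <= min(height, width)
                and max(height, width) <= p.max_hw
                and p.min_tokens <= tokens <= p.max_tokens):
            return name
    return None  # -> "Enabling PyTorch fallback as no engine was found"

# An engine built with its token profile moved to 150 never covers a
# short (75-token) prompt at generation time:
profiles = {"sd15_custom": Profile(1, 1, 512, 1024, 150, 150)}
print(find_engine(profiles, batch=1, height=768, width=768, tokens=75))  # None

# With the defaults (max batch 4, 75 tokens), the same request matches:
profiles["sd15_default"] = Profile(1, 4, 512, 1024, 75, 75)
print(find_engine(profiles, batch=1, height=768, width=768, tokens=75))  # sd15_default
```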
So max batch size should be 4 and the opt token count must be 75?
That's speculation. I didn't try every combination because it would have been too time-consuming. Maybe the problem is one of them, or some bad combination of them. Anyway, if you aren't curious about which combination works and just want a larger scale, I recommend 512, 768, 960, 1024, 1152, and 1536. I've tested these myself; they're nice multiples of 64, produce better pictures in my experience, and pass through without trouble.
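If you want to enumerate every resolution that satisfies the multiple-of-64 rule inside a dynamic range, here is a trivial helper (my own convenience sketch, not part of the extension):

```python
# List every multiple-of-64 resolution inside a dynamic engine range.
# Convenience sketch only; the 512/768/960/1024/1152/1536 set recommended
# above is a hand-picked subset of this list.
def multiples_of_64(lo: int = 512, hi: int = 1536) -> list[int]:
    start = (lo + 63) // 64 * 64  # round lo up to the next multiple of 64
    return list(range(start, hi + 1, 64))

print(multiples_of_64())  # [512, 576, 640, ..., 1472, 1536]
```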
Batch size 4 and opt token 75 also worked for me. There's definitely a bug going on.
Same problem.
Is there any update on this?
I'm having this issue as well.
Warning: Enabling PyTorch fallback as no engine was found.
Same here. Still no answer from NVIDIA??
This issue has existed for months. Could someone please have a look into it? @NVIDIA @contentis @w-e-w @Rudra-Ji @andrewtvuong @AetherMagee @Zn10plays @shinshin86 @MorkTheOrk @eltociear @Milly
@hellomrmarky They only do advertising and don't even assign a single person to look after the repo, and they're a trillion-dollar company with a total monopoly and bad practices at the moment.
God damnit, NVIDIA really is bullshit. They have the means, they advertise this as a perk of buying their overpriced consumer GPUs and CSE, and then they just dead-end this shit when there's real potential. When it works, it works soooo good! COME ON!!!!!!!!!!!!!!!!
It's a shame no one feels the urge or need to fix this issue. Bumping this again... @NVIDIA @contentis @w-e-w @Rudra-Ji @andrewtvuong @AetherMagee @Zn10plays @shinshin86 @MorkTheOrk @eltociear @Milly
Can you please stop pinging every single contributor to this repo?
TensorRT is pretty much dead, and there's no point in disturbing unrelated people who can't do much about it.
It doesn't appear to be dead. I just read an article saying it had recently been implemented in ComfyUI to work with Flux. I wish it weren't such an awful, unnecessary UI. How did that design ever take off? It reminds me of those shitty coding-block IDEs.
SwarmUI is a good alternative; it has a ComfyUI backend with an Automatic1111-style UI. I've been wondering the same thing about the engines.
It just keeps saying "Enabling PyTorch fallback as no engine was found" when I try to use a TensorRT engine for an SDXL model. I've tried reinstalling the extension, deleting the venv and starting over, and rebuilding the engines several times. It's SDXL, 1024x1024 min and max, batch size 1 min/max, 75 min and 750 max prompt length. Every extension is turned off except TensorRT. SD Unet is set to Automatic, though I also tried selecting the model itself, which still did not work. Every other setting is default on a fresh Automatic1111 install.
Note: After much testing, it seems TensorRT for SDXL simply cannot support more than a 75-token max, period. If you build an engine with a max token count higher than the default 75, it refuses to use that engine.
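One possible explanation, and this is only my assumption rather than anything the maintainers have confirmed: A1111 encodes prompts in 75-token chunks that become 77-entry CLIP sequences (75 tokens plus BOS/EOS), so an engine built for a non-default max token count may expect a text-embedding sequence length the runtime never actually produces. A toy Python sketch of that arithmetic:

```python
# Hypothetical illustration of why a >75-token engine might never match.
# A1111 chunks prompts so the CLIP sequence length is a multiple of 77
# (75 tokens + BOS/EOS per chunk). This mirrors that arithmetic; it is an
# assumption, not the extension's actual code.
def clip_seq_len(prompt_tokens: int, chunk: int = 75) -> int:
    chunks = max(1, -(-prompt_tokens // chunk))  # ceiling division
    return chunks * (chunk + 2)                  # 77 entries per chunk

print(clip_seq_len(75))   # 77
print(clip_seq_len(150))  # 154

# If the engine's text-embedding profile expects a sequence length the
# runtime never produces, the lookup fails and the PyTorch fallback kicks in.
```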