anapnoe / stable-diffusion-webui-ux


[Feature Request]: TensorRT support. #77

Open bropines opened 1 year ago

bropines commented 1 year ago

Is there an existing issue for this?

What would your feature do ?

Is it possible to add TensorRT support like here: https://github.com/ddPn08/Lsmith ? If so, why not add it, since this method speeds up generation on RTX 20/30/40 series graphics cards by a factor of 2 or more.

Proposed workflow

Add support for TensorRT.

Additional information

No response

anapnoe commented 1 year ago

Yes, this is really cool. I need to find some more time to look into it, thanks.

midcoastal commented 1 year ago

I imagine this would/may come around the same time as any Vlad merging? Looking forward to this. I bought a 4070 Ti, and while it is doing better than my 1080 Ti was, for sure, I know I am not getting the best out of it. Considering doing some compiling myself... but we know how that goes... ;-)

rushuna86 commented 1 year ago

The TensorRT feature right now is very limited in what it can generate, or of limited use TBH. Automatic1111 has an extension now that supports this feature, but the conversion process has a limit on "size" during conversion, and the converted TensorRT UNet file is again restricted at generation time to the image size/batch it was converted with; LoRA doesn't work on it, and neither does hires fix. So unless you're generating plain, as-is images it's not very useful yet. Wait for NVIDIA's release of the implementation. I went from 47-49 it/s up to 87 it/s on a batch of 4, so the performance isn't really the 2x that's hyped.
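
For context on why the converted UNet ends up locked to one resolution and batch size: the usual path is to export the UNet to ONNX with concrete input shapes and then build a TensorRT engine from that graph, so the baked-in shapes become hard constraints at inference time. Below is a minimal sketch of that flow, not the extension's actual code; it assumes a diffusers `UNet2DConditionModel` with SD 1.x dimensions (77x768 text embeddings), and the function names, the 512x512/batch-1 defaults, and the omission of details like >2 GB ONNX external-data handling are all illustrative assumptions.

```python
# Illustrative sketch only: fixed-shape ONNX export of a Stable Diffusion UNet,
# followed by a TensorRT engine build. The static shapes chosen here are what
# make the resulting engine usable only at that exact latent size and batch.
import torch
import tensorrt as trt
from diffusers import UNet2DConditionModel


class _UNetWrapper(torch.nn.Module):
    """Wrap the UNet so it returns a plain tensor, which ONNX export prefers."""

    def __init__(self, unet):
        super().__init__()
        self.unet = unet

    def forward(self, sample, timestep, encoder_hidden_states):
        return self.unet(sample, timestep, encoder_hidden_states, return_dict=False)[0]


def export_unet_onnx(model_id: str, onnx_path: str,
                     batch: int = 1, height: int = 512, width: int = 512):
    unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet").eval()
    # Latents are 1/8 of the image resolution; 77x768 matches the SD 1.x CLIP encoder.
    sample = torch.randn(batch, 4, height // 8, width // 8)
    timestep = torch.tensor([1.0])
    text_emb = torch.randn(batch, 77, 768)
    torch.onnx.export(
        _UNetWrapper(unet), (sample, timestep, text_emb), onnx_path,
        input_names=["sample", "timestep", "encoder_hidden_states"],
        output_names=["noise_pred"],
        opset_version=17,
    )


def build_engine(onnx_path: str, engine_path: str):
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))
    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)  # FP16 is where most of the RTX speedup comes from
    serialized = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(serialized)
```

Because the exported graph has no dynamic axes, changing the image size, batch, or the UNet weights (e.g. baking in a LoRA) means re-exporting and rebuilding the engine, which is essentially the limitation described above.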