wizz13150 opened 1 year ago
The only things still to do (as far as I can tell):
export_onnx.py
and the 'attention optimization' deactivation, which seems unrelated to my commits. Not sure yet how to handle that one, tho. 🤨🤔 I won't touch it; I just set the 'SD Unet' setting to 'None' to un-link it during conversion.
I can see people here and there converting to TRT at 640*640 with batch size 2. I don't understand this, as it exceeds the limit for me. If anyone has an answer, please share. Example, using an RTX 4070 Ti with 12GB VRAM, result = 32.3 it/s: https://www.youtube.com/watch?v=bT7SaMkgNEY&t=505s
Hey,
Repost of the previous messy PRs.
A batch mode would be appreciated.
Obviously not a Python expert, lol.
Convert everything in 5-8 clicks:
This way, using the default folders, it will convert all the files in the 'Stable-Diffusion' folder to .onnx in the 'Unet-onnx' folder, then automatically convert those to .trt in the 'Unet-trt' folder. Kind of queued. And voilà! The 'Unet-onnx' and 'Unet-trt' folders end up populated with all the available .ckpt and .safetensors models from the 'Stable-Diffusion' folder.
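The queued batch flow described above could be sketched roughly like this. This is a minimal illustration, not the extension's actual code: `convert_to_onnx` / `convert_to_trt` are hypothetical stand-ins for the real converters, and the folder names just mirror the defaults mentioned in this PR.

```python
# Hypothetical sketch of the batch-queue logic: collect every .ckpt and
# .safetensors checkpoint in the Stable-Diffusion folder, skip the ones
# that already have a matching .onnx, and convert the rest in order.
from pathlib import Path

MODEL_EXTS = {".ckpt", ".safetensors"}

def queue_models(sd_dir, onnx_dir):
    """Return checkpoints in sd_dir that have no matching .onnx yet."""
    queued = []
    for ckpt in sorted(Path(sd_dir).iterdir()):
        if ckpt.suffix.lower() not in MODEL_EXTS:
            continue  # ignore non-model files
        if (Path(onnx_dir) / (ckpt.stem + ".onnx")).exists():
            continue  # already converted, skip
        queued.append(ckpt)
    return queued

def run_batch(sd_dir, onnx_dir, trt_dir, to_onnx, to_trt):
    """Convert each queued checkpoint to ONNX, then that ONNX to TensorRT.

    to_onnx/to_trt are placeholder callables for the real converters.
    """
    for ckpt in queue_models(sd_dir, onnx_dir):
        onnx_path = Path(onnx_dir) / (ckpt.stem + ".onnx")
        to_onnx(ckpt, onnx_path)
        to_trt(onnx_path, Path(trt_dir) / (ckpt.stem + ".trt"))
```

Skipping checkpoints that already have an .onnx output is what makes re-running the batch cheap: only newly added models get converted.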
Screenshot of the 'Convert to ONNX' tab:
Screenshot of the 'Convert ONNX to TensorRT' tab:
I mean, it worked for me, lol.
Cheers ! 🥂