basujindal / stable-diffusion

Optimized Stable Diffusion modified to run on lower GPU VRAM

about inference time #214

Open xlc-github opened 1 year ago

xlc-github commented 1 year ago

Hi, thank you for your code. I have tested your optimized_txt2img.py, and the inference time is indeed about 24-26 seconds per image. Could the inference time drop to 14-16 seconds if the SD model were split into two parts? If so, how can I do this? Thank you again!
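
For context, a minimal sketch of the staged-loading idea the question refers to: moving one part of the pipeline onto the GPU at a time and swapping it back out afterward. The module names (`text_encoder`, `unet_denoise`, `vae_decode`) and their call signatures here are hypothetical placeholders, not the actual optimizedSD API; the repo's real split differs in detail.

```python
import torch

device = torch.device("cuda")

def generate_staged(prompt, text_encoder, unet_denoise, vae_decode):
    """Run each stage on the GPU in turn, freeing VRAM between stages.

    All three arguments are hypothetical torch.nn.Module-style callables
    standing in for the pieces of a split Stable Diffusion model.
    """
    # Stage 1: text encoder on GPU just long enough to build conditioning.
    text_encoder.to(device)
    cond = text_encoder(prompt)
    text_encoder.to("cpu")
    torch.cuda.empty_cache()  # release VRAM before loading the next stage

    # Stage 2: denoising UNet on GPU for the sampling loop.
    unet_denoise.to(device)
    latents = unet_denoise(cond)
    unet_denoise.to("cpu")
    torch.cuda.empty_cache()

    # Stage 3: VAE on GPU to decode latents into the final image.
    vae_decode.to(device)
    image = vae_decode(latents)
    vae_decode.to("cpu")
    return image
```

Note that this kind of split is a VRAM optimization rather than a speed one: each `.to(device)` / `.to("cpu")` swap adds CPU-GPU transfer time, which is part of why the staged scripts are slower than keeping the whole model resident.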