Closed · pinilpy closed 1 year ago
I can add it to the wiki, if someone wants to have a go at it feel free, otherwise it's on my list of things to document so I'll get round to it
Thanks!
I've got a GeForce GTX 1050 Ti (4GB) and can run basujindal's repo and its built-in Gradio UI just fine, but even without the face-restoration / upscaling models in place, something about this repo's stack pushes the VRAM requirements from workable to not. I can just barely get the UI running with the tips in this issue, but can't generate a single iteration of an image. I've spent some time exploring the Dockerfiles and .py files to try to figure out the difference between basujindal's txt2img_gradio.py and this repo's webui.py, but I can't tell where the extra resource use is coming from; it's just not my area of expertise. Can't hurt to mention it though, and great work! I'm still glad this amazing repo is available.
I did some tests on my 3050 Ti mobile and found that you can run Stable Diffusion with the webui on 4GB of VRAM (barely: when generating images at 512x512, I'm less than 20MB from running out of memory).
It would be nice if you could add a small section to the readme for people starved of VRAM. There are a few steps needed to get it working.
Add PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 before the command, as well as --optimized to the end (ex. PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 python scripts/webui.py --optimized).
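Put together, the launch looks something like the sketch below. This assumes the scripts/webui.py entrypoint and the --optimized flag mentioned above; the exact flag names may differ in your checkout, so treat it as a template rather than a verified command.

```shell
# Allocator tuning from this thread: cap PyTorch's CUDA block splits at
# 128 MB to reduce fragmentation on low-VRAM cards.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

# The actual launch (commented out so this snippet runs standalone):
# python scripts/webui.py --optimized

# Confirm the setting is visible to child processes such as Python:
echo "$PYTORCH_CUDA_ALLOC_CONF"
```

Exporting the variable (rather than prefixing it on one command line) keeps it in effect if a relauncher script restarts the UI.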
You could also edit relauncher.py to include these options. (To reach the UI from another device, use the IP address of the host, then add a colon and the port number, ex. 192.168.X.XX:7860.)
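As a small illustration of the last tip, the URL is just the host's LAN address plus the port (7860 is Gradio's default). The address below is a placeholder; substitute your own, e.g. from `ip addr` on Linux or `ipconfig` on Windows.

```shell
# Placeholder values: replace HOST_IP with your machine's LAN address.
HOST_IP="192.168.X.XX"   # hypothetical address, find yours on the host
PORT=7860                # Gradio's default port for the webui

# This is the URL to open in a browser on another device:
echo "http://${HOST_IP}:${PORT}"
```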