YAFU closed this issue 2 years ago
Do you use any low-VRAM options like --lowvram or --medvram? Without them it should not be possible to run the model on 4 GB. Even on my 8 GB VRAM card I was only able to run the model at 512x640 with the default options.
Please look at the guides in the parent repository:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#4gb-videocard-support
https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Optimizations
https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Run-with-Custom-Parameters
webui.bat and webui.sh in this repo should accept all the options that AUTOMATIC1111's scripts do.
Thank you very much, that did the trick. I edited the webui-user.sh file with the line:
export COMMANDLINE_ARGS="--lowvram"
and now the python3 process uses only about 400 MiB of vRAM when the server starts.
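For anyone with a bit more VRAM headroom, --medvram is usually faster than --lowvram, and the upstream wiki lists further optimization flags that can be combined in the same variable. A hedged sketch of an alternative webui-user.sh line (the exact flag combination that works best depends on your card):

```shell
# webui-user.sh — example only; --medvram trades less speed for VRAM
# than --lowvram, and --opt-split-attention reduces attention memory use.
export COMMANDLINE_ARGS="--medvram --opt-split-attention"
```

Both flags are documented in the AUTOMATIC1111 wiki pages linked above.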
Hi. I'm on Linux. I successfully installed the Krita plugin and all dependencies with ./webui.sh. Then, to use the plugin, I ran ./webui.sh in a terminal, and when the server initializes it takes 2.5 GB of vRAM before Krita is even open. My GPU has 4 GB of vRAM, so I get CUDA out of memory even with 200x200 px images.
Is it correct that the server uses so much vRAM just by initializing itself?
When I run ./webui.sh, at the end it shows:
According to nvidia-smi, python3 is using 2469 MiB.
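To see exactly which process is holding that memory (rather than reading the full nvidia-smi table), a per-process query can be used; this is standard nvidia-smi usage, not specific to this repo:

```shell
# List GPU compute processes with their memory usage in CSV form,
# so the webui server's python3 process can be identified directly.
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv
```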