lllyasviel / stable-diffusion-webui-forge

GNU Affero General Public License v3.0

torch.OutOfMemoryError: Allocation on device #1991

Open byteconcepts opened 3 weeks ago

byteconcepts commented 3 weeks ago

I'm desperate: it crashes on every second generation on a 4090 with 24 GB. Sometimes even the __pycache__ files get corrupted and all have to be deleted, and sometimes there are segfaults.

Please return to a state where clicking the generate button starts a run with freshly cleared (CUDA) GPU memory, respecting the GUI entry for "GPU Weights (MB)".

After that generation, all memory used, CPU and GPU, should be cleared again!

This would let users who also run, for example, ollama use it to enhance prompts in the meantime.

Imho this is rule no. 1 for AI programs running on consumer boxes: always clear the memory you used for a task once it completes, so the user can also run other local AI programs or models that need a lot of GPU resources.
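As a rough illustration of what I mean by "clear the memory after the task", here is a minimal sketch using PyTorch's public CUDA APIs (`torch.cuda.empty_cache` and `torch.cuda.ipc_collect`); the function name `free_gpu_memory` is just my own example, not anything Forge actually exposes:

```python
import gc


def free_gpu_memory():
    """Release Python garbage, then ask PyTorch to return cached CUDA memory.

    Note: empty_cache() only releases *unused* cached blocks back to the
    driver; tensors still referenced by the app keep their memory, so the
    gc.collect() pass beforehand matters.
    """
    gc.collect()  # drop unreachable tensors so their blocks become cached/free
    try:
        import torch  # assumed available in a Forge install
        if torch.cuda.is_available():
            torch.cuda.empty_cache()   # return cached allocator blocks to the driver
            torch.cuda.ipc_collect()   # clean up CUDA IPC handles, if any
    except ImportError:
        pass  # no torch in this environment; nothing to free
```

Running something like this after each generation is essentially what I'm asking the UI to do automatically, so other GPU consumers (e.g. ollama) see the VRAM as free.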

Please... I simply do not want to use Comfy; I don't like its interface.

DenOfEquity commented 2 weeks ago

Have you tried the command-line argument --always-offload-from-vram?
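For reference, a typical place to set that flag (assuming the standard webui launch scripts; your setup may differ) would be the `COMMANDLINE_ARGS` variable in `webui-user.sh` or `webui-user.bat`:

```shell
# webui-user.sh (Linux/macOS) -- illustrative, adjust to your install
export COMMANDLINE_ARGS="--always-offload-from-vram"
```

With this flag the backend moves models out of VRAM after each use instead of keeping them cached, trading some speed for lower steady-state GPU memory usage.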