CompVis / stable-diffusion

A latent text-to-image diffusion model
https://ommer-lab.com/research/latent-diffusion-models/

Error: CUDA out of memory #782

Open

Soosaaas commented 11 months ago

When I edit images at resolutions above 512x512 (with some models, anything above 300x300 already fails), I get the following error message:

"CUDA out of memory. Tried to allocate 19.80 GiB (GPU 0; 8.00 GiB total capacity; 2.65 GiB already allocated; 3.23 GiB free; 2.68 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF."
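The error message itself points at `PYTORCH_CUDA_ALLOC_CONF`. A minimal sketch of how to act on that hint, assuming the webui is started from a shell: export the variable before Python starts, so it is already set when PyTorch initializes CUDA (the `512` value is a starting point to tune, not a guaranteed fix, and the launch command is illustrative).

```shell
# Hedged sketch: apply the allocator hint from the error message.
# max_split_size_mb caps the block size PyTorch's caching allocator will
# split, which can reduce fragmentation-related OOM errors.
export PYTORCH_CUDA_ALLOC_CONF="max_split_size_mb:512"
# ...then launch as usual, e.g.:  python launch.py  (script name illustrative)
```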

I have already found some posts about this, but they don't help me because I don't understand how to adjust the settings in Python. How can I use a resolution like 2048x2048 with my 8 GiB of GPU memory? I wouldn't mind longer processing times, but it never even gets that far. I am using Stable Diffusion with AUTOMATIC1111, and the model is sd_xl.
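For the AUTOMATIC1111 webui specifically, the usual low-VRAM route is its command-line flags rather than Python code. A hedged sketch of a `webui-user.sh` tweak (on Windows, the equivalent lines go in `webui-user.bat` with `set`); `--medvram` and the slower `--lowvram` are documented webui options that offload model parts to fit smaller GPUs, but treat this combination as something to experiment with, not a guaranteed fix:

```shell
# Sketch: low-VRAM settings for AUTOMATIC1111 on an 8 GiB GPU.
# --medvram offloads model components between steps to save memory
# (swap in --lowvram if --medvram is still not enough).
export COMMANDLINE_ARGS="--medvram"
# Allocator hint from the error message, passed through to PyTorch.
export PYTORCH_CUDA_ALLOC_CONF="max_split_size_mb:512"
```

Even with these flags, generating 2048x2048 directly with an SDXL model on 8 GiB may still fail; a common workaround is to generate near the model's native resolution and then upscale (e.g. via the webui's hires fix or an upscaler).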