basujindal / stable-diffusion

Optimized Stable Diffusion modified to run on lower GPU VRAM

GTX 1660 SUPER (6GB), running with --precision-full still yields out of memory error. #145

Closed Zenahr closed 2 years ago

Zenahr commented 2 years ago

I ran `python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms --n_samples=1 --precision=full`

Error message:

```
RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 6.00 GiB total capacity; 5.06 GiB already allocated; 0 bytes free; 5.17 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

Is there a way to have PyTorch not allocate almost all of my VRAM? I'm guessing that's the problem here.
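Not a fix by itself, but the error message's own suggestion can be tried first: setting `max_split_size_mb` via the `PYTORCH_CUDA_ALLOC_CONF` environment variable to reduce allocator fragmentation. A minimal sketch (the 128 MB value is illustrative, not tuned for this card):

```shell
# Allocator hint suggested by the error message; caps the size of
# split blocks to limit fragmentation. 128 is an illustrative value.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

# Then rerun the generation command in the same shell, e.g.:
#   python scripts/txt2img.py --prompt "..." --plms --n_samples=1 --precision=full

echo "$PYTORCH_CUDA_ALLOC_CONF"
```

This only mitigates fragmentation; it cannot help if the model genuinely needs more than 6 GB of VRAM.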

Zenahr commented 2 years ago

Never mind, I was running the standard script (`scripts/txt2img.py`) rather than the optimized one.