High-Resolution Image Synthesis with Latent Diffusion Models
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 9.49 GiB. GPU 0 has a total capacty of 23.68 GiB of which 8.03 GiB is free. Including non-PyTorch memory, this process has 15.17 GiB memory in use. Of the allocated memory 14.73 GiB is allocated by PyTorch, and 133.53 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF #348
When I run `python scripts/txt2img.py --prompt "a professional photograph of an astronaut riding a horse" --ckpt model_files/v2-1_768-ema-pruned.ckpt --config configs/stable-diffusion/v2-inference-v.yaml --H 768 --W 768`, it fails with the out-of-memory error above.
I have a 3090 system with 24 GB of GPU memory. How much memory is required for inference?
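One thing worth trying before anything else: the error message itself suggests setting `max_split_size_mb` to reduce allocator fragmentation, and `txt2img.py` generates several samples per batch by default, so shrinking the batch cuts peak memory. A minimal sketch (assuming `--n_samples` is accepted by your checkout of `scripts/txt2img.py`; check `--help` to confirm):

```shell
# Hint from the error message: cap allocator split size to limit fragmentation.
# Must be set before the Python process starts.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512

# Same command as before, but with a batch of 1 to lower peak VRAM use.
python scripts/txt2img.py \
  --prompt "a professional photograph of an astronaut riding a horse" \
  --ckpt model_files/v2-1_768-ema-pruned.ckpt \
  --config configs/stable-diffusion/v2-inference-v.yaml \
  --H 768 --W 768 \
  --n_samples 1
```

If that still OOMs, dropping to `--H 512 --W 512` as a sanity check will show whether the 768x768 resolution is the limiting factor.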