I am using a LoRA model to generate a single 512x704 image with hires. fix, using the 8x_NMKD-Faces_160000G upscaler at 2.2x for a final resolution of 1126x1548.
I keep getting an out-of-memory error when I do, along with this text in the command window:
return _VF.einsum(equation, operands) # type: ignore[attr-defined]
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 21.76 GiB (GPU 0; 12.00 GiB total capacity; 2.37 GiB already allocated; 7.40 GiB free; 2.46 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
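For what it's worth, the only concrete suggestion in the message itself is max_split_size_mb, which as I understand it is passed through the PYTORCH_CUDA_ALLOC_CONF environment variable before PyTorch initializes CUDA. A minimal sketch of what that would look like (the 512 value is just a guess on my part, not something from the docs):

```python
import os

# Must be set before the CUDA allocator is initialized, i.e. before
# the first CUDA tensor is created. max_split_size_mb caps the size of
# cached blocks the allocator will split, which can reduce fragmentation.
# 512 is an arbitrary starting value to tune, per the error's suggestion.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

import torch  # imported only after the variable is set
```

I'm not sure this is the actual fix here, though, given the sheer size of the requested allocation.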
Does it really require 20+ GB of VRAM to run that upscaler on a 512x704 image at 2.2x? Or am I reading the message wrong? I don't understand what the problem is or how to fix it.