Closed agus4402 closed 9 months ago
Hi. I have this too (1660 Ti) with CUDA. I get this error with both stable-diffusion-2-1 and stable-diffusion-2-depth. Does anybody know how to fix it?:
RuntimeError("OutOfMemoryError('CUDA out of memory. Tried to allocate 90.00 MiB (GPU 0; 6.00 GiB total capacity; 5.14 GiB already allocated; 0 bytes free; 5.24 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF')")
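The error message itself points at one mitigation: when reserved memory is much larger than allocated memory, capping the caching allocator's block splits can reduce fragmentation. A minimal sketch of setting this before launching; the 128 MiB value is an illustrative starting point to tune, not a figure from this thread:

```shell
# Cap PyTorch's caching-allocator block splits at 128 MiB to reduce
# fragmentation (try values roughly between 64 and 512 and re-test).
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

# Then launch the app that loads the model from this same shell, so
# the allocator reads the variable at startup, e.g.:
#   python your_launch_script.py   (hypothetical entry point)
echo "$PYTORCH_CUDA_ALLOC_CONF"
```

The variable must be set in the environment of the process that imports PyTorch; setting it after the model is loaded has no effect.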
I solved this problem by changing the CPU offload setting to Model.
I hope this helps!
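For context, in the diffusers library (which this kind of addon builds on) the two offload modes trade VRAM for speed differently: `enable_model_cpu_offload()` keeps one whole sub-model (UNet, VAE, text encoder) on the GPU at a time, while `enable_sequential_cpu_offload()` streams individual modules and is slower but uses the least VRAM. A minimal sketch, assuming a diffusers pipeline object; the `choose_offload` helper and its 4 GiB threshold are illustrative assumptions, not part of the addon:

```python
def choose_offload(vram_gib: float) -> str:
    """Pick an offload strategy for a given amount of VRAM.

    Illustrative heuristic: "model" offload typically fits SD 2.x
    on ~6 GiB cards; below roughly 4 GiB, fall back to the slower
    but leaner "sequential" mode.
    """
    return "model" if vram_gib >= 4.0 else "sequential"


def apply_offload(pipe, vram_gib: float) -> None:
    """Apply the chosen offload mode to a diffusers pipeline."""
    if choose_offload(vram_gib) == "model":
        # One sub-model on the GPU at a time; modest slowdown.
        pipe.enable_model_cpu_offload()
    else:
        # Individual modules moved on demand; slowest, least VRAM.
        pipe.enable_sequential_cpu_offload()


if __name__ == "__main__":
    # Hypothetical usage -- requires diffusers and a CUDA GPU:
    # from diffusers import StableDiffusionPipeline
    # pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
    # apply_offload(pipe, vram_gib=6.0)
    print(choose_offload(6.0))
```

On a 6 GiB card like the 1660 Ti, "model" offload is usually enough, which matches the fix reported above.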
I have this error too, on a 3070 Ti with 8 GB of VRAM.
An error occurred while generating. Check the issues tab on GitHub to see if this has been reported before:
RuntimeError("OutOfMemoryError('CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 4.00 GiB total capacity; 3.40 GiB already allocated; 0 bytes free; 3.46 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF')")
This issue is stale because it has been open for 60 days with no activity.
This issue was closed because it has been inactive for 7 days since being marked as stale.
Description
Hi, I'm getting this error with a GTX 1660. I hear that NVIDIA 16-series graphics cards are having some problems with Stable Diffusion. It has 6 GB of VRAM, which is over the recommended amount, so this shouldn't be happening.
This is the error
Is there any way to set a configuration similar to this?
"--xformers --upcast-sampling --precision full --medvram --no-half-vae"
Hope there's a solution v:
Thanks!
Steps to Reproduce
Generate a texture at half precision
Expected Behavior
Generate a texture
Addon Version
Windows (CUDA)
GPU
NVIDIA 16 Series