carson-katri / dream-textures

Stable Diffusion built-in to Blender
GNU General Public License v3.0

OutOfMemoryError #691

Closed agus4402 closed 9 months ago

agus4402 commented 1 year ago

Description

Hi, I'm getting this error with a GTX 1660. I hear that NVIDIA 16 Series graphics cards are having some problems with Stable Diffusion. It has 6 GB of VRAM, which is over the recommended amount, so this shouldn't be happening.

This is the error (screenshot attached).

Is there any way to set a configuration similar to this?

"--xformers --upcast-sampling --precision full --medvram --no-half-vae"

Hope there's a solution!

Thanks!

Steps to Reproduce

Generate a texture at half precision

Expected Behavior

Generate a texture

Addon Version

Windows (CUDA)

GPU

NVIDIA 16 Series
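
(For reference, the flags quoted above come from the AUTOMATIC1111 webui, not this addon. Dream Textures builds on diffusers, so rough counterparts would be set in Python rather than on a command line. The sketch below is only an assumed mapping, not the addon's actual configuration code; the model id and options are illustrative.)

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical standalone script, not dream-textures internals.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,  # half precision, the usual low-VRAM default
)
pipe = pipe.to("cuda")

# Rough diffusers counterparts to the webui flags (assumed mapping, not exact):
pipe.enable_attention_slicing()  # reduces peak VRAM, similar in spirit to --medvram
# pipe.enable_xformers_memory_efficient_attention()  # --xformers, if xformers is installed
# --upcast-sampling / --no-half-vae have no single-call equivalent here; the usual
# 16-series workaround is to run the whole pipeline in float32 instead of float16.
```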

mercouriu commented 1 year ago

Hi. I have this too (1660 Ti) with CUDA. I get this error with both stable-diffusion-2.1 and stable-diffusion-2-depth. Does anybody know how to fix it?

RuntimeError("OutOfMemoryError('CUDA out of memory. Tried to allocate 90.00 MiB (GPU 0; 6.00 GiB total capacity; 5.14 GiB already allocated; 0 bytes free; 5.24 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF')")

agus4402 commented 1 year ago

Hi. I have this too (1660 Ti) with CUDA. I get this error with both stable-diffusion-2.1 and stable-diffusion-2-depth. Does anybody know how to fix it?

RuntimeError("OutOfMemoryError('CUDA out of memory. Tried to allocate 90.00 MiB (GPU 0; 6.00 GiB total capacity; 5.14 GiB already allocated; 0 bytes free; 5.24 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF')")

I solved this problem by changing the CPU offload setting to "Model" (see the attached screenshot).

I hope this helps!
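
(For context, a "Model" level of CPU offload corresponds to diffusers-style model offloading, which keeps only the sub-model currently in use on the GPU and parks the rest in system RAM. A minimal sketch with a plain diffusers pipeline, assuming accelerate is installed; dream-textures wires the equivalent setting up internally.)

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
)
# Move whole sub-models (text encoder, UNet, VAE) to the GPU only while they run,
# then back to CPU RAM, trading some speed for a much lower VRAM peak.
pipe.enable_model_cpu_offload()

image = pipe("seamless brick wall texture").images[0]
image.save("texture.png")
```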

EdgeCaseLord commented 1 year ago

I have this error, too, on a 3070 Ti with 8 GB of VRAM.

ddxl123 commented 11 months ago

An error occurred while generating. Check the issues tab on GitHub to see if this has been reported before:

RuntimeError("OutOfMemoryError('CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 4.00 GiB total capacity; 3.40 GiB already allocated; 0 bytes free; 3.46 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF')")

github-actions[bot] commented 9 months ago

This issue is stale because it has been open for 60 days with no activity.

github-actions[bot] commented 9 months ago

This issue was closed because it has been inactive for 7 days since being marked as stale.