camenduru / Fooocus-colab


Out of CUDA memory error #4

Open KorontosTheThird opened 1 year ago

KorontosTheThird commented 1 year ago

After a few generations, the generation process stops and stays stuck, while this error appears in the Colab output:

torch.cuda.OutOfMemoryError: Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 14.10 GiB
Requested : 19.69 MiB
Device limit : 14.75 GiB
Free (according to CUDA) : 2.81 MiB
PyTorch limit (set by user-supplied memory fraction) : 17179869184.00 GiB
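The numbers show the T4's 14.75 GiB almost fully allocated, with only a few MiB free. As a general diagnostic between generations (a sketch only, not a confirmed fix for this notebook, and it has to run inside the same Python process that holds the model), you can inspect what PyTorch is holding and release its cached blocks:

```python
import torch

def report_cuda_memory(device: int = 0) -> None:
    # "allocated" counts live tensors; "reserved" also includes cached
    # blocks the allocator keeps around for reuse.
    allocated = torch.cuda.memory_allocated(device) / 2**30
    reserved = torch.cuda.memory_reserved(device) / 2**30
    total = torch.cuda.get_device_properties(device).total_memory / 2**30
    print(f"allocated {allocated:.2f} GiB / reserved {reserved:.2f} GiB / total {total:.2f} GiB")

report_cuda_memory()

# Release cached-but-unused blocks back to the driver. This does not free
# live tensors, but it can help when fragmentation leaves memory that CUDA
# cannot hand out as one contiguous chunk.
torch.cuda.empty_cache()
report_cuda_memory()
```

Note that `empty_cache()` only returns blocks the caching allocator is holding idle; if the model and its activations genuinely fill the card, restarting the Colab runtime is the only way to reclaim the memory.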

CamoCamoCamo commented 1 year ago

Same for me, in the SDXL Colab:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 14.75 GiB total capacity; 14.18 GiB already allocated; 832.00 KiB free; 14.61 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
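The traceback itself suggests setting `max_split_size_mb` via `PYTORCH_CUDA_ALLOC_CONF`, which stops the caching allocator from splitting blocks above that size and can reduce fragmentation. A minimal sketch of how that variable is typically set; the 512 MiB value is illustrative, not something verified against this Colab:

```python
import os

# PYTORCH_CUDA_ALLOC_CONF must be set before CUDA is initialized in the
# process that runs the model. In a Colab notebook, run this cell (or an
# equivalent `%env PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512` line)
# before the cell that launches the app, so any subprocess inherits it.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

import torch  # imported after the variable is set so the allocator picks it up
print(torch.cuda.is_available())
```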

KorontosTheThird commented 1 year ago


@CamoCamoCamo use the Fooocus-MRE fork Colab instead, it works flawlessly: https://colab.research.google.com/github/MoonRide303/Fooocus-MRE/blob/moonride-main/colab.ipynb