lucidrains / big-sleep

A simple command line tool for text to image generation, using OpenAI's CLIP and a BigGAN. Technique was originally created by https://twitter.com/advadnoun
MIT License

RuntimeError: CUDA out of memory. Tried to allocate xxx MiB #99


TheZipCreator commented 3 years ago

Whenever I try to use "dream" from the command line, I get the message: RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 2.00 GiB total capacity; 747.98 MiB already allocated; 0 bytes free; 778.00 MiB reserved in total by PyTorch). I don't know much about PyTorch, so is this fixable, or do I just not have enough memory to use this?

wolfgangmeyers commented 3 years ago

With default settings, Big Sleep takes up close to 8 GB of dedicated video RAM. Some things you can try to lower the memory footprint: reduce the number of cutouts and the image size.

If you want to check how much VRAM you have and you are using Windows 10, you can follow an online guide like this one. I think that --num-cutouts=16 --image-size=128 should lower the memory requirement to around 4 GB.
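A minimal sketch of that lower-memory invocation, assuming the `dream` CLI entry point from the thread and the flag spellings given above (the prompt text is just an example):

```shell
# Reduced-memory run: fewer CLIP cutouts and a smaller output image.
# Flag names are taken from the comment above; the prompt is arbitrary.
dream "a pizza with pineapple" --num-cutouts=16 --image-size=128
```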

Hope that helps

areyougood commented 3 years ago

Can I use shared GPU memory instead of dedicated?

sgtnasty commented 2 years ago

I have the same issue, but I am wondering whether it is using the correct GPU. I have a laptop with Optimus (hybrid Intel/NVIDIA graphics).

Graphics:
  Device-1: Intel CoffeeLake-H GT2 [UHD Graphics 630] driver: i915 v: kernel
  Device-2: NVIDIA GP107M [GeForce GTX 1050 Ti Mobile] driver: nvidia v: 510.47.03

because of this output:

RuntimeError: CUDA out of memory. Tried to allocate 38.00 MiB (GPU 0; 3.95 GiB total capacity; 3.18 GiB already allocated; 31.69 MiB free; 3.28 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
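The error text itself suggests setting max_split_size_mb to reduce fragmentation. A minimal sketch of doing that through the PYTORCH_CUDA_ALLOC_CONF environment variable named in the message (the 128 MiB value is just an example, not a recommendation):

```python
import os

# The OOM message suggests max_split_size_mb when reserved memory is much
# larger than allocated memory. This must be set before PyTorch makes its
# first CUDA allocation, so do it before importing/using torch.cuda.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

From the shell, the equivalent is prefixing the command: PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 dream "...".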
sgtnasty commented 2 years ago

I found this, but I'm unsure how to tell big_sleep to select a specific CUDA device. https://pytorch.org/docs/stable/notes/cuda.html

cuda = torch.device('cuda')     # Default CUDA device
cuda0 = torch.device('cuda:0')
cuda2 = torch.device('cuda:2')  # GPU 2 (these are 0-indexed)
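One way to steer a CLI tool like big-sleep onto a specific GPU without modifying its code is the CUDA_VISIBLE_DEVICES environment variable, which PyTorch honors when it initializes CUDA. A minimal sketch, assuming device index 0 is the NVIDIA card (on an Optimus laptop the NVIDIA GPU is usually the only device CUDA enumerates):

```python
import os

# Restrict CUDA to a single device before PyTorch initializes it.
# Must be set before the first CUDA call; "0" is the index CUDA would
# report, not necessarily the order shown by tools like inxi.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```

From the shell, the equivalent is prefixing the command: CUDA_VISIBLE_DEVICES=0 dream "...".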