bbaaii / DreamDiffusion

Implementation of “DreamDiffusion: Generating High-Quality Images from Brain EEG Signals”

CUDA out of memory #24

Open · Paoloc99 opened this issue 5 months ago

Paoloc99 commented 5 months ago

Hi, I'm trying to run your implementation locally but I'm facing this issue during the fine-tuning phase:

RuntimeError: CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 8.00 GiB total capacity; 10.16 GiB already allocated; 0 bytes free; 13.00 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
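For reference, the max_split_size_mb option mentioned at the end of the error is an allocator setting that has to be in place before CUDA is initialized. As far as I understand it only helps with fragmentation, not with actually missing VRAM; something like this (untested):

```python
import os

# Allocator option suggested by the error message. It must be set before the
# first CUDA allocation, so put it at the top of the entry script, before
# importing torch. The 128 MiB value is just an example.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # imported after the env var on purpose
```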

Does anyone know how I can fix this? I have an RTX 3060 Ti 8GB, an i5 11400, and 16GB of DDR4 RAM. Thank you in advance for the help, and good job with this project!!

Cristian-Fioravanti commented 5 months ago

I also have the same problem.

taziksh commented 5 months ago

What's your batch size? Reducing the batch size can help if your VRAM is low.
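Wherever the finetuning script builds its DataLoader, lowering batch_size is the first knob to turn, since per-step activation memory scales roughly linearly with it. A toy sketch (placeholder dataset, not the repo's actual code):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for the real EEG dataset, just to show the knob. Activation
# memory kept for backprop grows roughly linearly with batch_size, so a
# smaller value directly lowers peak VRAM during finetuning.
dummy_eeg = TensorDataset(torch.randn(64, 128, 512), torch.zeros(64, dtype=torch.long))
train_loader = DataLoader(dummy_eeg, batch_size=2, shuffle=True, num_workers=2)
```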

Cristian-Fioravanti commented 5 months ago

batch size = 4

Paoloc99 commented 5 months ago

I have also tested with batch size 2 and received the same error. Can reducing the image sizes help?

taziksh commented 5 months ago

1) Is this pre-training or finetuning? 2) How much VRAM is in your GPU?

Paoloc99 commented 5 months ago

This happens during finetuning. My GPU has 8 GB of VRAM.

taziksh commented 5 months ago

You can try setting batch_size even lower (1), but the main issue is the low VRAM. Try running it on a cloud service (e.g. Runpod, Colab).
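If you do stay on the 8 GB card, another standard trick (not something this repo does out of the box, as far as I know) is gradient accumulation: run forward/backward with batch_size 1 and only step the optimizer every N batches, so you keep a larger effective batch size without the extra memory. Roughly:

```python
import torch
from torch import nn

# Toy model and data just to show the pattern; in practice this would be the
# repo's finetuning loop. Effective batch size = accum_steps * per-step batch.
model = nn.Linear(512, 512).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
accum_steps = 4

optimizer.zero_grad(set_to_none=True)
for step in range(100):
    x = torch.randn(1, 512, device="cuda")       # per-step batch of 1
    loss = model(x).pow(2).mean() / accum_steps  # scale so accumulated grads average out
    loss.backward()                               # gradients add up across steps
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad(set_to_none=True)
```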

Paoloc99 commented 5 months ago

I've already tried batch_size = 1, but it gives me a different error. Is there a way to reduce the size of the images DreamDiffusion works with? I saw that it uses 512x512 images, and Stable Diffusion also operates at 512x512. Maybe the computation would be lighter at a smaller resolution.
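Just to check my understanding: at 512x512 Stable Diffusion works on 64x64 latents, so dropping to 256x256 would mean 32x32 latents, 4x fewer tokens, and the quadratic self-attention cost falls by roughly 16x. For illustration only (this uses the diffusers library and the runwayml/stable-diffusion-v1-5 checkpoint, not this repo's LDM code):

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustration with the diffusers library, NOT the LDM code in this repo.
# Halving the output resolution halves each latent dimension (512x512 -> 64x64
# latents, 256x256 -> 32x32), so attention memory drops sharply. Note that
# Stable Diffusion was trained at 512x512, so 256x256 outputs look worse.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of a dog", height=256, width=256).images[0]
image.save("dog_256.png")
```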