Open Paoloc99 opened 5 months ago
I also have the same problem.
What's your batch size? Reducing the batch size can help if your VRAM is low.
batch size = 4
I have also tested with batch size 2 and received the same error. Could reducing the image size help?
1) Is this pre-training or finetuning? 2) How much VRAM is in your GPU?
This happens during fine-tuning. My GPU has 8 GB of VRAM.
You can try setting batch_size even lower (1), but the main issue is the low VRAM. Try running it on a cloud service (e.g. Runpod, Colab).
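If even batch_size = 1 overflows, gradient accumulation is a common workaround: it preserves the effective batch size while holding only one micro-batch's activations in memory at a time. The sketch below shows the underlying idea in pure Python (a real training loop would call `loss.backward()` per micro-batch and `optimizer.step()` once per accumulation window; all names here are illustrative, not from the DreamDiffusion codebase):

```python
# Gradient accumulation sketch: the gradient of a mean loss over a batch
# equals the sum of per-sample gradients scaled by 1/n, so we can add them
# up one micro-batch at a time instead of materializing the whole batch.

def grad_mse(w, x, y):
    """d/dw of (w*x - y)^2 for a single sample."""
    return 2 * (w * x - y) * x

def full_batch_grad(w, xs, ys):
    """Gradient of the mean squared error over the whole batch at once."""
    n = len(xs)
    return sum(grad_mse(w, x, y) for x, y in zip(xs, ys)) / n

def accumulated_grad(w, xs, ys, micro_batch=1):
    """Same gradient, accumulated micro_batch samples at a time."""
    n = len(xs)
    acc = 0.0
    for i in range(0, n, micro_batch):
        for x, y in zip(xs[i:i + micro_batch], ys[i:i + micro_batch]):
            acc += grad_mse(w, x, y) / n  # scale each contribution by 1/n
    return acc

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.1, 5.9, 8.2]
g_full = full_batch_grad(0.5, xs, ys)
g_acc = accumulated_grad(0.5, xs, ys, micro_batch=1)
print(abs(g_full - g_acc) < 1e-9)  # the two gradients match
```

The trade-off is wall-clock time, not memory: an accumulation window of 4 micro-batches takes roughly 4 forward/backward passes per optimizer step.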
I've already tried batch_size = 1, but it gives me a different error. Is there a way to reduce the size of the images DreamDiffusion works with? I saw that it uses 512x512 images, and the Stable Diffusion model it builds on also expects 512x512 images. Maybe reducing these sizes would make the computation lighter.
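For a sense of scale: Stable Diffusion v1 operates in a latent space that its VAE downsamples 8x spatially with 4 latent channels, so the latent tensor size (and, roughly, U-Net activation memory) grows quadratically with image resolution. A back-of-envelope sketch (the 4-channel, 8x figures are standard for SD v1; actual savings depend on the whole pipeline, and since the pretrained checkpoints were trained at 512x512, lower resolutions may cost quality):

```python
# Rough estimate of latent tensor size for Stable Diffusion v1:
# the VAE downsamples spatially by 8 and produces 4 latent channels.

def latent_elements(height, width, channels=4, downsample=8):
    """Number of elements in the SD latent for a given image size."""
    return channels * (height // downsample) * (width // downsample)

e512 = latent_elements(512, 512)  # 4 * 64 * 64 = 16384
e256 = latent_elements(256, 256)  # 4 * 32 * 32 = 4096
print(e512, e256, e512 // e256)   # halving the resolution quarters the latent
```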
Hi, I'm trying to run your implementation locally but I'm facing this issue during the fine-tuning phase:
RuntimeError: CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 8.00 GiB total capacity; 10.16 GiB already allocated; 0 bytes free; 13.00 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Does anyone know how I can fix this? I have an RTX 3060 Ti 8GB, an i5 11400, and 16 GB of DDR4 RAM. Thank you in advance for the help, and good job with this project!
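The traceback itself points at one mitigation: setting `max_split_size_mb` via `PYTORCH_CUDA_ALLOC_CONF` to reduce allocator fragmentation. It has to be set before PyTorch initializes CUDA, e.g. in the shell or at the very top of the training script (128 MB is just a common starting point, not a tuned value):

```python
import os

# Must run before the first CUDA allocation (ideally before importing torch).
# max_split_size_mb caps the size of allocator blocks that may be split,
# which can reduce fragmentation-driven OOMs; 128 is an illustrative value.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

Note this only helps when reserved memory far exceeds allocated memory (fragmentation); it cannot make an 8 GB card fit a workload that genuinely needs more than 8 GB.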