Doubiiu / DynamiCrafter

[ECCV 2024] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors
Apache License 2.0

CUDA out of memory on rtx 4090 #20

Open protector131090 opened 5 months ago

protector131090 commented 5 months ago

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 25.31 GiB (GPU 0; 23.99 GiB total capacity; 11.33 GiB already allocated; 9.91 GiB free; 12.47 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
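The allocator hint in that message can be applied before the script touches the GPU. A minimal sketch (the `512` value is only an illustrative starting point, not a maintainer recommendation):

```python
import os

# PYTORCH_CUDA_ALLOC_CONF is read when the CUDA caching allocator is
# first used, so set it before torch allocates anything on the GPU.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

# import torch  # import torch (and run inference) only after this point
```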

Doubiiu commented 5 months ago

Which model did you try? For the 512 model, I have added perframe_ae=True to the config, as pointed out in issue #18; with that change there should be no OOM problem for the 512 model.
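For context, a per-frame autoencoder flag like this makes the VAE decode the latent video one frame per call instead of the whole clip in one call, which caps the peak activation footprint. A toy counter (illustrative numbers, not measurements) sketches why:

```python
def peak_buffers(decode_batches):
    """Peak number of frame buffers alive at once, given the batch size
    of each VAE decode call (a stand-in for peak activation memory)."""
    return max(decode_batches, default=0)

num_frames = 16
batched = peak_buffers([num_frames])        # one call decodes all frames -> 16
per_frame = peak_buffers([1] * num_frames)  # one call per frame -> 1
```

Per-frame decoding trades a little speed for a much smaller peak allocation, which is why it resolves the OOM at 512 resolution.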

protector131090 commented 5 months ago

> Which model did you try? For 512 model, I have added perframe_ae=True into the config as pointed out by Issue #18, where there should be no OOM problem for the 512 model.

Yes, the 512 model works fine. How much VRAM does the 1024 model need?

Doubiiu commented 5 months ago

Seems to be 18.3 GB, according to Twitter.

protector131090 commented 5 months ago

> seems 18.3 GB from twitter.

Why can't I launch it on a 4090 then?

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 25.31 GiB (GPU 0; 23.99 GiB total capacity; 38.13 GiB already allocated; 0 bytes free; 39.88 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

FilippFedotchenko commented 4 months ago

> seems 18.3 GB from twitter.

> why cant i launch on 4090 then?

> torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 25.31 GiB (GPU 0; 23.99 GiB total capacity; 38.13 GiB already allocated; 0 bytes free; 39.88 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Did you find a solution?

AdaptiveStep commented 3 months ago

I got an OOM error as well. Any fixes yet?

Doubiiu commented 3 months ago

Hi. Are you using the latest code of the gradio demo?

protector131090 commented 3 months ago

> Hi. Are you using the latest code of gradio demo?

It's been a month. Now I don't get the error. I guess it got fixed.

protector131090 commented 3 months ago

> i got OOM error as well. Any fixes yet?

It uses 21 out of my 24 GB of VRAM. It works now with the updated version.
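If you want to watch how close you are to the 24 GB ceiling while the demo runs, here is a small stdlib-only helper around `nvidia-smi` (assumes the NVIDIA driver is installed; the helper name is my own, not from the repo):

```python
import shutil
import subprocess

def vram_info():
    """Return (used_mib, total_mib) for GPU 0, or None if nvidia-smi
    is not on PATH (e.g. no NVIDIA driver installed)."""
    if shutil.which("nvidia-smi") is None:
        return None
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    # First line looks like "21000, 24576" (MiB values for GPU 0).
    used, total = (int(x) for x in out.splitlines()[0].split(","))
    return used, total
```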

shirubei commented 3 months ago

Maybe it's environment related. I ran the 1024 model on a 2080 Ti 22GB GPU without memory problems last month (2/12).