Open protector131090 opened 5 months ago
Which model did you try? For the 512 model, I have added perframe_ae=True to the config, as pointed out in Issue #18; there should be no OOM problem with the 512 model.
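For reference, the fix amounts to adding the flag under the model parameters of the inference config. A minimal sketch, assuming the repo's usual OmegaConf-style YAML layout (the surrounding key names are illustrative; only perframe_ae comes from the thread):

```yaml
model:
  params:
    perframe_ae: True   # decode latent frames one at a time to lower the VAE's peak VRAM use
```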
Yes, 512 works fine. How much VRAM does the 1024 model need?
Seems to be 18.3 GB, according to Twitter.
Why can't I launch it on a 4090 then?
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 25.31 GiB (GPU 0; 23.99 GiB total capacity; 38.13 GiB already allocated; 0 bytes free; 39.88 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
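The error message itself suggests one mitigation: setting max_split_size_mb via PYTORCH_CUDA_ALLOC_CONF to reduce allocator fragmentation. A minimal sketch of how that would be applied before launching the demo (the 512 MiB split size is an arbitrary starting point, not a verified fix, and the launch command is hypothetical):

```shell
# Cap the size of cached allocator blocks so large requests fragment less.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512

# Then launch the demo in the same shell, e.g. (hypothetical command):
# python gradio_app.py
echo "$PYTORCH_CUDA_ALLOC_CONF"
```

Note this only helps when reserved memory far exceeds allocated memory, as in the traceback above; it cannot make a model fit that genuinely needs more VRAM than the card has.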
Did you find a solution?
I got the OOM error as well. Any fixes yet?
Hi. Are you using the latest code of gradio demo?
It's been a month and I no longer get the error. I guess it got fixed.
It uses 21 out of my 24 GB of VRAM. It works now with the updated version.
Maybe it is environment related. I ran the 1024 model on a 2080 Ti 22GB GPU without memory problems last month (2/12).
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 25.31 GiB (GPU 0; 23.99 GiB total capacity; 11.33 GiB already allocated; 9.91 GiB free; 12.47 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF