When I run the Quick Demo, the following CUDA error message is displayed:
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 23.68 GiB total capacity; 18.25 GiB already allocated; 22.06 MiB free; 19.86 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I monitored the GPU during the run and noticed that VRAM usage grows steadily as the frame count increases, until the error occurs. I am using an RTX 3090. Is this expected behavior, or is VRAM not being released properly between frames? I tracked the growth roughly as sketched below.
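For reference, this is approximately how I logged the memory growth per frame (a minimal sketch; the frame index is just whatever counter the demo loop exposes):

import torch

def log_vram(frame_idx: int) -> None:
    # MiB currently allocated by live tensors
    allocated = torch.cuda.memory_allocated() / 1024**2
    # MiB reserved by PyTorch's caching allocator (matches the "reserved" figure in the error)
    reserved = torch.cuda.memory_reserved() / 1024**2
    print(f"frame {frame_idx}: allocated={allocated:.1f} MiB, reserved={reserved:.1f} MiB")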
Additionally, with smaller input data the accumulation still occurs but never maxes out the VRAM; and if I switch to the eval model and run in headless mode, the issue does not occur at all.
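In case it helps clarify the question: is per-frame memory supposed to stay flat with something like the loop below, or is growth across frames expected in the demo? (This is only a hypothetical sketch; `frames` and `model` stand in for whatever the demo actually uses.)

import torch

results = []
with torch.no_grad():                       # avoid keeping autograd graphs alive across frames
    for frame in frames:                    # placeholder for the demo's frame iterator
        out = model(frame.cuda())           # placeholder for the demo's per-frame inference call
        results.append(out.detach().cpu())  # move outputs off the GPU so they are not retained in VRAM
        del out
torch.cuda.empty_cache()                    # optionally return cached blocks to the driver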