I encountered an error while running:
RuntimeError: CUDA out of memory. Tried to allocate 1.20 GiB (GPU 0; 12.00 GiB total capacity; 21.33 GiB already allocated; 0 bytes free; 25.16 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I'm using a dedicated graphics card with 12 GB of VRAM and 32 GB of system RAM.
My max_split_size_mb is set to 512, and the batch size is set to 4. The error occurred when running: python inference_propainter.py --video .\inputs\video_completion\my.mp4 --mask .\inputs\video_completion\test_2.png --height 720 --width 1080 --neighbor_length 8 --ref_stride 8 --fp16
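For context, here is a minimal sketch of how I applied the max_split_size_mb setting, via the PYTORCH_CUDA_ALLOC_CONF environment variable mentioned in the error message (assuming it must be set before PyTorch initializes CUDA, otherwise it is ignored):

```python
import os

# Must be set before importing torch / before the first CUDA allocation,
# or the allocator will not pick it up.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"
```

(On Windows the equivalent is set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512 in cmd before launching the script.)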
How can I solve the GPU memory issue without reducing the video size or resolution? Increasing the runtime is also an option.
Thanks.