hitachinsk / FGT

[ECCV 2022] Flow-Guided Transformer for Video Inpainting
https://hitachinsk.github.io/publication/2022-10-01-Flow-Guided-Transformer-for-Video-Inpainting
MIT License

RuntimeError: CUDA out of memory #24

Closed wangdi9 closed 1 year ago

wangdi9 commented 1 year ago

I have 4 Tesla T4 GPUs (16 GB each). When I run video_inpainting.py to do object removal on my video (1440x720), it reports a CUDA out-of-memory error, and only one GPU is used.

RuntimeError: CUDA out of memory. Tried to allocate 6.95 GiB (GPU 0; 14.76 GiB total capacity; 9.42 GiB already allocated; 2.78 GiB free; 10.66 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

So how can I use all 4 of my GPUs? (I'd prefer not to change the image size.)

Originally posted by @wangdi9 in https://github.com/hitachinsk/FGT/issues/23#issuecomment-1366292013
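As the traceback itself hints, one mitigation that is independent of multi-GPU support is PyTorch's allocator option `max_split_size_mb`, which can reduce fragmentation when reserved memory is much larger than allocated memory. A minimal sketch; the value 128 is an illustrative starting point, not a recommendation from this repo, and the variable must be set before the first CUDA allocation:

```python
import os

# Must be set before torch initializes the CUDA caching allocator,
# i.e. before the first tensor is moved to the GPU.
# 128 MB is an arbitrary starting value; tune it for your workload.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch                 # import torch only after setting the variable
# model = model.to("cuda")     # subsequent allocations use the new policy
```

Equivalently, it can be set in the shell for a single run: `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 python video_inpainting.py ...`. This only fights fragmentation; it cannot shrink a single 6.95 GiB allocation, so it may not be enough on its own.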

hitachinsk commented 1 year ago

Currently, we don't implement inference on multiple GPUs; I may add this feature in the future. If you want to run the inference code on high-resolution videos, I provided some suggestions in #23. The key point is to reduce the number of frames in the attention process. This makes FGT feasible for higher-resolution videos, but the performance will degrade.
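To see why trimming the frame count in attention is the most effective lever, here is a rough back-of-envelope estimate. Dense self-attention over T frames builds a score matrix that is quadratic in the total token count, so halving T roughly quarters that matrix. The token-grid size and fp32 assumption below are illustrative, not FGT's actual configuration:

```python
def attn_matrix_gib(num_frames, tokens_per_frame, bytes_per_elem=4):
    """Approximate memory of a dense attention score matrix (N x N, N = T * tokens)."""
    n = num_frames * tokens_per_frame
    return n * n * bytes_per_elem / 2**30

# Illustrative token grid: a 45x90 patch grid, e.g. a 720x1440 frame with
# 16x16 patches -- an assumption, not FGT's real tokenization.
tokens = 45 * 90

for t in (10, 5, 3):
    print(f"{t} frames -> ~{attn_matrix_gib(t, tokens):.1f} GiB")
```

With these made-up numbers, 10 frames already cost ~6 GiB for a single attention map (the same order of magnitude as the failed 6.95 GiB allocation in the error above), while 5 frames need only a quarter of that.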

Btw, I recommend another of our video inpainting projects, ISVI. That algorithm can process videos at larger resolutions (up to 4K).

wangdi9 commented 1 year ago

Thanks for your kind and quick help! It solved a big puzzle for me. I understand. Thanks again!