ckkelvinchan / BasicVSR_PlusPlus

Official repository of "BasicVSR++: Improving Video Super-Resolution with Enhanced Propagation and Alignment"
Apache License 2.0

CUDA out of memory #10

Open muhammad-ahmed-ghani opened 2 years ago

muhammad-ahmed-ghani commented 2 years ago

Hi @ckkelvinchan , I have tried running inference on an RTX 2060 with 6 GB of VRAM, but it runs out of memory. I used video.mp4 as input. Is there any parameter or option, such as a tile or batch-size limit, that trades longer inference time for lower memory so the inference can still complete?

ckkelvinchan commented 2 years ago

Hello, you may use a smaller --max-seq-len here. With this value, the sequence is cut into multiple sub-sequences for processing.
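
Roughly, the idea behind --max-seq-len is to process the video in fixed-size chunks so that only one chunk's activations live on the GPU at a time. A minimal sketch of this pattern (the function and variable names are illustrative, not the repo's actual code):

```python
import torch

def chunked_inference(model, frames, max_seq_len):
    """Run video super-resolution chunk by chunk to bound GPU memory.

    frames: float tensor of shape (1, T, C, H, W), already on the GPU.
    """
    outputs = []
    num_frames = frames.size(1)
    with torch.no_grad():
        for start in range(0, num_frames, max_seq_len):
            chunk = frames[:, start:start + max_seq_len]
            # Move each result to CPU right away so GPU memory stays flat.
            outputs.append(model(chunk).cpu())
    return torch.cat(outputs, dim=1)
```

Note the trade-off: a recurrent model like BasicVSR++ propagates information across frames, so cutting the sequence into very short chunks (e.g. --max-seq-len=1) limits temporal propagation and can reduce output quality, even though it minimizes memory.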

Dylan-Jinx commented 1 year ago

@muhammad-ahmed-ghani Hello, have you solved this problem?

muhammad-ahmed-ghani commented 1 year ago

> @muhammad-ahmed-ghani Hello, have you solved this problem?

Yeah, as @ckkelvinchan suggested above (https://github.com/ckkelvinchan/BasicVSR_PlusPlus/issues/10#issuecomment-1116952418): set the max sequence length low enough to fit in your GPU memory.

Dylan-Jinx commented 1 year ago

@muhammad-ahmed-ghani My GPU memory is 24 GB. I tried setting this parameter to 1, but the Linux system killed the process, and I don't know why. All I found was that the GPU memory was always under heavy load.

huhai463127310 commented 1 year ago

@Dylan-Jinx Maybe the resolution of your input video is too large. Also note that the current code supports only x4 upsampling; to work at x2 you would need to modify the code and retrain the network. You can test with a lower-resolution video, or retrain the network for your upsampling magnification.

My case:

A GPU OOM occurred when the input image size was 1920 × 1080, but it worked fine after I changed the input resolution to 1200 × 800.

My script:

```
python demo/restoration_video_demo.py \
    configs/restorers/basicvsr_plusplus/basicvsr_plusplus_c64n7_8x1_600k_reds4.py \
    pth/basicvsr_plusplus_c64n7_8x1_600k_reds4_20210217-db622b2f.pth \
    data/input/test2/ data/output/test2/ \
    --max-seq-len=1
```

My hardware:

My env:

My mmediting code branch: https://github.com/open-mmlab/mmediting/tree/master
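
If you want to try the lower-resolution route before retraining anything, one option is to downscale the input video first. A minimal sketch using OpenCV (this helper is hypothetical, not part of the repo, and it assumes the input is a readable video file):

```python
import cv2

def downscale_video(src_path, dst_path, width=1200, height=800):
    """Resize every frame of a video, e.g. 1920x1080 -> 1200x800."""
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(dst_path, fourcc, fps, (width, height))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # INTER_AREA is a reasonable default when shrinking frames.
        writer.write(cv2.resize(frame, (width, height),
                                interpolation=cv2.INTER_AREA))
    cap.release()
    writer.release()

downscale_video("data/input/video.mp4", "data/input/video_small.mp4")
```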

FlorianDelcour commented 1 year ago

> Hello, you may use a smaller --max-seq-len here. With this value, the sequence is cut into multiple sub-sequences for processing.

Hey, I think your link is not valid anymore. Can you update it? Thanks!