Closed Dylan-Jinx closed 5 months ago
I'm running into the same issue.
My environment: PyTorch 1.11.0, Python 3.8 (Ubuntu 20.04), CUDA 11.3, GPU: 1× RTX A5000 (24 GB VRAM), CPU: 14-core Intel(R) Xeon(R) Gold 6330 @ 2.00 GHz, RAM: 30 GB, system disk: 20 GB, data disk: 50 GB SSD. While running the demo with the basicvsr_plusplus_reds4.pth checkpoint, I fed in a blurry video of myself several tens of seconds long, and after a while the process was killed. I tried setting --max-seq-len=1, but the process was still killed. How can I solve this problem?
@huhai463127310 I solved this problem. Try using a test video of three or four seconds; when I fed a minimal video into the demo, it ran successfully. If you have any other questions, you can email me or leave your email.
I found the cause of this problem. The demo reads the whole video into memory and extracts all the frames into an array, which consumes a very large amount of memory. I tried to fix it, and you can view my version here: https://github.com/CarlGao4/BasicVSR_PlusPlus/blob/master/demo/restoration_large_video_demo.py Currently, the input can only be a video file, and the output supports frames or stdout.
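The streaming idea described above can be sketched like this (a minimal illustration of the concept, not the actual code from the linked script; in the real demo the frames would come from something like `cv2.VideoCapture`):

```python
def iter_chunks(frames, chunk_size=8):
    """Yield frames in lists of at most chunk_size, so only one chunk
    lives in memory at a time instead of the whole decoded video."""
    chunk = []
    for frame in frames:          # frames can be any iterator, e.g. a
        chunk.append(frame)       # generator wrapping cv2.VideoCapture.read()
        if len(chunk) == chunk_size:
            yield chunk
            chunk = []
    if chunk:                     # flush the last, possibly short, chunk
        yield chunk
```

Each chunk would be restored and written out before the next one is decoded, keeping peak memory proportional to the chunk size rather than the video length.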
@CarlGao4, hello, have you solved the problem of reading all the frames of the video into memory? If you have solved it or have an idea, could you tell me how?
Just download this file and run the same command you used to process your video, replacing restoration_video_demo.py with restoration_large_video_demo.py. Please remember that this script only accepts video input and picture-sequence output.
16 GB RAM and 6 GB VRAM are still not enough for a 1.7 GB, one-hour-long 1080p video... I saw GPU usage reach 100% first, but after a few seconds it hit OOM.
load checkpoint from local path: chkpts/basicvsr_plusplus_reds4.pth
input image size: (1920, 1080)
0%| | 0/215834 [00:25<?, ?frame/s]
Traceback (most recent call last):
File "BasicVSR_PlusPlus\demo\restoration_large_video_demo.py", line 104, in <module>
main()
File "BasicVSR_PlusPlus\demo\restoration_large_video_demo.py", line 92, in main
res = model(lq=data_chunk.to(device), test_mode=True)["output"].cpu()
File "anaconda3\envs\basicvsrplusplus\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "anaconda3\envs\basicvsrplusplus\lib\site-packages\mmcv\runner\fp16_utils.py", line 116, in new_func
return old_func(*args, **kwargs)
File "basicvsr_plusplus\mmedit\models\restorers\basic_restorer.py", line 75, in forward
return self.forward_test(lq, gt, **kwargs)
File "basicvsr_plusplus\mmedit\models\restorers\basicvsr.py", line 175, in forward_test
output = self.generator(lq)
File "anaconda3\envs\basicvsrplusplus\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "basicvsr_plusplus\mmedit\models\backbones\sr_backbones\basicvsr_pp.py", line 353, in forward
return self.upsample(lqs, feats)
File "basicvsr_plusplus\mmedit\models\backbones\sr_backbones\basicvsr_pp.py", line 269, in upsample
hr = self.lrelu(self.upsample2(hr))
File "anaconda3\envs\basicvsrplusplus\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "basicvsr_plusplus\mmedit\models\common\upsample.py", line 50, in forward
x = F.pixel_shuffle(x, self.scale_factor)
RuntimeError: CUDA out of memory. Tried to allocate 7.91 GiB (GPU 0; 6.00 GiB total capacity; 10.46 GiB already allocated; 0 bytes free; 10.96 GiB reserved in total by PyTorch)
You can try reducing --max-seq-len. I remember that I could upscale 1080p with only 4 GiB when it was set to 1 (though it may reduce quality).
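A rough sketch of what --max-seq-len appears to do, based on this discussion (my own illustration of the assumed behavior, not the script's actual code): the frame sequence is split along the time axis so the model only ever sees a bounded number of frames at once.

```python
def split_sequence(frames, max_seq_len):
    """Split a frame sequence (a list, or a (T, C, H, W) tensor) along
    the time axis into pieces of at most max_seq_len frames."""
    return [frames[i:i + max_seq_len]
            for i in range(0, len(frames), max_seq_len)]
```

With max_seq_len=1 each frame is restored alone: minimal VRAM use, but the model loses the temporal context that BasicVSR++ normally exploits, which is likely why quality drops.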
Hello, I encountered this problem while using your file. Why is this?
Maybe try upgrading Python to 3.9? Please also remember to install torch<=2.0.1 and mmcv-full<=1.6.0
Thank you! After upgrading Python to 3.9 it runs, but it is very slow, processing about one frame per second. Is this normal?
Yes, it's about 1 frame/s on a 4060 when enlarging 540p videos.
Well, I've tried adding --max-seq-len 1, which produced the result above. So my poor 2060 with 6 GB VRAM maybe just can't handle it.
Maybe you can try the 2x model?
Do you have the 2x model?
What version of the CUDA toolkit are you using? When the version is too high, it may prevent the installation of an older mmcv-full; when it is too low, it may result in errors like RuntimeError: CUDA error: invalid device function or Segmentation fault (core dumped).
My CUDA version is 11.3
You should use the same CUDA version that your PyTorch build was compiled against.
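To check which CUDA build your PyTorch uses, and to pick a matching mmcv-full wheel index, something like the following can help. The URL scheme follows the OpenMMLab wheel index as I recall it from the mmcv installation docs; `mmcv_index_url` is my own hypothetical helper, not part of mmcv:

```python
# Check the CUDA build first:
#   python -c "import torch; print(torch.__version__, torch.version.cuda)"
# e.g. "1.11.0+cu113 11.3" means you need cu113 wheels.

def mmcv_index_url(torch_version: str, cuda_version: str) -> str:
    """Build the OpenMMLab wheel index URL for a torch/CUDA pair."""
    cu = "cu" + cuda_version.replace(".", "")          # "11.3" -> "cu113"
    # keep only major.minor of the torch version, dropping any "+cuXXX" tag
    torch_mm = ".".join(torch_version.split("+")[0].split(".")[:2])
    return (f"https://download.openmmlab.com/mmcv/dist/"
            f"{cu}/torch{torch_mm}/index.html")

print(mmcv_index_url("1.11.0", "11.3"))
# -> https://download.openmmlab.com/mmcv/dist/cu113/torch1.11/index.html
```

You would then install with `pip install "mmcv-full==1.6.0" -f <that URL>`, so pip only sees wheels prebuilt for your exact torch/CUDA combination.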
I've tried, but then I can't install mmcv-full below 1.6.0; I don't know why.
What does your error look like?
You can refer to the official documentation of mmedit or mmagic for installation; that's how I did it.
I added the large-video demo file to the latest mmagic, and an error was reported when it was executed. How do I change the code to adapt it to the latest mmagic?
I did not use the latest version of mmagic. Instead, I used mmedit (an older version of mmagic) from the source code the author provides on GitHub. Here is the URL: https://zyhmmediting-zh.readthedocs.io/en/dev-1.x/model_zoo/%E8%A7%86%E9%A2%91%E8%B6%85%E5%88%86%E8%BE%A8%E7%8E%87.html. I used it to set up the environment and then ran the author's source code.