open-mmlab / mmagic

OpenMMLab Multimodal Advanced, Generative, and Intelligent Creation Toolbox. Unlock the magic 🪄: Generative-AI (AIGC), easy-to-use APIs, awesome model zoo, diffusion models, for text-to-image generation, image/video restoration/enhancement, etc.
https://mmagic.readthedocs.io/en/latest/
Apache License 2.0
6.85k stars · 1.05k forks

RTX 3090: video super-resolution inference error #2032

Open Git-Digger opened 11 months ago

Git-Digger commented 11 months ago

Prerequisite

Task

I'm using the official example scripts/configs to run inference on my video for the video super-resolution task

Branch

main branch https://github.com/open-mmlab/mmagic

Environment

Ubuntu 22.04, Python 3.9, mmagic 1.0.3dev0, mmcv 2.0.1, mmengine 0.8.4, GCC 11.3.0

Reproduces the problem - code sample

```python
import os

from mmengine.utils import mkdir_or_exist
from mmagic.apis import MMagicInferencer

# Create a MMagicInferencer instance and infer
video = '/home/onepiece_demo.mp4'
result_out_dir = '/home/onepiece_demo_res.mp4'
mkdir_or_exist(os.path.dirname(result_out_dir))
editor = MMagicInferencer('basicvsr')
results = editor.infer(video=video, result_out_dir=result_out_dir)
```

Reproduces the problem - command or script

```python
video = '/home/onepiece_demo.mp4'
result_out_dir = '/home/onepiece_demo_res.mp4'
mkdir_or_exist(os.path.dirname(result_out_dir))
editor = MMagicInferencer('basicvsr')
results = editor.infer(video=video, result_out_dir=result_out_dir)
```

Reproduces the problem - error message

```text
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 13.18 GiB (GPU 0; 23.69 GiB total capacity; 12.28 GiB already allocated; 8.54 GiB free; 13.38 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
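The traceback itself suggests setting `max_split_size_mb` via `PYTORCH_CUDA_ALLOC_CONF`. A minimal sketch of that mitigation follows; the value `128` is an arbitrary starting point, not a verified fix for this model, and the variable must be set before the first CUDA allocation (ideally before importing `torch`):

```python
import os

# Cap the CUDA caching allocator's split size to reduce fragmentation.
# 128 MiB is a common starting value; tune it for your workload.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# ... only now import torch / mmagic and run inference ...
```

Note this only helps when reserved memory far exceeds allocated memory (fragmentation); it cannot help if the model genuinely needs more than 24 GiB for the whole sequence.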

Additional information

No response

Thanks to anyone who can help!

Git-Digger commented 11 months ago

```text
09/14 17:37:32 - mmengine - WARNING - Failed to search registry with scope "mmagic" in the "function" registry tree. As a workaround, the current "function" registry in "mmengine" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmagic" is a correct scope, or whether the registry is initialized.
/home/server314/anaconda3/envs/exp/lib/python3.9/site-packages/mmengine/visualization/visualizer.py:196: UserWarning: Failed to add <class 'mmengine.visualization.vis_backend.LocalVisBackend'>, please provide the save_dir argument.
  warnings.warn(f'Failed to add {vis_backend.__class__}, '
```

Will this warning affect the results?

MTKSHU commented 11 months ago

(Quoted the original issue report above.)

Maybe you can try:

```python
editor = MMagicInferencer('basicvsr', extra_parameters={'window_size': 1})
```
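The idea behind `window_size=1` is that BasicVSR otherwise propagates features across the whole sequence at once, so peak memory grows with the number of frames; with a small window, only a window's worth of frames must fit on the GPU at a time. As a rough illustration (pure Python, not mmagic's internal implementation), frames can be split into independent windows like this:

```python
def frame_windows(num_frames: int, window_size: int) -> list:
    """Split frame indices into consecutive windows.

    With window_size=1 every frame is restored on its own, so only one
    frame's activations need to fit in GPU memory at a time.
    """
    if window_size < 1:
        raise ValueError("window_size must be >= 1")
    return [
        list(range(start, min(start + window_size, num_frames)))
        for start in range(0, num_frames, window_size)
    ]
```

The trade-off: smaller windows cut memory but also cut the temporal context the model can exploit, so quality may drop compared with whole-sequence propagation.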