I noticed that inference_realesrgan_video.py writes each temporary frame to a PNG file. This takes up a lot of storage space (~500 GB for a 24-minute input at 4K output) when the video has many frames.
I optimized the code to process frames as a stream, with no temporary files. However, inference_realesrgan_video.py has some redundant features that are hard to optimize. I think inference_realesrgan_video.py should focus on video-to-video, so I removed those features.
My pull request will be released soon.
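To illustrate the idea, here is a minimal sketch of streaming processing: frames flow one at a time from a reader through the upscaler to a writer, so nothing is spilled to disk as PNGs. The `read_frames` generator and the nearest-neighbor `upscale` are hypothetical placeholders standing in for the real video decoder/encoder (e.g. an ffmpeg pipe) and the Real-ESRGAN model; this is not the actual PR code.

```python
import numpy as np

def read_frames(num_frames, height, width):
    """Placeholder for a streaming decoder (e.g. frames read from an ffmpeg pipe)."""
    for i in range(num_frames):
        # Synthetic frame: every pixel holds the frame index.
        yield np.full((height, width, 3), i, dtype=np.uint8)

def upscale(frame, scale=4):
    """Placeholder for the Real-ESRGAN model: nearest-neighbor upscale."""
    return np.kron(frame, np.ones((scale, scale, 1), dtype=np.uint8))

def process_video(frames, scale=4):
    """Stream frames through the upscaler; only one frame is held in memory at a time."""
    for frame in frames:
        yield upscale(frame, scale)

# Each output frame can be piped straight into an encoder instead of
# being written out as a temporary PNG.
for out_frame in process_video(read_frames(3, 2, 2)):
    print(out_frame.shape)
```

Because both ends are generators, peak memory stays at roughly one frame regardless of video length, which is what removes the ~500 GB of temporary files.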