further-home closed this issue 3 years ago.
Yes, same on my side. You need to chop the whole image into smaller patches, process each patch, and stitch them back together.
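The chop-and-stitch idea above can be sketched as follows. This is a minimal NumPy sketch of non-overlapping patching, not the repo's own code; patch sizes are illustrative:

```python
import numpy as np

def chop(img, ph, pw):
    """Split an H x W x C image into non-overlapping ph x pw patches
    (row-major order). Assumes H % ph == 0 and W % pw == 0."""
    h, w = img.shape[:2]
    return [img[y:y + ph, x:x + pw]
            for y in range(0, h, ph)
            for x in range(0, w, pw)]

def merge(patches, h, w):
    """Reassemble row-major patches into an h x w image."""
    ph, pw = patches[0].shape[:2]
    cols = w // pw
    rows = [np.concatenate(patches[i * cols:(i + 1) * cols], axis=1)
            for i in range(h // ph)]
    return np.concatenate(rows, axis=0)

# round-trip check on a dummy 1080 x 1920 RGB frame
frame = (np.arange(1080 * 1920 * 3) % 255).astype(np.uint8)
frame = frame.reshape(1080, 1920, 3)
patches = chop(frame, 270, 240)          # 4 rows x 8 cols = 32 patches
restored = merge(patches, 1080, 1920)
assert np.array_equal(frame, restored)
```

After super-resolving each patch, the same `merge` works if you multiply `h`, `w` by the scale factor.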
Could you add the code for chopping the whole image into smaller patches (and combining them back) to the project, or share a copy with me?
I've split the images into blocks, but how can I merge them back?
------------------ Original message ------------------
From: "Mukosame/Zooming-Slow-Mo-CVPR-2020"; Sent: Thursday, October 21, 2021, 1:39 AM; Subject: Re: [Mukosame/Zooming-Slow-Mo-CVPR-2020] OOM (Issue #64)
Closed #64.
— You are receiving this because you authored the thread. Reply to this email directly, view it on GitHub, or unsubscribe.
Excuse me: I have segmented the images, run super-resolution, and combined the results into a video. Compared with the original video, the output is slightly distorted, and although my clip is only 10 s long, it takes 5 hours to generate the super-resolution video.
It is really weird that it takes ~5 hrs to process all the videos. Would you mind sharing how you conduct the chop-and-forward inference?
Hello, I didn't chop the images inside the network. My processing flow is as follows: first, split the video into frames; divide each frame into blocks; send each block through the network for super-resolution; stitch the upscaled blocks of each frame back together; and finally recombine the frames into a video.
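The per-frame flow described above can be sketched like this. The network is replaced by a nearest-neighbour upscaling stub (`upscale_stub` is a stand-in, not the Zooming-Slow-Mo model), and the video split/recombine steps (e.g. via ffmpeg) are omitted:

```python
import numpy as np

def upscale_stub(patch, scale=4):
    """Stand-in for the SR network: nearest-neighbour 4x upscale."""
    return patch.repeat(scale, axis=0).repeat(scale, axis=1)

def sr_frame(frame, ph=270, pw=240, scale=4):
    """Chop a frame into ph x pw blocks, upscale each, and stitch the
    results directly into the output canvas."""
    h, w, c = frame.shape
    out = np.zeros((h * scale, w * scale, c), dtype=frame.dtype)
    for y in range(0, h, ph):
        for x in range(0, w, pw):
            out[y * scale:(y + ph) * scale,
                x * scale:(x + pw) * scale] = \
                upscale_stub(frame[y:y + ph, x:x + pw], scale)
    return out

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
assert sr_frame(frame).shape == (4320, 7680, 3)
```

Writing each upscaled block straight into a preallocated output canvas avoids keeping all blocks in memory before merging.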
I think the reason I am slow is that I cut each 1920×1080 frame into 270×240 patches, so each frame becomes 32 patches. 100 frames therefore produce 3200 patches that all need super-resolution. With a scale factor of 4, each upscaled patch is 1080×960, and I finally stitch them into a 7680×4320 image and then a 7680×4320 video. What steps can I improve to make the algorithm more efficient?
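The counts above check out if the patches are 240 wide by 270 tall (an assumption; that orientation is the one that divides 1920×1080 evenly):

```python
# Patch-count arithmetic for the setup described above.
frame_w, frame_h = 1920, 1080
patch_w, patch_h = 240, 270   # assumed orientation: 240 wide x 270 tall
scale = 4
frames = 100

patches_per_frame = (frame_w // patch_w) * (frame_h // patch_h)  # 8 * 4
total_patches = patches_per_frame * frames
out_w, out_h = frame_w * scale, frame_h * scale

assert patches_per_frame == 32
assert total_patches == 3200
assert (out_w, out_h) == (7680, 4320)
```

So at roughly one patch per second, 3200 patches alone take close to an hour; the per-patch overhead dominates, not the per-pixel work.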
I see. The question is: which part consumes the most time? For example, if you process a 4-frame sequence at 270×240 into 7 frames at 1080×960, how long does that take?
Each patch from 270×240 to 1080×960 is processed very quickly, maybe one per second. But the patch count is huge: for example, 3200 patches cover only 100 frames, i.e. about five seconds of video. At 20,000 frames there could be hundreds of thousands or even millions of patches to super-resolve, which makes the whole process inefficient. Can this be improved? Can you give me a suggestion?
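One common way to cut per-patch overhead (not something the thread itself proposes) is to batch many patches into one forward pass instead of calling the model once per patch. A minimal sketch with a stub model standing in for the network:

```python
import numpy as np

def sr_batched(patches, model, batch_size=16):
    """Run patches through `model` in batches rather than one at a time.
    `model` takes an (N, H, W, C) array and returns N upscaled patches."""
    outs = []
    for i in range(0, len(patches), batch_size):
        batch = np.stack(patches[i:i + batch_size])
        outs.extend(model(batch))
    return outs

# stub model: nearest-neighbour 4x upscale of the whole batch at once
model = lambda b: list(b.repeat(4, axis=1).repeat(4, axis=2))

patches = [np.zeros((270, 240, 3), np.uint8) for _ in range(32)]
outs = sr_batched(patches, model)
assert len(outs) == 32 and outs[0].shape == (1080, 960, 3)
```

On a GPU the same idea (stacking patches along the batch dimension of the network input) amortizes launch and data-transfer overhead across the batch, which is where one-patch-per-second pipelines typically lose their time.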
Processing a 4-frame sequence of 270×240 into 7 frames of 1080×960 is very fast. However, due to the sheer amount of data, it may take a day to process two minutes of video. I'm going to retrain the network, but I don't have videos; I have 200,000 images. Can I use your network for training?
Excuse me, why can't I find the files behind `from data import create_dataloader, create_dataset` and `from models import create_model`? I want to train on my own data, but it always fails with this error:

Traceback (most recent call last):
  File "train.py", line 196, in <module>
    main()
  File "train.py", line 106, in main
    train_size = int(math.ceil(len(train_set) / dataset_opt['batch_size']))
  File "/home/hzh/Zooming-Slow-Mo-CVPR-2020/codes/data/Vimeo7_dataset.py", line 238, in __len__
    return len(self.paths_GT['keys'])
TypeError: list indices must be integers or slices, not str

When I print(train_set), it shows <data.Vimeo7_dataset.Vimeo7Dataset object at 0x7f27fa502940>. I don't know how to solve this. Can you help me? Thanks.
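The traceback says `self.paths_GT` ended up as a plain list while `__len__` indexes it like a dict with a `'keys'` entry, which suggests the dataset's meta-info/cache file was built in the wrong shape for this code path. A minimal reproduction (the key strings are hypothetical, not taken from the repo):

```python
# self.paths_GT built as a list: indexing with a string fails.
paths_GT = ['00001/0266', '00001/0268']   # hypothetical clip keys
try:
    len(paths_GT['keys'])
    reproduced = False
except TypeError:
    reproduced = True
assert reproduced

# The dataset's __len__ expects a dict-shaped structure instead:
paths_GT = {'keys': ['00001/0266', '00001/0268']}
assert len(paths_GT['keys']) == 2
```

So the fix is on the data-preparation side: whatever produces `paths_GT` (the cached meta info for the LMDB/Vimeo dataset) needs to emit a dict containing a `'keys'` list, not a bare list of paths.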
Can I test my data at 1920×1080? It always runs out of memory (OOM).