Mukosame / Zooming-Slow-Mo-CVPR-2020

Fast and Accurate One-Stage Space-Time Video Super-Resolution (accepted in CVPR 2020)
GNU General Public License v3.0

Change scale in video_to_zsm? #59

Open RKelln opened 3 years ago

RKelln commented 3 years ago

Is it possible to zoom at scales other than 4?

I tried changing the scale to 2 in video_to_zsm.py, but the output only used the top-left quarter of the input image. Is this baked into the model, or is there somewhere I should look to make changes?

Thanks!

Mukosame commented 3 years ago

Hi @RKelln , thanks for your interest in this work! Yes, you can extend this framework to x2 upscaling. However, our current weights are for x4 only. To make it work correctly, you need to train a separate x2 model by changing the scale here to 2 and setting the GT size to twice the LR size.
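As a rough illustration of that relationship (the option keys below only mirror the usual structure of this repo's training configs and the patch sizes are placeholders, not confirmed values):

```python
# Hedged sketch of the x2 change: key names and sizes are illustrative only.
opt = {
    "scale": 2,  # network upscaling factor (the released weights use 4)
    "datasets": {
        "train": {
            "LQ_size": 32,  # LR training patch size (example value)
            "GT_size": 64,  # GT patch size: must be scale * LQ_size for an x2 model
        }
    },
}
assert opt["datasets"]["train"]["GT_size"] == opt["scale"] * opt["datasets"]["train"]["LQ_size"]
```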

RKelln commented 3 years ago

Thanks for the quick reply, Mukosame. In the paper you mention training on "2 Nvidia Titan XP GPU". How long did the training take? I probably won't have time right now to train something new for my current project, but I can consider it for the future. I have a few other ideas to try out too.

I found a DCNv2 repo that works with PyTorch 1.8 and integrated it into my fork, in case you're curious. I just mashed it in for quick testing, but it might be useful: https://github.com/RKelln/Zooming-Slow-Mo-CVPR-2020

Mukosame commented 3 years ago

Hi @RKelln , as for the training time, it mostly depends on how many iterations you set. From past feedback in this repo, a common figure is roughly 2 minutes per 100 iterations. Thanks for bringing up the DCNv2 that works with PyTorch 1.8! A lot of people have been struggling with gcc compilation issues. I will test our current weights' compatibility with the PyTorch DCN and upgrade this framework.
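As a back-of-envelope estimate from that figure (the 600k total iterations below is an assumed example, not a setting stated in this thread):

```python
# Rough training-time estimate from ~2 min per 100 iterations.
minutes_per_100_iters = 2
total_iters = 600_000  # assumed example, not a confirmed config value
total_minutes = total_iters / 100 * minutes_per_100_iters
print(f"~{total_minutes / 60:.0f} hours (~{total_minutes / (60 * 24):.1f} days)")
# -> ~200 hours (~8.3 days)
```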

RKelln commented 3 years ago

Credit for the DCNv2 goes to tteepe: https://github.com/tteepe/DCNv2 (you don't even need to compile it). :1st_place_medal:
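For reference, a compile-free modulated deformable convolution can be built on torchvision.ops.deform_conv2d (torchvision >= 0.9, the release paired with PyTorch 1.8). The module below is only a minimal sketch of that idea, not the exact DCN module used in this repo or in tteepe's package:

```python
# Minimal sketch (assumption, not this repo's actual module): a modulated
# deformable conv (DCNv2) built on torchvision.ops.deform_conv2d, which ships
# with torchvision >= 0.9 / PyTorch 1.8 and needs no custom CUDA compilation.
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d


class ModulatedDeformConvNoCompile(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1,
                 deformable_groups=8):
        super().__init__()
        self.stride, self.padding = stride, padding
        self.weight = nn.Parameter(torch.empty(out_ch, in_ch, kernel_size, kernel_size))
        self.bias = nn.Parameter(torch.zeros(out_ch))
        nn.init.kaiming_uniform_(self.weight, a=1)
        # One plain conv predicts offsets (2*K*K per group) and masks (K*K per group);
        # zero init so the layer starts close to a regular convolution.
        self.conv_offset_mask = nn.Conv2d(
            in_ch, deformable_groups * 3 * kernel_size * kernel_size,
            kernel_size=kernel_size, stride=stride, padding=padding)
        nn.init.zeros_(self.conv_offset_mask.weight)
        nn.init.zeros_(self.conv_offset_mask.bias)

    def forward(self, x):
        o1, o2, mask = torch.chunk(self.conv_offset_mask(x), 3, dim=1)
        offset = torch.cat((o1, o2), dim=1)
        mask = torch.sigmoid(mask)
        return deform_conv2d(x, offset, self.weight, self.bias,
                             stride=self.stride, padding=self.padding, mask=mask)


# Quick shape check with 64-channel features, a typical width for alignment modules.
feat = torch.randn(1, 64, 32, 32)
out = ModulatedDeformConvNoCompile(64, 64)(feat)
print(out.shape)  # torch.Size([1, 64, 32, 32])
```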