Open lassefschmidt opened 1 year ago
All of our models naturally support arbitrary timestamp interpolation, although they are trained on Vimeo90K for intermediate-frame interpolation. This is achieved by scaling the bi-directional optical flow between the input frames to an arbitrary timestamp before forward-warping.
To use this feature during testing, all you need to do is change the time_period parameter. For example, you can set it to 0.1, 0.2, or 0.9 (any floating-point number between 0 and 1).
"""python3 -m demo.interp_imgs \
--frame0 demo/images/beanbags0.png \
--frame1 demo/images/beanbags1.png \
--time_period 0.5
"""
Just have a try :)
Hi Xin ! Thank you for your quick reply :-)
I have experimented a lot with UPR-Net over the last few months, dove deeper into many papers (e.g. the Softsplat paper for forward warping), and I am very happy with the results.
You mention at the end of your paper that you will train on Vimeo90K-Septuplet to investigate whether multi-frame interpolation during training improves multi-frame interpolation at test time.
I took your base version and fine-tuned it on the Adobe+GoPro datasets. However, multi-frame interpolation during training does not always help, because the camera (for Adobe+GoPro) moves quite a lot, and by definition you have less information when you interpolate frames that are further apart.
However, I was able to significantly boost multi-frame performance at test time by using the cycle consistency scheme proposed in 2019 (https://github.com/NVIDIA/unsupervised-video-interpolation). I am attaching an overview for a custom scene I evaluate the models on. The orange line is the initial UPR-Net (base version using your pre-trained weights). The green line is the same model after fine-tuning on Adobe+GoPro for ~25K steps with the cycle consistency scheme. The blue line is one of the Super SloMo models from the cycle consistency paper (pre-trained on Adobe, then YouTube + Sintel).
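For readers unfamiliar with the cycle consistency scheme: on a frame triplet, the model synthesizes two pseudo-intermediate frames from the input pairs, then re-interpolates between them; the result should land back on the real middle frame. A minimal sketch of how the loss is wired up (the `interpolate` function here is a stand-in linear blend, not the actual network):

```python
import numpy as np

def interpolate(frame_a, frame_b, t=0.5):
    # Stand-in for the real interpolation network: a plain linear
    # blend, used here only to illustrate the loss wiring.
    return (1 - t) * frame_a + t * frame_b

def cycle_consistency_loss(i0, i1, i2):
    # Synthesize two pseudo-intermediate frames from the input pairs.
    mid_01 = interpolate(i0, i1)   # sits between i0 and i1
    mid_12 = interpolate(i1, i2)   # sits between i1 and i2
    # Re-interpolating between the synthesized frames should land
    # back on the real middle frame i1; the deviation is the loss.
    i1_cycled = interpolate(mid_01, mid_12)
    return float(np.abs(i1_cycled - i1).mean())

# Triplet with perfectly linear brightness change gives zero loss.
i0, i1, i2 = np.zeros((4, 4)), np.ones((4, 4)), 2 * np.ones((4, 4))
loss = cycle_consistency_loss(i0, i1, i2)
```

The appeal for fine-tuning is that the loss needs no ground-truth intermediate frames, so it works on any video, including Adobe+GoPro.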
Thanks again :-))
Great job! And I think you could write a research paper to summarize your findings and give more insight into them :)
How can I get 4x interpolation (3 intermediate frames)?
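With the arbitrary time_period feature described above, 4x interpolation just means running the model once per evenly spaced timestamp between the two inputs. A small sketch (the helper name is mine, not from the repo):

```python
def intermediate_timestamps(factor):
    # For Nx interpolation you need (N - 1) intermediate frames at
    # evenly spaced timestamps strictly between 0 and 1.
    return [i / factor for i in range(1, factor)]

print(intermediate_timestamps(4))  # [0.25, 0.5, 0.75]
```

Each value would then be passed as --time_period to demo.interp_imgs (e.g. --time_period 0.25), saving the output frame after each run.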
Hi there! Thank you for your great work, very interesting read :)
I saw that your repo also supports arbitrary timestamp interpolation. However, to use this feature, I assume I would have to train a custom model? All checkpoints seem to be trained on Vimeo90K, so I would assume the model has not learned how to deal with different timestamp values and would always predict more or less the same frame no matter the value?
Thank you again and best, Lasse