ltkong218 / IFRNet

IFRNet: Intermediate Feature Refine Network for Efficient Frame Interpolation (CVPR 2022)
MIT License

Video frame interpolation usage #7

Open Kupchanski opened 2 years ago

Kupchanski commented 2 years ago

Hello! Thanks for your work!

Could you please suggest the best way to use this model to interpolate a video? Should I just take two neighbouring frames of the video, run inference on them, and then stitch the new frames back in?

Does the model need to be retrained for each video, or can it be used to interpolate any video with good quality?

Thank you in advance for your reply!

ltkong218 commented 2 years ago

Thanks for your interest.

Our pretrained models already achieve relatively good frame interpolation visual quality on common videos.

To get the best visual quality on your specific videos, you can load the provided checkpoint and then fine-tune IFRNet on your collected video dataset, which should contain a sufficient number of frame sequences with diverse motion and texture.

The model does not need to be retrained for each video; it only needs to be trained once on a dataset containing all of these videos. Then you can get good frame interpolation quality on any video in the same domain as the training dataset. For training and inference, you can refer to train_vimeo90k.py and demo_2x.py for 2x interpolation, and train_gopro.py and demo_8x.py for 8x interpolation.
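
For reference, a minimal sketch of the whole-video 2x workflow Kupchanski described (read consecutive frame pairs, synthesize the middle frame, interleave it into the output) could look like the code below. It assumes the `Model` class from `models/IFRNet.py` and the `inference(img0, img1, embt)` call used in demo_2x.py; the checkpoint path, input/output file names, and the frame conversion helpers are placeholders, and frame dimensions may need padding to whatever multiple the network's downsampling expects (see demo_2x.py for the exact interface).

```python
import cv2
import torch

from models.IFRNet import Model

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = Model().to(device).eval()
model.load_state_dict(torch.load('checkpoints/IFRNet/IFRNet_Vimeo90K.pth', map_location=device))

def to_tensor(frame):
    # HxWxC uint8 BGR frame -> 1xCxHxW float tensor in [0, 1]
    return torch.from_numpy(frame.transpose(2, 0, 1)).unsqueeze(0).float().div(255.0).to(device)

def to_frame(tensor):
    # 1xCxHxW float tensor in [0, 1] -> HxWxC uint8 BGR frame
    return (tensor.clamp(0, 1) * 255.0).byte().squeeze(0).cpu().numpy().transpose(1, 2, 0).copy()

reader = cv2.VideoCapture('input.mp4')  # hypothetical input path
fps = reader.get(cv2.CAP_PROP_FPS)
w = int(reader.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter('output_2x.mp4', cv2.VideoWriter_fourcc(*'mp4v'), fps * 2, (w, h))

embt = torch.tensor(0.5, device=device).view(1, 1, 1, 1)  # interpolate at t = 0.5

ok, prev = reader.read()
while ok:
    ok, curr = reader.read()
    writer.write(prev)                     # keep the original frame
    if not ok:
        break
    with torch.no_grad():
        # assumed inference interface, check demo_2x.py for the exact call
        mid = model.inference(to_tensor(prev), to_tensor(curr), embt)
    writer.write(to_frame(mid))            # insert the synthesized middle frame
    prev = curr

reader.release()
writer.release()
```

This writes every original frame plus one synthesized frame between each consecutive pair, so the output plays at twice the frame rate; for 8x interpolation you would instead query several intermediate time steps per pair, as demo_8x.py illustrates.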