myungsub / CAIN

Source code for AAAI 2020 paper "Channel Attention Is All You Need for Video Frame Interpolation"
MIT License

How to interpolate a frame at an arbitrary time #4

Closed: chen-san closed this issue 4 years ago

chen-san commented 4 years ago

Hey, buddy, I like your model so much after trying some video samples. This is a state-of-the-art model and amazing work! You are a genius. Recently, I have been wondering how to interpolate a frame at an arbitrary time, like t=0.2. Unlike optical flow methods, kernel-based methods can only interpolate a single frame at t=0.5 (and, recursively, at t=0.25, t=0.75, ...). Do you think it is possible to feed a temporal variable t into the model and train with it? I am looking forward to your answer.
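To illustrate the "t=0.25, t=0.75, ..." point: a model that can only synthesize midpoints can still reach any dyadic time by recursive bisection, which approximates other times like t=0.2 to within the recursion depth. This is a minimal sketch, not code from the CAIN repo; `midpoint` stands in for any black-box frame-interpolation model, and here plain numbers stand in for frames.

```python
def interpolate_at(frame0, frame1, t, midpoint, depth=4):
    """Approximate the frame at time t in (0, 1) using a model that can
    only synthesize the midpoint (t = 0.5) of two frames.

    Recursively bisects the interval containing t; exact for dyadic t
    (0.25, 0.75, ...) reachable within `depth`, approximate otherwise.
    """
    if depth == 0 or abs(t - 0.5) < 1e-9:
        return midpoint(frame0, frame1)
    mid = midpoint(frame0, frame1)
    if t < 0.5:
        # Rescale t into the left half-interval [0, 0.5].
        return interpolate_at(frame0, mid, 2 * t, midpoint, depth - 1)
    # Rescale t into the right half-interval [0.5, 1].
    return interpolate_at(mid, frame1, 2 * (t - 0.5), midpoint, depth - 1)


if __name__ == "__main__":
    avg = lambda a, b: (a + b) / 2  # toy "model": linear blend of scalars
    print(interpolate_at(0.0, 1.0, 0.25, avg))  # exact dyadic time
    print(interpolate_at(0.0, 1.0, 0.2, avg))   # approximated within depth
```

Note the cost: reaching finer times means one extra model forward pass per level of recursion, and interpolation errors compound at each level, which is one reason an explicit temporal input is attractive.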

myungsub commented 4 years ago

Hi @chen-san, thanks for your interest in our work. As you mentioned, arbitrary-time interpolation is quite straightforward with optical flow-based methods, but our model currently cannot handle arbitrary timesteps effectively. This is mainly due to how the model is trained: it uses the Vimeo90K-Triplet dataset, which, as its name suggests, is composed of frame triplets, so the network only ever learns the t=0.5 case. Training with an explicit temporal variable could be possible, but I think it would take quite a long training schedule and enough data covering various intermediate times 0 < t < 1. I previously tried (briefly) training with a temporal variable, but it was quite difficult to gather high-quality, high-fps data to use for training. There are some 240-fps video datasets (e.g., GOPRO, Adobe-240fps), but those videos contain blur and noise artifacts, which led to low-quality outputs even after training. If you have good source videos, I think it's worth a try. (Also note that there are some recent works that try to produce clean interpolations from blurry inputs, such as [this one].)
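One common way to "take a temporal variable t into the model", mentioned above, is to condition the network on t by concatenating a constant t-valued channel to the stacked input frames. This is a hypothetical sketch of that conditioning scheme, not part of CAIN's released code; the shapes and function name are assumptions for illustration.

```python
import numpy as np

def add_time_channel(frame_pair, t):
    """Concatenate a constant channel encoding the target time t to a
    stacked input frame pair of shape (C, H, W).

    Hypothetical conditioning scheme (not from the CAIN repo): the
    network can then be trained on (frame0, frame1, t) -> frame_t
    samples drawn from high-fps video, instead of fixed triplets.
    """
    c, h, w = frame_pair.shape
    t_channel = np.full((1, h, w), t, dtype=frame_pair.dtype)
    return np.concatenate([frame_pair, t_channel], axis=0)


if __name__ == "__main__":
    # Two stacked RGB frames (3 + 3 channels), conditioned on t = 0.2.
    x = np.zeros((6, 32, 32), dtype=np.float32)
    y = add_time_channel(x, 0.2)
    print(y.shape)  # one extra channel: (7, 32, 32)
```

As noted in the reply, the hard part is not the architecture change but sourcing clean high-fps training data with ground-truth frames at many intermediate times.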

Hope it helped. Thanks!