Open phil-man-git-hub opened 1 year ago
Hi Phil, the reason newer RIFE models haven't made their way into flameTimewarpML yet is simply that the quality of the v3 and v4 models is considerably worse compared to v2. The whole direction of RIFE development seems to be towards making the model lighter and faster while retaining reasonably good results, whilst for us it is more important to achieve better results, even in exchange for more rendering time and GPU memory.
The good thing about the v4 models is that they're trained on 7-frame sequences and so seem to be pretty good at ratio warping (retiming to an arbitrary point between two frames), whereas the v2 model has to work in iterations since it can only estimate the frame halfway between two given ones.
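For illustration, here is a minimal sketch of that difference, assuming `rife_v2(frame_a, frame_b)` returns only the midpoint frame and `rife_v4(frame_a, frame_b, t)` accepts an arbitrary ratio. Both callables are stand-ins, not the actual flameTimewarpML API:

```python
def interpolate_v2(rife_v2, frame_a, frame_b, t, depth=4):
    """Approximate an arbitrary ratio t with a midpoint-only model by
    bisecting the interval `depth` times and keeping the closest midpoint."""
    lo, hi = 0.0, 1.0
    a, b = frame_a, frame_b
    mid = None
    for _ in range(depth):
        mid = rife_v2(a, b)        # frame exactly halfway between a and b
        mid_t = (lo + hi) / 2.0
        if t < mid_t:
            b, hi = mid, mid_t     # keep searching in the left half
        else:
            a, lo = mid, mid_t     # keep searching in the right half
    return mid

def interpolate_v4(rife_v4, frame_a, frame_b, t):
    """A v4-style model takes the ratio directly: one inference call."""
    return rife_v4(frame_a, frame_b, t)
```

Each extra bisection step in the v2 path means another full inference pass, which is why arbitrary-ratio support in v4 is attractive for retiming work.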
At the moment, in v0.5, some work has already been done to handle linear footage with super-bright and negative values, and I've been experimenting with adding the RAFT optical flow estimation model to help RIFE perform with fewer errors, especially on repetitive textures.
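The exact compression curve used in v0.5 isn't shown here; as a rough illustration of the idea, scene-linear footage with super-bright and negative values can be squashed into a bounded range with an invertible soft roll-off before inference and expanded back afterwards. Function names and the threshold below are hypothetical:

```python
import torch

def compress_linear(x, threshold=1.0):
    """Sketch of an invertible compression for scene-linear footage:
    values inside [-threshold, threshold] pass through unchanged,
    super-bright (and strongly negative) values are squashed
    logarithmically so the network sees a bounded range."""
    sign = torch.sign(x)
    mag = torch.abs(x)
    squashed = torch.where(
        mag > threshold,
        threshold + torch.log1p(mag - threshold),  # soft roll-off above threshold
        mag,
    )
    return sign * squashed

def expand_linear(y, threshold=1.0):
    """Exact inverse of compress_linear, applied to the interpolated result."""
    sign = torch.sign(y)
    mag = torch.abs(y)
    expanded = torch.where(
        mag > threshold,
        threshold + torch.expm1(mag - threshold),
        mag,
    )
    return sign * expanded
```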
My next step would be to modify the v4 model so that its initial flow is taken from RAFT, and to try re-training it with more data taken from real Arri cameras, along with some 35mm scans I happen to have filmed, using the new dynamic range compression approach introduced in v0.5. I'd also like to extend the Unet artefact correction that sits at the very end of the processing chain (and is actually removed in recent v4 models) to the iterative warp layers as well. It is difficult to give a time-frame, as on my current hardware it takes about 5 days to train the current model from scratch, but it should be possible to go step by step and add more relevant data to modify particular layers instead of re-training the whole model.
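How exactly the RAFT flow would be injected into the v4 blocks is the open research question above, so the sketch below only covers the standard building blocks: estimating flow with torchvision's RAFT implementation and backward-warping a frame with it. The function names are my own, not part of flameTimewarpML:

```python
import torch
import torch.nn.functional as F
from torchvision.models.optical_flow import raft_large, Raft_Large_Weights

def estimate_flow(frame_a, frame_b, device="cuda"):
    """frame_a, frame_b: float tensors (N, 3, H, W) already scaled to [-1, 1],
    with H and W divisible by 8 as RAFT expects."""
    model = raft_large(weights=Raft_Large_Weights.DEFAULT).to(device).eval()
    with torch.no_grad():
        flow_predictions = model(frame_a.to(device), frame_b.to(device))
    return flow_predictions[-1]  # last (finest) refinement iteration

def backward_warp(frame, flow):
    """Warp `frame` with a per-pixel displacement field (N, 2, H, W)."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device),
        torch.arange(w, device=frame.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).float()   # (2, H, W), pixel coordinates
    grid = grid.unsqueeze(0) + flow               # add the estimated displacement
    # normalise to [-1, 1] as grid_sample expects
    grid[:, 0] = 2.0 * grid[:, 0] / (w - 1) - 1.0
    grid[:, 1] = 2.0 * grid[:, 1] / (h - 1) - 1.0
    return F.grid_sample(frame, grid.permute(0, 2, 3, 1), align_corners=True)
```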
Hope that makes sense. I'm happy to discuss and try any other ideas and approaches.
There is a newer model for Real-Time Intermediate Flow Estimation for Video Frame Interpolation.
Is it possible to update flameTimewarpML?
Thank you.
Phil MAN