Open hd926 opened 2 years ago
This currently isn't possible due to how node execution works. You can think of it as each input and output being one item. Frame interpolation by its nature takes in multiple frames and outputs more than that. So, it is just incompatible with how chaiNNer works right now.
However, this should be possible once we rewrite how iterators work. Basically, our plan is to make each input and output allow for iterable sequences. This will mean that a node could take in a frame sequence of any length and output a frame sequence of a longer length and everything would just work.
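To make the idea concrete, here's a minimal sketch of what a sequence-in, longer-sequence-out node could look like. The names (`interpolate_sequence`, `midpoint`) are hypothetical stand-ins, not chaiNNer APIs; `midpoint` represents a model like RIFE that synthesizes an in-between frame from two neighbors:

```python
from typing import Callable, Iterator, TypeVar

Frame = TypeVar("Frame")

def interpolate_sequence(
    frames: Iterator[Frame],
    midpoint: Callable[[Frame, Frame], Frame],
) -> Iterator[Frame]:
    """Consume a frame sequence of length N and yield one of length
    2N-1 by inserting a synthesized frame between each adjacent pair.
    This is exactly the many-to-many mapping a 1:1 iterator can't do."""
    prev = None
    for frame in frames:
        if prev is not None:
            yield midpoint(prev, frame)  # the new in-between frame
        yield frame
        prev = frame
```

A 3-frame input comes out as 5 frames, which is the kind of length change mid-chain that the current iterator model can't express.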
But, that requires doing that rewrite, which is going to take quite some time to get out.
Makes sense, I thought that was the case, but it was still worth asking
Yeah, it's something I've wanted to add for a while, but it just can't work without a really hacky and janky solution
Maybe if you add "FILM OPEN" and "FILM SAVE" objects?
No. Well, that's basically how it will work once we rewrite how iteration works. But to run something on each frame right now, we need an iterator, and iterators only work on one frame at a time.
I think the limitation is gone now. I love this software and would really appreciate it if interpolation were implemented.
Unfortunately this limitation is still there. Even though we drastically changed how iterators work, they still have many of the same limitations. They're still basically just for loops mapping an array to the downstream chain. We have a lot to do still before we can have many-to-many iterable mappings that change sequence length mid-chain.
@joeyballentine Couldn't we make this work if the Frame Interpolation node takes a video instead of an iterator as input?
Basically, we could support video upscaling models by making the Upscale Video node take a video file as input, instead of an iterator of frames. This would make it easy to get x-many frames ahead and behind, which are usually necessary for temporal stability. The Upscale Video node would have the same outputs as Load Video, so users can then decide what to do with the upscaled/interpolated video.
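As a sketch of why a video-level node makes the temporal context easy: once the node owns the whole clip, looking x frames ahead and behind is just slicing around the current index. Everything here is illustrative, not a chaiNNer API; `int` stands in for a decoded frame, and a real node would decode lazily rather than buffer the whole file:

```python
from typing import Iterable, Iterator, List

def temporal_windows(frames: Iterable[int], radius: int) -> Iterator[List[int]]:
    """For each frame, yield a window of up to `radius` frames behind
    and ahead, clamped at the clip boundaries. This is the context a
    temporally-stable video model typically needs per output frame."""
    buf = list(frames)  # hypothetical: a real node would stream/decode lazily
    for i in range(len(buf)):
        lo = max(0, i - radius)
        hi = min(len(buf), i + radius + 1)
        yield buf[lo:hi]
```

With a per-frame iterator this lookahead is awkward, because each downstream node only ever sees one frame; with the whole video in hand it's trivial.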
I think this would be a workable temporary solution that would allow us to at least get support for simple use cases.
That could work, but then you wouldn't be able to do any pre-processing before upscaling or interpolating the video
Yes, but we would have frame interpolation and support video models. It's a temporary solution until we can get the real thing.
That makes sense then. I'd be ok with that
Great news to see you're considering it. I strongly believe this would increase the software's usefulness as a toolkit. I don't know if it helps, but I'm attaching a workflow from ComfyUI (a node-based AI generation tool) to give some idea. I believe the concept is that it loads two image frames, uses RIFE or FILM models to generate the intermediate frames, and keeps looping over frames until all the images are processed. Maybe you can take a look at the code implementation for some ideas.
Maybe a node could be created that takes a sequence of images and accumulates them into a video with a defined frame rate; frame interpolation could then be run on that video.
Motivation The only GUI for frame interpolation I know of is Flowframes, but it is unfortunately Windows-only. I think it would be great to also be able to interpolate frames alongside upscaling.
Description Would it be possible to add a node similar to the upscale node, so you can interpolate frames or a video with a model you select, like RIFE for example?