steelywing opened this pull request 1 year ago (status: Open)
This should fix #22 and #15
The current code uses the next frame for inference even when that frame belongs to a new scene, so the interpolated frames come out distorted.
Some scene changes still cannot be detected.
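A minimal sketch of the idea, assuming frames are NumPy arrays; the function names and the 0.2 threshold are hypothetical placeholders, not the PR's actual code:

```python
import numpy as np

def is_scene_change(frame_a, frame_b, threshold=0.2):
    """Rough scene-change test: normalized mean absolute pixel difference
    (the 0.2 threshold is a made-up placeholder)."""
    diff = np.abs(frame_a.astype(np.float32) - frame_b.astype(np.float32)).mean()
    return diff / 255.0 > threshold

def pick_interpolation_target(cur_frame, next_frame):
    """If the next frame belongs to a new scene, interpolating toward it blends
    two unrelated images and produces distorted in-between frames, so hold the
    current frame across the cut instead."""
    if is_scene_change(cur_frame, next_frame):
        return cur_frame
    return next_frame
```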
Perhaps we can take a look at how this new project AFI-ForwardDeduplicate may help further improve the smoothness.
The problem with integrating features from AFI-ForwardDeduplicate is the messy code, which makes the process extremely annoying. That project is amazing, but the code is:
1. All written in a single file with no classes.
2. Functions placed all over the place.
3. Big chunks of code stacked under each other.
4. Variables defined in the surrounding script are used inside functions and other pieces of code.
5. Asserts are used rather than exceptions (see the sketch after this list).
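For point 5, an illustrative example (not code from that project) of replacing an assert with a proper exception; `load_frame` is a hypothetical name:

```python
def load_frame(frames, index):
    # Before (assert): stripped out entirely under "python -O" and gives the
    # caller no specific error type to catch:
    #     assert 0 <= index < len(frames), "bad index"
    # After (exception): always enforced and can be handled upstream.
    if not 0 <= index < len(frames):
        raise IndexError(f"frame index {index} out of range (0..{len(frames) - 1})")
    return frames[index]
```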
I spent two days refactoring this project, and that one has the same problems.
I guess we can start by refactoring this project first (I will post it soon), then we can integrate AFI-ForwardDeduplicate.
The new method finds the next key frame (a frame that is not the same as the previous one) to run inference on, skipping at most 2 frames. Anime normally holds one image for 2~3 frames (drawn at 8/12 FPS) in a 24 FPS video.
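A minimal sketch of that search, assuming frames are NumPy arrays; `frames_are_identical`, the tolerance, and the other names are hypothetical, not the PR's actual code:

```python
import numpy as np

def frames_are_identical(a, b, tol=1.0):
    """Hypothetical duplicate test: mean absolute difference below a small tolerance."""
    return np.abs(a.astype(np.float32) - b.astype(np.float32)).mean() < tol

def find_next_key_frame(frames, cur_idx, max_skip=2):
    """Search forward for the next frame that actually differs from the current
    one. Looks at most `max_skip` frames ahead; if every candidate is a
    duplicate, return the last one checked so interpolation can still proceed."""
    cur = frames[cur_idx]
    candidate = min(cur_idx + 1, len(frames) - 1)
    for offset in range(1, max_skip + 1):
        idx = cur_idx + offset
        if idx >= len(frames):
            break
        candidate = idx
        if not frames_are_identical(cur, frames[idx]):
            return idx   # found new content: use it as the key frame
    return candidate     # nothing changed within max_skip frames
```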
Demo
See around 00:07.
The upper half is inference with the current method; the lower half is inference with the new method.
https://user-images.githubusercontent.com/2720049/222919627-119fd346-7473-413a-a7bf-7b8f0d59c656.mp4
Original video (no inference applied)
https://user-images.githubusercontent.com/2720049/222919959-32d17012-a1e1-405b-9313-57fb40e6db14.mp4
Models before 3.9
Models before 3.9 do not seem to have the `timestep` parameter; I have not tested models < 3.9.
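A hedged sketch of how the two cases could be guarded; the `model.inference(...)` signature and the `supports_timestep` flag are assumptions and may not match every model release:

```python
def interpolate_frame(model, img0, img1, t, supports_timestep):
    """Guard for model versions that lack a `timestep` argument.
    Treat this as a sketch only; the exact inference signature may differ."""
    if supports_timestep:
        # Newer models can produce an in-between frame at an arbitrary t in (0, 1).
        return model.inference(img0, img1, timestep=t)
    # Older models only produce the midpoint; anything else is untested here.
    if abs(t - 0.5) > 1e-6:
        raise NotImplementedError("this model version only supports t = 0.5")
    return model.inference(img0, img1)
```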