When performing frame interpolation on anime footage, conventional deduplication approaches, such as locating and removing duplicate frames or time remapping, have serious drawbacks: they lose background texture and fail to correctly handle multiple characters drawn at different cadences within a single scene. As a result, they cannot be applied effectively in production. However, with the advancement of AI-based video frame interpolation (VFI), it has proven feasible to repeatedly update the original frames to obtain high-quality interpolated anime output. This project proposes a novel anime deduplication method built on GMFSS, a capable VFI algorithm. It requires no additional processing of the frame sequence and no extra deep neural networks, and it produces smooth, high-quality output by adequately removing duplicate frames from the anime.
Online Colab demo for AFI-ForwardDeduplicate: [Colab]
Normalize the source video to 24000/1001 fps with the following ffmpeg command (if the INPUT video's frame rate is already around 23.976, skip this step):
ffmpeg -i INPUT -crf 16 -r 24000/1001 -preset slow -c:v libx265 -x265-params profile=main10 -c:a copy OUTPUT
Open the video and determine its maximum consistent deduplication count (3 -> on threes, 2 -> on twos, 0 -> AUTO).
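If you are unsure of this count, the following minimal sketch (not part of this repository; the difference threshold is an assumption that may need tuning per source) estimates the longest run of consecutive near-duplicate frames with OpenCV:

```python
# Minimal sketch for estimating the maximum run of consecutive
# duplicate frames. NOT part of this repository; the threshold is
# an assumption and may need tuning for noisy or grainy sources.
import cv2
import numpy as np

def max_duplicate_run(path, threshold=1.0):
    cap = cv2.VideoCapture(path)
    prev = None
    run, best = 1, 1
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            # mean absolute difference between consecutive frames
            if np.abs(gray - prev).mean() < threshold:
                run += 1
                best = max(best, run)
            else:
                run = 1
        prev = gray
    cap.release()
    return best  # e.g. 3 for footage animated on threes

print(max_duplicate_run("E:/MyVideo/01.mkv"))
```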
Run the following command to perform the interpolation (N_FORWARD = max consistent deduplication count - 1; in most circumstances, -nf 0 can automatically determine an appropriate n_forward value):
python interpolate_video_forward.py -i [VIDEO] -o [OUTPUTDIR] -nf [N_FORWARD] -t [TIMES] -m [MODEL_TYPE] -s -st 12 -scale [SCALE] -stf -c -half
# or use the following command to export video at any frame rate
python interpolate_video_forward_anyfps.py -i [VIDEO] -o [OUTPUTDIR] -nf [N_FORWARD] -fps [OUTPUT_FPS] -m [MODEL_TYPE] -s -st 12 -scale [SCALE] -stf -c
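For example, to double the frame rate of footage animated on twos (maximum consistent deduplication count 2, hence -nf 1), the first command could be invoked as follows; the paths and values are illustrative only:

python interpolate_video_forward.py -i E:/MyVideo/01.mkv -o E:/frame_seq_output -nf 1 -t 2 -m gmfss -s -st 12 -scale 1.0 -stf -c -half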
Run the following command (or a custom one) to merge the output frames with the audio of the source video:
ffmpeg -r [24000/1001 * TIMES] -i [OUTPUTDIR]/%09d.png -i [VIDEO] -map 0:v -map 1:a -crf 16 -preset slow -c:v libx265 -x265-params profile=main10 -c:a copy [FINAL_OUTPUT]
# or use the following command to export video at any frame rate
ffmpeg -r [OUTPUT_FPS] -i [OUTPUTDIR]/%09d.png -i [VIDEO] -map 0:v -map 1:a -crf 16 -preset slow -c:v libx265 -x265-params profile=main10 -c:a copy [FINAL_OUTPUT]
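For instance, if you interpolated with -t 2, the frame-sequence rate is 24000/1001 × 2 = 48000/1001 ≈ 47.952 fps, so the merge command (with illustrative paths) becomes:

ffmpeg -r 48000/1001 -i E:/frame_seq_output/%09d.png -i E:/MyVideo/01.mkv -map 0:v -map 1:a -crf 16 -preset slow -c:v libx265 -x265-params profile=main10 -c:a copy E:/final_output/01.mkv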
Example (smooth a 23.976 fps video animated on threes and interpolate it to 60 fps):
ffmpeg -i E:/MyVideo/01_src.mkv -crf 16 -r 24000/1001 -preset slow -c:v libx265 -x265-params profile=main10 -c:a copy E:/MyVideo/01.mkv
python interpolate_video_forward_anyfps.py -i E:/MyVideo/01.mkv -o E:/frame_seq_output -nf 2 -fps 60 -m gmfss -s -st 12 -scale 1.0 -stf -c
ffmpeg -r 60 -i E:/frame_seq_output/%09d.png -i E:/MyVideo/01.mkv -map 0:v -map 1:a -crf 16 -preset slow -c:v libx265 -x265-params profile=main10 -c:a copy E:/final_output/01.mkv
The "n_forward" parameter acts like the number of times the algorithm performs Spatiotemporal TTA (Spatiotemporal Test Time Augmentation) operations. Performing too many TTA operations may further improve smoothness and interpolation performance but lead to blurriness.
Note that this method changes the animation rhythm to a certain extent.
Thanks to Q8sh2ing for implementing the Online Colab Demo.