The All-In-One GMFSS: Dedicated for Anime Video Frame Interpolation
2023-06-25: Thanks to the related work of AnimeRun, we have updated one of the union fine-tuned models.
Our code is developed with PyTorch 1.13.1, CUDA 11.8, and Python 3.9. Lower versions of PyTorch should also work.
To install, run the following commands:
git clone https://github.com/98mxr/GMFSS_Fortuna.git
cd GMFSS_Fortuna
pip install -r requirements.txt
If you are using CUDA 12.x, change cupy-cuda11x to cupy-cuda12x in requirements.txt. Do not install cupy-cuda11x and cupy-cuda12x at the same time!
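For example, if the CUDA 11 wheel is already installed, one way to switch (assuming the cupy-cuda12x wheel matches your installed CUDA toolkit) is:
pip uninstall cupy-cuda11x
pip install cupy-cuda12x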
If you want to validate the results, you need the GMFSS model or the union model.
Alternatively, try the new union model fine-tuned on anime optical flow data.
If you want to train your own model, you can use our pre-trained model to skip the baseline training process.
Place the downloaded pre-trained model in the train_log folder in the root directory. Then run one of the following commands.
python3 inference_video.py --img=demo/ --scale=1.0 --multi=2
python3 inference_video.py --img=demo/ --scale=1.0 --multi=2 --union
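Here --multi appears to set the frame-rate multiplication factor and --union selects the union model. Assuming factors other than 2 are also accepted, 4x interpolation of the same demo folder would look like:
python3 inference_video.py --img=demo/ --scale=1.0 --multi=4 --union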
Place the pre-trained model in the train_log folder and the dataset in the root directory. Modifying model/dataset.py is necessary to fit other datasets (see the sketch after the commands below). Run one of the following commands.
python3 train_pg.py
python3 train_upg.py
python3 train_nb.py
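As a reference for adapting model/dataset.py, below is a minimal sketch of what a frame-triplet loader could look like: a PyTorch Dataset that returns (first frame, ground-truth middle frame, last frame) tensors. The class name, directory layout, and file names (im1.png/im2.png/im3.png) are assumptions made for illustration only; match them to what this repository's training scripts actually expect.

# Hypothetical sketch of a frame-triplet loader; names and layout are assumptions.
import os
import cv2
import numpy as np
import torch
from torch.utils.data import Dataset

class TripletDataset(Dataset):
    def __init__(self, root):
        # Each sample folder is assumed to hold im1.png, im2.png, im3.png,
        # where im2.png is the ground-truth middle frame.
        self.samples = sorted(
            os.path.join(root, d) for d in os.listdir(root)
            if os.path.isdir(os.path.join(root, d))
        )

    def __len__(self):
        return len(self.samples)

    def _load(self, path):
        # Read an image (BGR) as a float CHW tensor in [0, 1].
        img = cv2.imread(path)
        img = torch.from_numpy(img.astype(np.float32) / 255.0)
        return img.permute(2, 0, 1)

    def __getitem__(self, idx):
        folder = self.samples[idx]
        img0 = self._load(os.path.join(folder, "im1.png"))
        gt = self._load(os.path.join(folder, "im2.png"))
        img1 = self._load(os.path.join(folder, "im3.png"))
        return img0, gt, img1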
This project is supported by the SVFI Development Team.