Closed — unography closed this 2 years ago
Hi! For video processing I used this: https://huggingface.co/spaces/nateraw/animegan-v2-for-videos/blob/main/animegan_v2_for_videos.ipynb (Hugging Face's Colab notebook for AnimeGANv2 on videos). The model is a fastai U-Net implementation, trained on paired images with adversarial, perceptual, and pixel losses. I've added some minor tweaks, but all in all the whole pipeline is ancient by today's deep learning standards :D
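For anyone curious what "adversarial, perceptual, and pixel losses" might look like in practice, here's a minimal PyTorch sketch of a combined generator loss. The loss weights and the feature extractor are hypothetical stand-ins (a pretrained VGG slice is the typical choice for the perceptual term); the thread doesn't specify the actual values or networks used.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical loss weights -- the actual training values are not given above.
W_PIX, W_PERC, W_ADV = 1.0, 0.1, 0.01

def combined_generator_loss(fake, real, disc_fake_logits, feat_extractor):
    """Sum of pixel, perceptual, and adversarial terms for the generator."""
    # Pixel loss: direct L1 distance between generated and target images.
    pixel = F.l1_loss(fake, real)
    # Perceptual loss: L1 distance in a feature space.
    perceptual = F.l1_loss(feat_extractor(fake), feat_extractor(real))
    # Adversarial loss: generator wants the discriminator to output "real" (1).
    adversarial = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    return W_PIX * pixel + W_PERC * perceptual + W_ADV * adversarial

# Toy stand-in feature extractor (in practice: frozen pretrained VGG layers).
feat = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
fake = torch.rand(2, 3, 64, 64)
real = torch.rand(2, 3, 64, 64)
logits = torch.zeros(2, 1)  # discriminator output for the fake batch
loss = combined_generator_loss(fake, real, logits, feat)
```

This returns a single scalar that can be backpropagated through the generator; the discriminator would be trained separately with its own real-vs-fake objective.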
Thanks for the response! And the results are anything but ancient; I love the effect it produces!
Hi, what does the architecture look like? Is it similar to Pix2Pix? And for the video processing, are you doing anything extra to keep the frames temporally consistent?