TachibanaYoshino / AnimeGAN

A TensorFlow implementation of AnimeGAN for fast photo animation! This is the open-source code for the paper 「AnimeGAN: a novel lightweight GAN for photo animation」, which uses the GAN framework to transform real-world photos into anime images.

Apply animation to videos #14

Closed · finnkso closed this issue 2 years ago

finnkso commented 4 years ago

Hi, I really like your work, and I've added a Python script for animating videos.

TachibanaYoshino commented 4 years ago

Well, your effort is also worth encouraging. In fact, in recent days I have also been considering applying this work to video animation. Because I want to ensure that the original video and the synthesized video have the same frame rate, I intended to use my earlier MATLAB code to split and recompose the video. Of course, that is not very convenient for a Python-based environment. I hope you can share the results of video animation based on your script: right now I don't have a host that can run the algorithm (I used to borrow someone else's machine for research), so I have no way to verify whether your script is reliable. Thank you sincerely!
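
That said, OpenCV can read the source frame rate directly in Python, so the MATLAB step may be avoidable. A minimal, untested probe (I cannot run anything at the moment, and the file name is only an example):

```python
import cv2  # pip install opencv-python

cap = cv2.VideoCapture("input.mp4")              # example file name
fps = cap.get(cv2.CAP_PROP_FPS)                  # source frame rate
frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))  # total frame count
print(f"fps={fps}, frames={frames}")
cap.release()
```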

finnkso commented 4 years ago

Hi, I've uploaded the video results to YouTube (https://youtu.be/ImUyeLJoGUc). The frame rate is fine, but the default bit rate of the output video is much higher than the original's, and I found no way to control this property in OpenCV. One may need a tool like ffmpeg to decrease the bit rate and scale the file size down. Please let me know if you have further questions. Thanks for sharing this project.
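
For reference, re-encoding with ffmpeg at a fixed video bit rate should shrink the file. Something like this works (the file names are placeholders, and the right bit rate depends on your footage):

```python
import subprocess

# Re-encode the OpenCV output at a lower video bit rate (here 4 Mbit/s).
# "-c:a copy" keeps the audio stream untouched if one is present.
subprocess.run([
    "ffmpeg", "-i", "output_raw.mp4",  # large file written by OpenCV
    "-b:v", "4M",                      # target video bit rate
    "-c:a", "copy",
    "output_small.mp4",
], check=True)
```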

finnkso commented 4 years ago

Yes, the video animation script is for testing only. It simply processes the video frame by frame and stitches the output fake images back together. On that point, I think it's fine to use main.py for training, since the process is image-based.
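
In outline, the script does something like the sketch below; `stylize` stands in for the repo's generator forward pass, and the names here are illustrative rather than the exact ones in my script:

```python
import cv2

def animate_video(in_path, out_path, stylize):
    """Apply a per-frame stylization function to a whole video."""
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)  # reuse the source frame rate
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(stylize(frame))  # stylize: BGR frame -> anime-styled BGR frame
    cap.release()
    writer.release()
```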

Sincerely, Finn

TachibanaYoshino commented 4 years ago

At the moment I don't have a VPN to access YouTube; I will verify your script when I can borrow a host, and I will also upload the generated video to the repository. However, I do not recommend using raw video frames for training, because adjacent frames tend to have very similar content. The pictures in the various style datasets were taken from films by skipping frames, and different training data has a direct impact on the final result. Thank you for your contribution. I look forward to verifying your script for video animation as soon as possible.
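
If you do build training data from a film, a rough sketch like this (untested, and the right `step` depends on the footage) keeps adjacent samples from being near-duplicates:

```python
import os
import cv2

def sample_frames(video_path, out_dir, step=30):
    """Save every `step`-th frame as a training image, so that
    consecutive samples do not share almost identical content."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            cv2.imwrite(os.path.join(out_dir, f"{saved:06d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
```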

finnkso commented 4 years ago

You are quite right. It would be a really big step forward to train on videos rather than on each frame separately. The truth is, I have never trained on videos before; I assumed it would be computationally expensive.

zrongcheng commented 4 years ago

I tested the code with my video, and it worked fine.

zrongcheng commented 4 years ago

But my video doesn't look good. Should I apply edge smoothing?

finnkso commented 4 years ago

> But my video doesn't look good. Should I apply edge smoothing?

The video script only merges the fake images generated by AnimeGAN. If your video contains particular kinds of scenes, you may need to train with images covering those scenes (and a variety of others) to make the result look good.

And yes, according to the docs, I think edge smoothing is good for training.
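
My rough understanding of that step, as a sketch of the idea only (not the repo's exact edge_smooth implementation): blur the anime images just around their line art, so the discriminator also sees "bad" examples with smeared edges.

```python
import cv2
import numpy as np

def edge_smooth(img, k=5):
    """Blur an anime image only near its edges (k must be odd)."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                  # find the line art
    mask = cv2.dilate(edges, np.ones((k, k), np.uint8)) != 0
    blurred = cv2.GaussianBlur(img, (k, k), 0)
    out = img.copy()
    out[mask] = blurred[mask]                          # smear edge regions only
    return out
```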

tankfly2014 commented 4 years ago

TF version: is your code for TF v2? The author's code uses TF v1.8.
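
(If the environment has TF v2 installed, I wonder whether the v1 compatibility shim would be enough to bridge the gap, e.g.:)

```python
import tensorflow.compat.v1 as tf  # TF2's built-in v1 compatibility layer

tf.disable_v2_behavior()  # restore graph/session semantics

# v1-style graph code can often run unchanged under TF2 this way,
# though it is not guaranteed for every op in the repo.
x = tf.placeholder(tf.float32, shape=[None])
y = x * 2.0
with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [1.0, 2.0]}))
```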