shiquanwang opened 6 years ago
Drawing sketches for a video is somewhat hard and requires a lot of effort. We had a similar idea for our pix2pixHD work, which only works on images: https://youtu.be/OwfwPOozhWg. Needless to say, the quality drops by a large margin, and I would imagine the video counterpart would be even more difficult. That being said, you're still very welcome to build the demo, now that the code is open sourced. Please let me know if you encounter any problems.
Thanks for the answer; the pix2pixHD demo shows what it looks like now. Nice work, and thanks for sharing.
A flip book might be a good starting point to try, since flip books are well tested for making good animations.
I am curious: is it based on the CelebA dataset? :)
pix2pixHD is trained on CelebA-HQ, while vid2vid is trained on FaceForensics.
@tcwang0509 What kind of input would be required to use the faces pre-trained models on sketches?
I'm planning exactly this: converting hand-drawn animations of mine. But the sample code acts on txt files which contain what I believe is just a list of facial landmarks. Could it work with purely image-based b/w sketches?
I couldn't find any additional info in the paper, so any pointer to the part of the code where I could handle this is more than welcome!
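For what it's worth, here is a minimal sketch of how such a landmark txt file could be rasterized into a b/w edge-map image. This assumes the standard dlib 68-point ordering and is only an illustration; the actual vid2vid face preprocessing may group or draw the points differently.

```python
# Hypothetical sketch: turn 68 facial landmarks (dlib ordering, one "x y"
# pair per line in the txt files) into a black-and-white edge map similar
# to what the face model appears to consume.  NOT the actual vid2vid code.
import numpy as np
from PIL import Image, ImageDraw

# (start, end, closed) landmark index ranges, standard dlib 68-point parts
FACE_PARTS = [
    (0, 16, False),   # jaw line
    (17, 21, False),  # right eyebrow
    (22, 26, False),  # left eyebrow
    (27, 30, False),  # nose bridge
    (31, 35, True),   # lower nose
    (36, 41, True),   # right eye
    (42, 47, True),   # left eye
    (48, 59, True),   # outer lip
    (60, 67, True),   # inner lip
]

def landmarks_to_edge_map(points, size=(512, 512)):
    """Draw the landmark polylines on a black canvas; returns a PIL image."""
    img = Image.new("L", size, 0)
    draw = ImageDraw.Draw(img)
    for start, end, closed in FACE_PARTS:
        part = [tuple(points[i]) for i in range(start, end + 1)]
        if closed:
            part.append(part[0])  # close the contour (eyes, lips, nose base)
        draw.line(part, fill=255, width=2)
    return img
```

For a hand-drawn sketch, the 68 points would still have to come from somewhere, e.g. a landmark detector run on a reference photo or manual annotation of the drawing.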
From the demo videos, it seems they use algorithm-generated edge maps or labelled ground-truth segmentations, all closely tied to the source video content; in other words, the lines are barely distorted.
Could you show a demo with hand-drawn sketches, which may contain large distortions but still convey a clear idea of what they are meant to express? That would be much more meaningful in showing that we can generate vivid videos from hand-drawn sketches.
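To illustrate what "algorithm-generated edge maps" means here: the demos presumably use a detector such as Canny, but even a minimal gradient-magnitude detector shows why such maps track the source content so closely; every line sits exactly on an intensity boundary of the real frame, unlike a free-hand sketch. A toy version (my own sketch, not the project's preprocessing):

```python
# Minimal gradient-magnitude edge detector, for illustration only.
# Real pipelines would typically use Canny or a learned edge detector.
import numpy as np

def edge_map(gray, threshold=0.2):
    """Threshold the central-difference gradient magnitude of a
    float image in [0, 1]; returns a uint8 image with edges at 255."""
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]   # horizontal differences
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]   # vertical differences
    mag = np.hypot(gx, gy)
    return (mag > threshold).astype(np.uint8) * 255
```

Edges produced this way are pinned to the pixels of the source frame, which is exactly the property a distorted hand-drawn sketch lacks.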