eladrich / pixel2style2pixel

Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
https://eladrich.github.io/pixel2style2pixel/
MIT License

Looking for the right approach for training a "Sketch to Anime" pSp model #193

Closed · WyattAutomation closed this 3 years ago

WyattAutomation commented 3 years ago

Hey there! As the title suggests, I have a couple of questions:

Would it be possible to train the Sketch2Face implementation to take ambiguous sketches as input and generate high quality images of Anime characters as output?

I have seen mention of the need for "paired data", which I think I may have. I have roughly 190k pencil sketches of Anime characters, and for each sketch a corresponding RGB image with the same filename. Each sketch matches its RGB image except that it obviously has no color, has significantly less detail, and has no background (just the character that appears in the same-named RGB image).

I also have a StyleGAN2 model that can generate full body images of anime characters, which is the target domain to generate from the sketches.

Am I missing anything before I work through your documentation and attempt to train for these results? Thanks in advance!
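For anyone following the same path: pSp selects its paired source/target folders through dataset entries in `configs/data_configs.py` and `configs/paths_config.py`. A rough sketch of what registering a new task could look like (the dataset key, paths, and omitted `transforms` field here are hypothetical, modeled on the repo's existing sketch-to-face entry, so check the actual config files):

```python
# Hypothetical paired-dataset registration in the style of pSp's
# configs/data_configs.py; all names and paths below are placeholders.
dataset_paths = {
    'anime_train_sketch': '/data/anime/train_sketches',
    'anime_train_rgb':    '/data/anime/train_rgb',
    'anime_test_sketch':  '/data/anime/test_sketches',
    'anime_test_rgb':     '/data/anime/test_rgb',
}

DATASETS = {
    # Key would be passed to the training script as the dataset type.
    'anime_sketch_to_image': {
        'train_source_root': dataset_paths['anime_train_sketch'],  # sketch inputs
        'train_target_root': dataset_paths['anime_train_rgb'],     # RGB targets
        'test_source_root':  dataset_paths['anime_test_sketch'],
        'test_target_root':  dataset_paths['anime_test_rgb'],
    },
}
```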

WyattAutomation commented 3 years ago

I'm going to close this and just attempt to follow the docs; it seems better suited as a discussion item anyway. I'll let you know how it goes.

onefish51 commented 2 years ago

How did you get the "190k pencil sketches of Anime characters and a corresponding RGB image" pairs?