Closed: WyattAutomation closed this issue 3 years ago
I'm going to close this and just attempt to follow the docs--seems better suited as a discussion item anyway. I'll let you know how it goes.
How do you get "190k pencil sketches of Anime characters and a corresponding RGB image"?
Hey there--as the topic suggests, I have a couple of questions:
Would it be possible to train the Sketch2Face implementation to take ambiguous sketches as input and generate high-quality images of Anime characters as output?
I have seen mention of the need for "paired data", which I think I have. I have roughly 190k pencil sketches of Anime characters and, for each one, a corresponding RGB image with the same filename. Each sketch matches its RGB image except that it has no color, significantly less detail, and no background (just the character that appears in the RGB image of the same name).
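As a sanity check on the pairing, here is a minimal sketch (assuming the two sets live in hypothetical `sketches/` and `rgb/` folders and share filename stems) that counts usable pairs and flags orphans:

```python
from pathlib import Path

# Hypothetical folder names -- point these at wherever the 190k files actually live.
SKETCH_DIR = Path("sketches")
RGB_DIR = Path("rgb")

sketch_stems = {p.stem for p in SKETCH_DIR.iterdir() if p.is_file()}
rgb_stems = {p.stem for p in RGB_DIR.iterdir() if p.is_file()}

paired = sketch_stems & rgb_stems
print(f"{len(paired)} usable sketch/RGB pairs")
print(f"{len(sketch_stems - rgb_stems)} sketches with no RGB counterpart")
print(f"{len(rgb_stems - sketch_stems)} RGB images with no sketch")
```

If the counts come out to roughly 190k pairs with nothing orphaned, the data should satisfy the "paired" requirement in the usual sense.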
I also have a StyleGAN2 model that can generate full-body images of anime characters, which covers the target domain I want to generate from the sketches.
Am I missing anything before I follow your documentation and attempt to train for these results? Thanks in advance!
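(For what it's worth, if the implementation expects pix2pix-style side-by-side A|B training images -- an assumption on my part, I haven't confirmed that in the docs -- I could pre-combine each pair with a rough script like this:)

```python
from pathlib import Path
from PIL import Image

# Hypothetical paths; assumes each sketch and its RGB image share a filename.
SKETCH_DIR = Path("sketches")
RGB_DIR = Path("rgb")
OUT_DIR = Path("combined")
OUT_DIR.mkdir(exist_ok=True)

for sketch_path in sorted(SKETCH_DIR.glob("*.png")):
    rgb_path = RGB_DIR / sketch_path.name
    if not rgb_path.exists():
        continue  # skip orphaned sketches
    sketch = Image.open(sketch_path).convert("RGB")
    rgb = Image.open(rgb_path).convert("RGB").resize(sketch.size)
    # Side-by-side layout: sketch (A) on the left, RGB target (B) on the right.
    combined = Image.new("RGB", (sketch.width * 2, sketch.height))
    combined.paste(sketch, (0, 0))
    combined.paste(rgb, (sketch.width, 0))
    combined.save(OUT_DIR / sketch_path.name)
```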