Recently, realistic image generation using deep neural networks has become a hot topic in machine learning and computer vision. Such images can be generated at the pixel level by learning from a large collection of images. Learning to generate colorful cartoon images from black-and-white sketches is not only an interesting research problem, but also a useful application in digital entertainment. In this paper, we investigate the sketch-to-image synthesis problem using conditional generative adversarial networks (cGAN). We propose a model called auto-painter which can automatically generate compatible colors given a sketch. The Wasserstein distance is used in training the cGAN to overcome mode collapse and enable the model to converge much better. The new model is not only capable of painting hand-drawn sketches with compatible colors, but also allows users to indicate preferred colors. Experimental results on different sketch datasets show that the auto-painter performs better than other existing image-to-image methods.
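The abstract states that the cGAN is trained with a Wasserstein distance to avoid mode collapse. The snippet below is a minimal sketch of what such a training step could look like, written under our own assumptions rather than taken from the authors' implementation: the networks `G` (sketch-to-color generator) and `D` (conditional critic), the tensor names `sketch` and `color`, the weight-clipping range, and the L1 reconstruction term are all hypothetical placeholders.

```python
# Hypothetical conditional-WGAN training step (not the authors' code).
# G maps a sketch to a colorized image; D scores (sketch, image) pairs.
import torch
import torch.nn.functional as F

def critic_step(D, G, sketch, color, opt_D, clip=0.01):
    """One critic update: maximize D(real) - D(fake), then clip weights."""
    opt_D.zero_grad()
    fake = G(sketch).detach()                          # generated colorization
    # Wasserstein critic loss: negative of the estimated distance
    loss_D = -(D(sketch, color).mean() - D(sketch, fake).mean())
    loss_D.backward()
    opt_D.step()
    # Weight clipping keeps the critic roughly 1-Lipschitz (original WGAN recipe)
    for p in D.parameters():
        p.data.clamp_(-clip, clip)
    return loss_D.item()

def generator_step(D, G, sketch, color, opt_G, l1_weight=100.0):
    """One generator update: maximize D(fake) plus an L1 term (assumed, pix2pix-style)."""
    opt_G.zero_grad()
    fake = G(sketch)
    loss_G = -D(sketch, fake).mean() + l1_weight * F.l1_loss(fake, color)
    loss_G.backward()
    opt_G.step()
    return loss_G.item()
```

In a typical Wasserstein setup the critic is updated several times for every generator update, alternating `critic_step` and `generator_step` over mini-batches of paired sketches and color images.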
I have tried two ways of training: one is training directly on the datasets, the other is starting from the pre-trained model offered by the author and then training on the datasets. Both of them produce all-green outputs at test time, as shown below (one is the input picture, the other is the output). Can anybody tell me why? Thanks a lot.