eladrich / pixel2style2pixel

Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
https://eladrich.github.io/pixel2style2pixel/
MIT License

Is the "Sketch to Image" dataset available? #214

Closed nonoesp closed 2 years ago

nonoesp commented 2 years ago

Hi there.

Thanks so much for open-sourcing this work!

[1] I'm curious whether the sketch-to-image dataset (sketches paired with CelebA-HQ images for sketch-based synthesis) is available for download. I'd love to look at a sample dataset to learn how to prepare my own.

[2] If it's not available, is there anywhere you explain how to generate this kind of dataset?

[3] What is the minimum number of image pairs recommended to train pSp?

Cheers! Nono

yuval-alaluf commented 2 years ago

You can prepare the sketch dataset from the CelebA-HQ dataset by using the generate_sketch_data.py script: https://github.com/eladrich/pixel2style2pixel/blob/master/scripts/generate_sketch_data.py

Regarding the minimum number of image pairs needed to train pSp: it is hard to say exactly. We trained on the ~24,000 images from CelebA-HQ, but I assume you could use far fewer and still get good results.