eladrich / pixel2style2pixel

Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
https://eladrich.github.io/pixel2style2pixel/
MIT License

Train and test data split file. #242

Closed · zwb0 closed this issue 2 years ago

zwb0 commented 2 years ago

Hi, thanks for your great work!

1. For the super-resolution task, how do you split CelebA-HQ into train and test sets? I checked CelebA-HQ-to-CelebA-mapping.txt, which gives 27,176 images for training plus validation and only 2,824 images for the test set. Is this the right split to use for SR? The supplementary material mentions 6,000 images used for testing, and I don't see how to arrive at that number. Could you share the split file?
2. Do you have any quantitative results for the SR task?

Thanks in advance!

yuval-alaluf commented 2 years ago

We use the official code for splitting the CelebA-HQ dataset (https://github.com/switchablenorms/CelebAMask-HQ/tree/master/face_parsing). The data is split into train, test, and validation sets: 24,000 images for training, 2,824 for testing, and 3,176 for validation, which explains the 27,176 you got (24,000 + 3,176). The super-resolution model was trained on the 24,000 training images. I don't have quantitative results for the super-resolution task, but these shouldn't be too hard to compute given the trained model.
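
In case it helps, here is a minimal sketch of how such a split can be reproduced from the mapping file. It assumes the standard CelebA release layout: `list_eval_partition.txt` assigns each original image to partition 0/1/2 (train/val/test), and `CelebA-HQ-to-CelebA-mapping.txt` has a header row followed by `idx orig_idx orig_file` columns. Check the official preprocessing script linked above for the exact logic.

```python
# Sketch: inherit the original CelebA eval partition for CelebA-HQ via the
# mapping file. File names and column layout are assumptions based on the
# standard CelebA release.

# CelebA partition file: one "000001.jpg 0" entry per line
# (0 = train, 1 = val, 2 = test).
partition = {}
with open("list_eval_partition.txt") as f:
    for line in f:
        name, part = line.split()
        partition[name] = int(part)

# Mapping file: header row, then "idx orig_idx orig_file" per line.
splits = {0: [], 1: [], 2: []}
with open("CelebA-HQ-to-CelebA-mapping.txt") as f:
    next(f)  # skip the header row
    for line in f:
        hq_idx, _, orig_file = line.split()
        splits[partition[orig_file]].append(int(hq_idx))

# splits[0] / splits[1] / splits[2] now hold the CelebA-HQ image indices
# for the train / val / test sets described above.
print({k: len(v) for k, v in splits.items()})
```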
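
For the quantitative results, a simple sketch of computing PSNR/SSIM over the test set with scikit-image might look like the following. The directory names and the matching-file-name convention are hypothetical, and the model outputs are assumed to be aligned with the ground truth.

```python
# Sketch: PSNR / SSIM evaluation of SR outputs against ground truth.
# Paths and file naming are hypothetical placeholders.
from pathlib import Path

import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

gt_dir, out_dir = Path("celeba_hq_test"), Path("psp_sr_outputs")

psnrs, ssims = [], []
for gt_path in sorted(gt_dir.glob("*.jpg")):
    out_path = out_dir / gt_path.name  # assumes matching file names
    gt = np.asarray(Image.open(gt_path).convert("RGB"))
    # Resize the output to the ground-truth resolution if needed.
    out = np.asarray(Image.open(out_path).convert("RGB").resize(gt.shape[1::-1]))
    psnrs.append(peak_signal_noise_ratio(gt, out, data_range=255))
    # channel_axis requires scikit-image >= 0.19 (older: multichannel=True).
    ssims.append(structural_similarity(gt, out, channel_axis=-1, data_range=255))

print(f"PSNR: {np.mean(psnrs):.2f} dB, SSIM: {np.mean(ssims):.4f}")
```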

zwb0 commented 2 years ago

Thanks a lot!