eladrich / pixel2style2pixel

Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
https://eladrich.github.io/pixel2style2pixel/
MIT License

About the split of the train and test datasets #233

Closed MontaEllis closed 2 years ago

MontaEllis commented 2 years ago

Great work! I have one simple question: you train on FFHQ and test on the test set of CelebA-HQ. What about CelebA-HQ's training set? Did you use it?

yuval-alaluf commented 2 years ago

For the inversion task, we did not use the CelebA-HQ train set.

MontaEllis commented 2 years ago

Thanks a lot! How did you split the train and test datasets?

yuval-alaluf commented 2 years ago

Using the official data preprocessing script: https://github.com/switchablenorms/CelebAMask-HQ/tree/master/face_parsing
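Roughly, that script maps each CelebA-HQ image back to its original CelebA filename and then applies CelebA's official partition labels (0 = train, 1 = val, 2 = test). A minimal sketch of reproducing the split is below; the file names and column layout are assumed from the CelebAMask-HQ repo, so this is an approximation rather than the exact script:

```python
import pandas as pd

# Assumed inputs (paths are placeholders):
#   CelebA-HQ-to-CelebA-mapping.txt - maps each HQ index to its original CelebA file,
#                                     with columns: idx, orig_idx, orig_file
#   list_eval_partition.txt         - CelebA's official partition file:
#                                     <filename> <split>, where 0=train, 1=val, 2=test
mapping = pd.read_csv("CelebA-HQ-to-CelebA-mapping.txt", sep=r"\s+")
partition = pd.read_csv(
    "list_eval_partition.txt",
    sep=r"\s+",
    header=None,
    names=["orig_file", "split"],
)

# Join each CelebA-HQ image to its original CelebA split label
merged = mapping.merge(partition, on="orig_file")

train_ids = merged[merged["split"] == 0]["idx"].tolist()
val_ids = merged[merged["split"] == 1]["idx"].tolist()
test_ids = merged[merged["split"] == 2]["idx"].tolist()

# Expected sizes (per the CelebAMask-HQ preprocessing): 24183 / 2993 / 2824
print(len(train_ids), len(val_ids), len(test_ids))
```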

MontaEllis commented 2 years ago

Thanks! So you use just 2,824 images for testing?

yuval-alaluf commented 2 years ago

Yes