wty-ustc / HairCLIP

[CVPR 2022] HairCLIP: Design Your Hair by Text and Reference Image
GNU Lesser General Public License v2.1

About the training details. #6

Closed bb12346 closed 2 years ago

bb12346 commented 2 years ago

Thank you for your great project!

In the paper, you write: "We train and evaluate our hair mapper on the CelebA-HQ dataset. Since we use e4e [43] as our inversion encoder, we follow its division of the training set and test set." However, I found that e4e is trained on the FFHQ dataset and evaluated on the CelebA-HQ test set, so I am confused. My question is: how did you split the CelebA-HQ dataset into training and test sets?

wty-ustc commented 2 years ago

e4e is an inversion method that predicts latent codes better suited for editing. You are correct that e4e itself is trained on FFHQ and evaluated on CelebA-HQ. Following StyleCLIP, we use e4e to invert the CelebA-HQ images into latent codes and split them into training and test parts in the same way as StyleCLIP. You can download the latent codes here: train set, test set.
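For anyone unsure what the downloaded files contain, below is a minimal sketch (not from the authors) of loading the inverted latent codes with PyTorch. The filenames `train_faces.pt` and `test_faces.pt` are placeholders for whatever the linked files are actually named; the expected W+ shape of (N, 18, 512) follows the StyleGAN2 convention used by e4e and StyleCLIP.

```python
import torch

# Placeholder paths -- replace with the actual files downloaded from the
# "train set" / "test set" links above.
train_latents = torch.load("train_faces.pt", map_location="cpu")
test_latents = torch.load("test_faces.pt", map_location="cpu")

# Each entry should be a W+ latent code of shape (18, 512): one 512-d style
# vector per StyleGAN2 layer at 1024x1024 resolution, as produced by e4e.
print(train_latents.shape)  # expected: torch.Size([N_train, 18, 512])
print(test_latents.shape)   # expected: torch.Size([N_test, 18, 512])
```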

bb12346 commented 2 years ago

Thank you for your reply.