eladrich / pixel2style2pixel

Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
https://eladrich.github.io/pixel2style2pixel/
MIT License

How to finetune the encoder pretrained on FFHQ with another face dataset? #275

Closed 3maoyap closed 2 years ago

yuval-alaluf commented 2 years ago

Once you have set up your new dataset, you can simply run the training script and add the --checkpoint_path flag pointing to the pretrained encoder. For details on how to prepare your data for training, see here: https://github.com/eladrich/pixel2style2pixel#preparing-your-data
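The suggestion above can be sketched as a command. This is a hedged example, not a verbatim recipe: the dataset type, experiment directory, and checkpoint filename are placeholders, and the new dataset must first be registered in the repo's configs as described in the "Preparing your Data" section.

```shell
# Fine-tune pSp starting from the pretrained FFHQ encoder.
# my_faces_encode, experiments/my_faces_finetune, and the checkpoint
# path are placeholders -- substitute your own registered dataset
# type and local paths.
python scripts/train.py \
  --dataset_type=my_faces_encode \
  --exp_dir=experiments/my_faces_finetune \
  --checkpoint_path=pretrained_models/psp_ffhq_encode.pt \
  --workers=8 \
  --batch_size=8 \
  --val_interval=2500 \
  --save_interval=5000
```

Because --checkpoint_path loads the full saved weights, training resumes from the FFHQ-trained encoder rather than from scratch, which is what makes this a fine-tuning run.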

li-car-fei commented 1 year ago

> After you have set up your new dataset, you can simply run the training script and add the flag --checkpoint_path to the pretrained encoder. For details on how to prepare your data for training see here: https://github.com/eladrich/pixel2style2pixel#preparing-your-data

If my dataset is very small, maybe only about ten images, can I finetune the pSp model so that it can perform inversion?