eladrich / pixel2style2pixel

Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
https://eladrich.github.io/pixel2style2pixel/
MIT License

How do I use this project on my own dataset? #78

Closed luoling1993 closed 3 years ago

luoling1993 commented 3 years ago

I have an image-to-image task and I have image pairs: one image has a watermark, the other one doesn't. I want to use this project to train a model to remove the watermark. How do I use this project on my own dataset?

yuval-alaluf commented 3 years ago

Please see the README section on preparing your own data: https://github.com/eladrich/pixel2style2pixel#preparing-your-data. After defining your data paths, transforms, and experiment type, you can train your image-to-image task. We also provide example training commands which you can build on.
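For example, registering a new dataset type means adding its paths and transforms to the config files. A minimal sketch (the `watermark_removal` name, the paths, and the chosen transforms class are placeholders you would adapt):

```python
# configs/paths_config.py -- point the new dataset type at your image folders
dataset_paths = {
    # ... existing entries ...
    'watermark_train_source': '/path/to/train/watermarked',  # inputs with watermark
    'watermark_train_target': '/path/to/train/clean',        # paired clean targets
    'watermark_test_source': '/path/to/test/watermarked',
    'watermark_test_target': '/path/to/test/clean',
}

# configs/data_configs.py -- register the experiment type used by --dataset_type
from configs import transforms_config
from configs.paths_config import dataset_paths

DATASETS = {
    # ... existing entries ...
    'watermark_removal': {
        'transforms': transforms_config.FrontalizationTransforms,  # or your own transforms class
        'train_source_root': dataset_paths['watermark_train_source'],
        'train_target_root': dataset_paths['watermark_train_target'],
        'test_source_root': dataset_paths['watermark_test_source'],
        'test_target_root': dataset_paths['watermark_test_target'],
    },
}
```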

luoling1993 commented 3 years ago

After reading the README, I understand how to prepare my data, but I still have some confusion about the pretrained models.

  1. Because my task is image-to-image, I set id_lambda=0.
  2. For the pSp encoder and decoder, I found that ir_se50 is used by default.
    • Should I use this pretrained model for my task?
    • If not, how do I pretrain my own model?
yuval-alaluf commented 3 years ago

Regarding id_lambda: if you're not working on facial images, you can try setting id_lambda=0. Regarding the pre-trained model, please see Issue #76.
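As a starting point for a non-face paired task, a training command could look roughly like this (the flag values are only a starting point and the dataset type is the one you registered yourself; check options/train_options.py for the full list of options):

```
python scripts/train.py \
--dataset_type=watermark_removal \
--exp_dir=/path/to/experiment \
--workers=8 \
--batch_size=8 \
--test_batch_size=8 \
--test_workers=8 \
--val_interval=2500 \
--save_interval=5000 \
--encoder_type=GradualStyleEncoder \
--start_from_latent_avg \
--lpips_lambda=0.8 \
--l2_lambda=1 \
--id_lambda=0
```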

luoling1993 commented 3 years ago

After reading issue #76 and the code, do I understand you correctly?

  1. Encoder: the default is ir_se50, trained on facial datasets. Because its parameters are updated during training, I can use this pretrained model for my task, or replace it with a ResNet-IR backbone trained on ImageNet.
  2. Decoder: the default is StyleGAN2, trained on the FFHQ dataset, and it is NOT updated by default. I can set train_decoder=True to fine-tune the decoder (see the sketch below for how I read the parameter selection).
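To check my reading of the training code, this is roughly how I understand the optimizer setup in the coach (just my paraphrase, not the exact repo code; attribute names may differ):

```python
import torch

def configure_optimizers(self):
    # The encoder parameters are always optimized ...
    params = list(self.net.encoder.parameters())
    # ... while the StyleGAN2 decoder is only optimized when --train_decoder is set
    if self.opts.train_decoder:
        params += list(self.net.decoder.parameters())
    return torch.optim.Adam(params, lr=self.opts.learning_rate)
```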
yuval-alaluf commented 3 years ago

Yes, you understood correctly.

luoling1993 commented 3 years ago

Thank you very much for your answers!