eladrich / pixel2style2pixel

Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
https://eladrich.github.io/pixel2style2pixel/
MIT License

I am curious about how to configure Dataset and how to use sketch. #186

Closed justice-hwan closed 3 years ago

justice-hwan commented 3 years ago

1) What is the difference between input and target in scripts/train.py? As training progresses, the output image gets closer to the target, but I do not understand the role of the input.

2) When using sketch-to-image, can I put ./pretrained/sketch~.pt in the --stylegan_weights option?

justice-hwan commented 3 years ago

In addition, can it be used on body parts other than the face?

yuval-alaluf commented 3 years ago
  1. What is the difference between input and target in scripts/train.py?

In the reconstruction task, the input and target are the same image since you are trying to reconstruct the input image. In other image-to-image tasks, the input represents an image in your source domain (e.g., sketch image) while the target represents an image in your target domain (face image).
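To make the pairing concrete, here is a minimal sketch (not the repo's actual dataset class) of how a paired dataset yields an (input, target) pair per sample: identical paths for reconstruction, different domains for a translation task such as sketch-to-face. All names and file paths here are illustrative.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class PairedDataset:
    """Toy paired dataset: input comes from the source domain,
    target from the target domain the output is compared against."""
    source_paths: List[str]  # e.g. sketch images (fed to the encoder)
    target_paths: List[str]  # e.g. real face images (used in the loss)

    def __len__(self) -> int:
        return len(self.source_paths)

    def __getitem__(self, i: int) -> Tuple[str, str]:
        return self.source_paths[i], self.target_paths[i]


# Reconstruction task: input and target are the same image
recon = PairedDataset(["face_0.png"], ["face_0.png"])

# Sketch-to-face task: input is a sketch, target is the matching face
s2f = PairedDataset(["sketch_0.png"], ["face_0.png"])
```

The point is only that the two lists are aligned index-by-index; the real training code additionally loads and transforms the images before returning tensors.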

  2. When using sketch to image, can I put ./pretrained/sketch~.pt in the --stylegan_weights option?

I am not sure what you're referring to with ./pretrained/sketch~.pt. The --stylegan_weights parameter is the path to the pre-trained StyleGAN generator that will be used to generate images during training. If you are translating from sketches to real faces, then the stylegan_weights path should point to a StyleGAN generator for realistic faces (e.g., the StyleGAN trained on FFHQ).
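For the sketch-to-face case, a training invocation could look roughly like the sketch below. The flag names follow the repository's documented options, but the experiment directory and weight paths are placeholders you would replace with your own:

```shell
# Hypothetical sketch-to-face training run (paths are placeholders).
# --stylegan_weights points to the generator for the TARGET domain
# (realistic faces), not a sketch model.
python scripts/train.py \
  --dataset_type=celebs_sketch_to_face \
  --exp_dir=experiments/sketch_to_face \
  --stylegan_weights=pretrained_models/stylegan2-ffhq-config-f.pt \
  --label_nc=1 \
  --input_nc=1
```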

In addition, can it be used on body parts other than the face?

pSp assumes that you have a pre-trained StyleGAN generator for your target domain. If you have a generator able to produce the body parts you are interested in, then pSp can certainly be used for them.