eladrich / pixel2style2pixel

Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
https://eladrich.github.io/pixel2style2pixel/
MIT License

About celebs_seg_to_face #307

Closed diamond0910 closed 1 year ago

diamond0910 commented 1 year ago

Thank you for your great work.

In the paper, you show that the mask can be encoded into the first seven W+ latents, with a randomly sampled latent used for the remaining layers. But looking at the code, it encodes the mask into the entire W+ latent directly. Why? This will cause the style-mixing operation to generate images with different masks.


Thank you.

yuval-alaluf commented 1 year ago

I am not sure I fully understand the question. When we encode the segmentation map into StyleGAN's latent space, we are in a sense mapping the segmentation map to an encoding of a real face image. The output of the task is a real image, not a segmentation map.
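For context on the layer split being discussed, here is a minimal sketch (not the repo's actual code) of how the style-mixing step could look, assuming W+ codes of shape [N, 18, 512] from a 1024x1024 StyleGAN2 generator; `mix_styles`, `n_fixed`, and the variable names are illustrative, not pSp's API:

```python
import torch

def mix_styles(encoded_latents: torch.Tensor,
               random_w: torch.Tensor,
               n_fixed: int = 7) -> torch.Tensor:
    """Keep the first `n_fixed` W+ layers from the encoder (coarse structure
    driven by the segmentation map) and overwrite the remaining layers with
    `random_w` (a single W code, e.g. a random z pushed through the mapping
    network), which then controls the fine appearance of the output face."""
    n, n_layers, dim = encoded_latents.shape  # e.g. [N, 18, 512]
    mixed = encoded_latents.clone()
    # Broadcast the single random W code over layers n_fixed .. n_layers-1
    mixed[:, n_fixed:, :] = random_w.view(n, 1, dim)
    return mixed
```

Feeding `mixed` to the synthesis network should keep the layout dictated by the mask (held in the first seven layers) while varying texture and color, which is the multi-modal behavior described in the paper.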