eladrich / pixel2style2pixel

Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
https://eladrich.github.io/pixel2style2pixel/
MIT License

Sketch edges are not getting honored 100% while generating images from sketches #211

Closed: abhisheklalwani closed this issue 2 years ago

abhisheklalwani commented 2 years ago

Hello, great work first of all! I ran into an issue while using the code in this repo for sketch-to-image transfer: the edges in even the simplest sketches are not translated directly into the output. I have attached a sample image for reference. I have already tried adding a custom edge-based loss, which is simply the MSE between the edge map of the generated image and the edge map of the target image, but performance did not improve significantly. Is there anything I can do to make sure the edges are translated correctly into the generated image?
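For concreteness, the edge loss described above might look like the sketch below. The Sobel-based edge extraction and the plain MSE are assumptions about the setup being described, not part of this repo:

```python
# A minimal sketch of an edge-based loss, assuming Sobel-filtered edge maps;
# the kernel choice and weighting are assumptions, not part of this repo.
import torch
import torch.nn.functional as F

# Sobel kernels, shaped (out_channels, in_channels, kH, kW) for conv2d.
SOBEL_X = torch.tensor([[-1., 0., 1.],
                        [-2., 0., 2.],
                        [-1., 0., 1.]]).view(1, 1, 3, 3)
SOBEL_Y = SOBEL_X.transpose(2, 3)

def edge_map(img):
    """Sobel gradient magnitude of an (N, C, H, W) batch."""
    gray = img.mean(dim=1, keepdim=True)                  # collapse channels
    gx = F.conv2d(gray, SOBEL_X.to(img.device), padding=1)
    gy = F.conv2d(gray, SOBEL_Y.to(img.device), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)           # eps keeps gradients stable

def edge_loss(generated, target):
    """MSE between the edge maps of the generated and target images."""
    return F.mse_loss(edge_map(generated), edge_map(target))
```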

mostar39 commented 2 years ago

Hello! I have a question. I'm working with non-face images: even after training a few times, the output keeps coming out as a face. How can I get the model to produce non-face outputs? Does this resolve automatically with more training?

yuval-alaluf commented 2 years ago

> Hello, great work first of all! I ran into an issue while using the code in this repo for sketch-to-image transfer: the edges in even the simplest sketches are not translated directly into the output. I have attached a sample image for reference. I have already tried adding a custom edge-based loss, which is simply the MSE between the edge map of the generated image and the edge map of the target image, but performance did not improve significantly. Is there anything I can do to make sure the edges are translated correctly into the generated image?

Training on other domains can be challenging at times. As a sanity check, have you tried overfitting pSp to a single or a small number of images? Have you tried inverting using some optimization technique? The idea of the custom edge-based loss is a good direction. What batch size are you running on?
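One way to run that overfitting sanity check: replicate one or two sketch/photo pairs into a tiny paired dataset and train on it until reconstructions are near-perfect. The `source`/`target` folder layout below is a placeholder for however your paired data is actually configured:

```python
# Hypothetical helper for the overfit sanity check: replicate a few
# sketch/photo pairs many times and train pSp on the resulting tiny set.
# The source/ and target/ layout is an assumption about your data config.
import shutil
from pathlib import Path

def build_overfit_set(pairs, out_root, copies=8):
    """pairs: list of (sketch_path, photo_path) tuples."""
    src_dir = Path(out_root) / "source"
    tgt_dir = Path(out_root) / "target"
    src_dir.mkdir(parents=True, exist_ok=True)
    tgt_dir.mkdir(parents=True, exist_ok=True)
    for i, (sketch, photo) in enumerate(pairs):
        for k in range(copies):
            stem = f"{i:03d}_{k:02d}"
            shutil.copy(sketch, src_dir / f"{stem}{Path(sketch).suffix}")
            shutil.copy(photo, tgt_dir / f"{stem}{Path(photo).suffix}")
```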

yuval-alaluf commented 2 years ago

> Hello! I have a question. I'm working with non-face images: even after training a few times, the output keeps coming out as a face. How can I get the model to produce non-face outputs? Does this resolve automatically with more training?

To work with pSp you need a pre-trained StyleGAN generator in your domain. I assume you used the faces generator provided in this repo and that is why you saw faces being outputted by the model even though you have non-face data.

abhisheklalwani commented 2 years ago

> Hello, great work first of all! I ran into an issue while using the code in this repo for sketch-to-image transfer: the edges in even the simplest sketches are not translated directly into the output. I have attached a sample image for reference. I have already tried adding a custom edge-based loss, which is simply the MSE between the edge map of the generated image and the edge map of the target image, but performance did not improve significantly. Is there anything I can do to make sure the edges are translated correctly into the generated image?

> Training on other domains can be challenging at times. As a sanity check, have you tried overfitting pSp to a single or a small number of images? Have you tried inverting using some optimization technique? The idea of the custom edge-based loss is a good direction. What batch size are you running on?

Overfitting pSp seems like a good sanity check. Let me try that. By optimization technique, I assume you mean searching the latent space of the generator for the closest possible image, right? The batch size is 4.

yuval-alaluf commented 2 years ago

Correct. By optimization I mean performing a per-image latent vector optimization, as done in StyleGAN2.
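For reference, a bare-bones version of that per-image optimization, in the spirit of the StyleGAN2 projector. The generator interface, step count, and the plain MSE objective are placeholders; the official projector also uses a perceptual loss and noise regularization:

```python
# Sketch of per-image latent optimization, StyleGAN2-projector style.
# `generator` stands in for your StyleGAN forward pass; the plain MSE
# objective is a simplification of the official projector's loss.
import torch
import torch.nn.functional as F

def project(generator, target, latent_avg, steps=1000, lr=0.01):
    """Optimize a latent so generator(latent) reconstructs `target` (1, C, H, W)."""
    w = latent_avg.clone().detach().requires_grad_(True)  # start from the mean latent
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        loss = F.mse_loss(generator(w), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()
```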

mostar39 commented 2 years ago

> Hello! I have a question. I'm working with non-face images: even after training a few times, the output keeps coming out as a face. How can I get the model to produce non-face outputs? Does this resolve automatically with more training?

> To work with pSp you need a pre-trained StyleGAN generator in your domain. I assume you used the faces generator provided in this repo and that is why you saw faces being outputted by the model even though you have non-face data.


So how do we use data other than faces?

Do I need to train StyleGAN on my data first, and then train pixel2style2pixel using the resulting .pt or .pth checkpoint?

yuval-alaluf commented 2 years ago

Correct, you first need to train a StyleGAN generator on your domain.
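In other words, the workflow is two-stage: first train (or fine-tune) a StyleGAN generator on your domain, then point pSp's training script at that checkpoint. A rough sketch of stage 2, assuming the flag names from this repo's training options (verify against options/train_options.py, and note that a custom dataset type must be registered in configs/data_configs.py); the paths and dataset type below are placeholders:

```python
# Hypothetical stage-2 invocation once you have a domain-specific StyleGAN
# checkpoint. Flag names follow this repo's training options as I understand
# them; double-check them before relying on this.
import subprocess

subprocess.run([
    "python", "scripts/train.py",
    "--dataset_type=my_domain_encode",        # placeholder; register in configs/data_configs.py
    "--exp_dir=experiments/my_domain",
    "--stylegan_weights=pretrained_models/my_domain_stylegan2.pt",  # stage-1 checkpoint
    "--batch_size=4",
], check=True)
```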