Closed abhisheklalwani closed 3 years ago
Hello! I have a question. I'm currently using non-face images rather than face images. Even though I've only trained for a few iterations, the output keeps coming out as a face. How can the output be a face when the input isn't one? Does this go away if I keep training? I'm curious!
Hello, great work first of all! I ran into an issue while using the code in this repo for sketch-to-image transfer. The edges in even the simplest sketches are not being translated faithfully into the output. I have attached a sample image for reference. I have already tried adding a custom edge-based loss, which is simply the MSE between the edge map of the generated image and the edge map of the target image, but performance did not improve significantly. Is there anything I can do to make sure that the edges are transferred correctly to the generated image?
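For reference, a minimal sketch of such an edge-based MSE loss (assuming Sobel filters for the edge maps; the function names and filter choice here are illustrative, not necessarily the exact loss described above):

```python
import torch
import torch.nn.functional as F

def sobel_edges(img):
    # img: (N, C, H, W) in [0, 1]; convert to grayscale and apply Sobel filters
    gray = img.mean(dim=1, keepdim=True)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def edge_loss(generated, target):
    # MSE between the edge maps of the generated and target images
    return F.mse_loss(sobel_edges(generated), sobel_edges(target))
```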
Training on other domains can be challenging at times. As a sanity check, have you tried overfitting pSp to a single image or a small number of images? Have you tried inverting with an optimization technique? The custom edge-based loss is a good direction. What batch size are you running with?
To work with pSp you need a pre-trained StyleGAN generator in your domain. I assume you used the faces generator provided in this repo and that is why you saw faces being outputted by the model even though you have non-face data.
Overfitting pSp seems like a good sanity check. Let me try that. By optimization technique, I assume you mean searching the generator's latent space for the closest possible image, right? The batch size is 4.
Correct. By optimization I mean performing a per-image latent vector optimization, like they do in StyleGAN2.
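For completeness, a minimal sketch of what such a per-image latent optimization could look like (the generator interface, the starting latent, and the plain MSE objective are assumptions; the actual StyleGAN2 projector also uses a perceptual loss and noise regularization):

```python
import torch
import torch.nn.functional as F

def project(generator, target, w_avg, num_steps=1000, lr=0.1):
    # target: (1, 3, H, W) image; w_avg: average latent used as the starting point.
    # `generator` is assumed to map a latent vector w directly to an image.
    w = w_avg.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(num_steps):
        opt.zero_grad()
        loss = F.mse_loss(generator(w), target)
        loss.backward()
        opt.step()
    return w.detach()
```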
So how do we use data other than faces?
Do I need to train StyleGAN on my data first, and then train pixel2style2pixel using the resulting .pt or .pth checkpoint?
Correct, you first need to train a StyleGAN generator on your domain.
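Roughly: once you have a StyleGAN2 checkpoint for your domain (in the rosinality stylegan2-pytorch format used by this repo), you point pSp's training at it, e.g. via the `--stylegan_weights` option (see `options/train_options.py` for the exact arguments). A minimal sketch of how such a checkpoint plugs into the decoder; the path, resolution, and checkpoint key below are placeholders/assumptions about your setup:

```python
import torch
from models.stylegan2.model import Generator  # StyleGAN2 implementation bundled in this repo

# Load a StyleGAN2 generator trained on your own domain.
# The path, output size, and the 'g_ema' key are assumptions about your checkpoint.
ckpt = torch.load('pretrained_models/stylegan2-mydomain.pt', map_location='cpu')
decoder = Generator(size=256, style_dim=512, n_mlp=8)
decoder.load_state_dict(ckpt['g_ema'], strict=False)
```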