vijaysamula opened 7 months ago
You need to train StyleGAN2 on your own data. Once you have completed training StyleGAN2 on your own data, freeze its weights, and then train pSp for that StyleGAN2. pSp is only used to help condition the synthesis of images from StyleGAN2, so the two need to be trained as a pair.
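The freeze-then-train workflow described above can be sketched in PyTorch. This is a minimal sketch only: the tiny `generator` and `encoder` modules below are hypothetical stand-ins, not this repo's actual StyleGAN2 or pSp classes.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the real networks (the repo's classes differ).
generator = nn.Sequential(nn.Linear(512, 512), nn.Linear(512, 3 * 16 * 16))  # "StyleGAN2"
encoder = nn.Sequential(nn.Linear(3 * 16 * 16, 512))                         # "pSp"

# 1) Assume the generator has already been trained on your own data;
#    freeze its weights before training the encoder.
for p in generator.parameters():
    p.requires_grad = False
generator.eval()

# 2) Only the encoder's parameters go to the optimizer.
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)

# 3) One training step: invert a generated image back to its latent code,
#    so the encoder is trained only on StyleGAN2-generated (paired) data.
w = torch.randn(4, 512)                  # sampled latent codes
img = generator(w)                       # generated images paired with w
loss = ((encoder(img) - w) ** 2).mean()  # latent-space reconstruction loss
opt.zero_grad()
loss.backward()
opt.step()
```

The key point is step 1: the generator never receives gradient updates while the encoder is trained, so the pSp stays paired with the frozen StyleGAN2.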
Hi, thanks for the information. I have trained pSp, but the results are very poor. Can you help me here? I trained pSp from scratch.
Are you using the original pSp or the one from this repo? If you are using the one from this repo, check that you are training the encoder only on StyleGAN2-generated images, with the loss in latent space instead of pixel space (this is what I used in the paper; I have now added the correct flags to the README to make this explicit). If you insist on training with losses in pixel space (what the original pSp did), I would recommend a higher weight on the LPIPS loss compared to the L2 loss.
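The two loss formulations contrasted above can be sketched as follows. This is an illustrative sketch, not the repo's code: the function names and default weights are assumptions, and real LPIPS needs a pretrained network (e.g. the `lpips` package), so a placeholder is used here just so the sketch runs.

```python
import torch

def latent_space_loss(w_pred, w_true):
    # Train the encoder directly against the known latent codes of
    # StyleGAN2-generated images (the setup used in the paper).
    return ((w_pred - w_true) ** 2).mean()

def pixel_space_loss(img_pred, img_true, lpips_fn, l2_weight=0.1, lpips_weight=1.0):
    # Original-pSp-style pixel-space losses; if you go this route, weight
    # the perceptual (LPIPS) term higher than the raw L2 term.
    l2 = ((img_pred - img_true) ** 2).mean()
    return l2_weight * l2 + lpips_weight * lpips_fn(img_pred, img_true)

# Placeholder standing in for a real pretrained LPIPS model.
fake_lpips = lambda a, b: (a - b).abs().mean()

w_pred, w_true = torch.zeros(2, 512), torch.ones(2, 512)
print(latent_space_loss(w_pred, w_true).item())  # 1.0
```

In latent space the supervision signal is exact (the true `w` is known because the image was generated from it), which is why it only works with StyleGAN2-generated training images.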
Hi, sorry for the late reply. I am using your repo, but I didn't fully understand the encoder training in the README. I trained the encoder this way: (i) I trained StyleGAN2 on the source and target domains combined. (ii) Then I generated images with generator.py using the trained weights. (iii) Then I trained the encoder on the generated images. But the training results are still not good.
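For reference, the three steps above might look like this on the command line. This is only a sketch: apart from `generator.py`, which is named above, the script names and flags are assumptions, so check the repo's README for the exact invocations.

```shell
# (i) Train StyleGAN2 on source + target images combined
#     (hypothetical script name and flag).
python train_stylegan2.py --data ./combined_dataset

# (ii) Generate paired (latent, image) training data with the trained
#      weights (flags assumed).
python generator.py --ckpt ./stylegan2.pt --out ./generated

# (iii) Train the pSp encoder on the generated images, in latent space
#       (hypothetical script name and flag).
python train_encoder.py --data ./generated
```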
Did you try the additional new flags? (the flags that turn off the pixel-wise losses)
No, I didn't try the new flags, but I am using the latent space and will train again with the flags.
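If the repo follows the original pSp convention of per-loss weight flags, turning off the pixel-wise losses might look like the sketch below. The exact flag and script names here are assumptions (the original pSp uses `--l2_lambda` and `--lpips_lambda`); the authoritative list is the flags added to this repo's README.

```shell
# Zero out the pixel-space losses so training uses only the latent-space
# loss. Flag names follow the original pSp convention and are assumptions.
python scripts/train.py \
    --l2_lambda 0 \
    --lpips_lambda 0
```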
Hi Yue Linn Chong, your repo is great. I have a couple of questions about training on my data, with a different sugar beet dataset as the target domain: the source dataset is UGV17 and the target is another UGV sugar beet dataset. (i) Should I train pSp with both source and target images, without training StyleGAN2? (ii) I have trained pSp without training StyleGAN2, but the generated images are very poor.