Minsoo2022 opened this issue 3 years ago
Also, both of the above two images (pt_file['img'], the generated image from pt_file['latent']) are different from the original image.
@Minsoo2022 Hi, thanks for your interest. The latent code provided here was obtained by performing GAN inversion on the testing images, so there can still be notable differences between the reconstruction and the original image. In other words, the GAN inversion process itself has room for improvement.
Thanks for your answer. However, as far as I understand, pt_file['img'] is the result of the GAN inversion, and it is slightly different from the original image above. In that case, pt_file['img'] and the image generated from pt_file['latent'] should be the same. Am I wrong?
What puzzles me is that the real image, the GAN inversion image you provide, and the image generated from the GAN inversion latent vector you provide are all different. As far as I understand, the GAN inversion image should be identical to the image generated from the GAN inversion latent vector, which is why I am confused.
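To make the comparison concrete rather than eyeballing it, one simple check is the largest per-pixel difference between the stored inversion image and the regenerated one. A minimal sketch, with plain nested lists standing in for the pt_file['img'] tensor and a hypothetical `regenerated` array (real code would compare torch tensors directly):

```python
def max_abs_diff(a, b):
    # recursively walk two equally shaped nested lists and
    # return the largest absolute per-element difference
    if isinstance(a, list):
        return max(max_abs_diff(x, y) for x, y in zip(a, b))
    return abs(a - b)

# stand-ins for pt_file['img'] and the image regenerated from pt_file['latent']
stored = [[0.10, 0.20], [0.30, 0.40]]
regenerated = [[0.10, 0.25], [0.28, 0.40]]
print(max_abs_diff(stored, regenerated))  # ~0.05
```

If the two images truly came from the same latent and generator settings, this difference should be at floating-point noise level, not visually noticeable.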
Also, may I ask which arguments you used for the GAN inversion? I tried the inversion myself with rosinality/stylegan2-pytorch/projector.py, but the quality is lower than what you provided. Thanks for your kind reply.
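For reference, a typical invocation of that projector looks roughly like the following. The checkpoint and image paths here are placeholders, and the flag names may differ across versions of the repo, so treat this as a sketch rather than the exact command used for the provided latents:

```shell
# hypothetical paths; --w_plus optimizes a per-layer latent (W+), which
# usually reconstructs a specific image better than a single shared w,
# and more optimization steps generally improve fidelity
python projector.py \
    --ckpt stylegan2-checkpoint.pt \
    --size 256 \
    --step 1000 \
    --w_plus \
    input_face.png
```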
Best regards.
@Minsoo2022 Yes, you are right: pt_file['img'] and the image generated from pt_file['latent'] should be the same, so it seems there is some inconsistency. Could you please try using truncation=0.7 and see if that works? Besides, since we perform instance-specific training, it is okay if the latent code is not perfect: the subsequent training process (especially the latent offset predicted by the latent encoder) can learn to fill this gap. I am too busy to carefully check the details right now; I should be able to reply in 3-4 days. Sorry about this.
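For context, the truncation trick just interpolates each latent toward the generator's average latent, so regenerating with a different psi than the one used when the latent was saved will change the output image. A minimal sketch, with plain Python lists standing in for the w vectors and `w_mean` assumed to come from the generator's mean latent:

```python
def truncate(w, w_mean, psi=0.7):
    # psi=1.0 leaves w unchanged; smaller psi pulls each component
    # toward the mean latent, trading diversity for more typical outputs
    return [wm + psi * (wi - wm) for wi, wm in zip(w, w_mean)]

w = [2.0, -1.0]
w_mean = [0.0, 0.0]
print(truncate(w, w_mean))  # -> [1.4, -0.7]
```

This is one plausible source of the mismatch: if the latent was optimized with synthesis running at psi=0.7 but regeneration uses psi=1.0 (or vice versa), the two images will differ.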
Thank you for replying while you are busy.
Unfortunately, it still differs when setting truncation=0.7.
I look forward to your reply, and I'll let you know if there's any progress.
Thank you.
Sorry for bothering you, but I'm waiting for your answer.
Sorry about the delay. I will reply to you tomorrow.
I really appreciate your effort.
Hi, I have uploaded my GAN inversion code here: https://drive.google.com/file/d/1pCfnDiHZNnRoEVZ4RhcLfZyPrJgUWEYk/view?usp=sharing You may check it out and perform the inversion to get the latent code :-)
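For anyone landing here later: optimization-based inversion like this generally starts from some latent and runs gradient descent on a reconstruction loss against the target image. A self-contained toy sketch with a hypothetical linear generator standing in for StyleGAN2's synthesis network (real code would use torch autograd plus MSE/perceptual losses, but the structure is the same):

```python
def G(z):
    # toy linear "generator": maps a 2-d latent to a 2-d "image"
    return [2.0 * z[0] + 1.0, -z[1]]

def invert(target, z0, lr=0.1, steps=200):
    # gradient descent on 0.5 * ||G(z) - target||^2; the gradients below
    # are hand-derived via the chain rule for this specific linear G
    z = list(z0)
    for _ in range(steps):
        out = G(z)
        z[0] -= lr * (out[0] - target[0]) * 2.0   # dG[0]/dz[0] = 2
        z[1] -= lr * (out[1] - target[1]) * -1.0  # dG[1]/dz[1] = -1
    return z

z = invert(target=[5.0, 3.0], z0=[0.0, 0.0])
print(z)  # converges toward [2.0, -3.0]
```

Because the loss landscape of a real generator is far less benign than this toy case, the recovered latent is only approximate, which is consistent with the reconstruction gap discussed above.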
Thank you for your answer. Your answer helped me a lot.
Best regards.
Hi, I appreciate your nice work and thank you for sharing the code. I'm struggling to reproduce the StyleGAN2 results with the synface dataset and pre-trained weights. As shown in the figure, the image generated from the synface latent vector (pt_file['latent']) is different from the provided image (pt_file['img']) in the .pt file. Can you give me some idea of why this happens?
Best regards.