jiangyzy / GOAE

[ICCV 2023] Official implementation of "Make Encoder Great Again in 3D GAN Inversion through Geometry and Occlusion-Aware Encoding" in International Conference on Computer Vision (ICCV) 2023.

About train_stagex.py and discriminator models. #24

Open LPHFAQ opened 3 weeks ago

LPHFAQ commented 3 weeks ago

I'm sorry that I raised some naive questions in the last issue due to my carelessness when reading your paper and my superficial understanding of GANs. I have now successfully run your training code (train_stagex.py) and read several papers about GANs, but some parts of the training code still confuse me.

About opts.w_discriminator_lambda: 'discriminator_loss' and 'discriminator_r1_loss' seem to be the latent discriminator loss (formula (2) in the paper), and loss_dict['encoder_discriminator_loss'] seems to be the encoder loss (formula (3) in the paper). When opts.w_discriminator_lambda > 0, these two losses are enabled, and according to your paper's appendix they are dropped in stage 2. However, train.sh does not set opts.w_discriminator_lambda to 0, and its default value is 0.1 (> 0), which enables the two losses in the second training stage — conflicting with the paper's description. If my understanding is correct, should opts.w_discriminator_lambda be set to 0 in train_stage2.py?
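To make the question concrete, here is a minimal sketch (not the repository's actual code — the function and its arguments are hypothetical) of the gating behavior being described: a single opts.w_discriminator_lambda flag enables both the latent-discriminator losses and the encoder's adversarial term, so leaving its default of 0.1 in stage 2 keeps all of them active:

```python
# Hypothetical sketch of loss gating by opts.w_discriminator_lambda.
# Names mirror the loss_dict keys mentioned above; the combination
# logic is an assumption for illustration, not the repo's verified code.

def compute_losses(opts, d_loss, r1_loss, enc_adv_loss, rec_loss):
    """Combine training losses; all discriminator-related terms are
    active only when opts.w_discriminator_lambda > 0."""
    loss_dict = {}
    total = rec_loss  # reconstruction term is always present
    if opts.w_discriminator_lambda > 0:
        # latent discriminator loss (cf. formula (2) in the paper)
        loss_dict['discriminator_loss'] = d_loss
        loss_dict['discriminator_r1_loss'] = r1_loss
        # encoder adversarial loss (cf. formula (3) in the paper)
        loss_dict['encoder_discriminator_loss'] = enc_adv_loss
        total = total + opts.w_discriminator_lambda * enc_adv_loss
    loss_dict['total'] = total
    return loss_dict
```

With the default of 0.1 the branch fires in every stage; only an explicit `--w_discriminator_lambda 0` would disable it, which is the apparent conflict with the appendix.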

LPHFAQ commented 3 weeks ago

There is also another adversarial loss that is not mentioned in the GOAE paper, and its parameter opts.adv is not configured in train.sh either (opts.adv's default value is 0). Is this adv loss used in train_stage2.py? If so, what value of opts.adv did you use?
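For clarity, the gating implied by the question can be sketched as follows (a hypothetical helper, not the repository's code): with opts.adv defaulting to 0, the extra adversarial term contributes nothing unless the flag is explicitly set.

```python
# Hypothetical sketch: an image-space adversarial term weighted by
# opts.adv (default 0). The helper name and signature are assumptions
# made for illustration only.

def add_adv_loss(total_loss, adv_loss, adv_lambda=0.0):
    """Fold an adversarial loss into the total; a weight of 0
    (the default for opts.adv) disables it entirely."""
    if adv_lambda > 0:
        return total_loss + adv_lambda * adv_loss
    return total_loss
```

So the practical question is simply whether a nonzero opts.adv was used when training stage 2, and if so, which value.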

LPHFAQ commented 3 weeks ago

My third question: could you please share your trained discriminator model file (.pt)? I'm sorry to bother you again, but I have paid a lot of attention to your excellent work and spent much time getting the released training code to run successfully, and I now plan to do some experiments based on GOAE. The questions above still confuse me, and it would help me a lot if the discriminator model file is still on your device and could be shared in this repository. I'd appreciate it a lot if you could reply to my issue!