ronny3050 / AdvFaces

some confusion about the g_adv_loss in the code #5

Closed · 973891422 closed 3 years ago

973891422 commented 3 years ago

In the code:

```python
g_adv_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=self.D_fake, labels=tf.ones_like(self.D_fake)
    )
)
```

The call to `tf.nn.sigmoid_cross_entropy_with_logits` uses `labels = 1`. Doesn't this mean the generator objective is L_g_gan = E[log D(x_adv)] (to be maximized) instead of L_g_gan = E[log(1 - D(x_adv))]? In the paper, L_g_gan = E[log(1 - D(x_adv))], which confuses me.
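
For reference, here is a quick sanity check (my own sketch, not code from this repo) that with the labels fixed to 1, the op reduces to -log D(x_adv), since sigmoid_cross_entropy_with_logits(z, y) = -y*log(sigmoid(z)) - (1 - y)*log(1 - sigmoid(z)):

```python
import tensorflow as tf

# Hypothetical D_fake logits, just to exercise the identity.
logits = tf.constant([-3.0, 0.0, 3.0])

# Loss as computed in g_adv_loss: labels are all ones.
loss = tf.nn.sigmoid_cross_entropy_with_logits(
    logits=logits, labels=tf.ones_like(logits)
)

# With labels == 1 this is exactly -log(sigmoid(z)) = -log D(x_adv).
manual = -tf.math.log(tf.sigmoid(logits))

print(loss.numpy())    # ~ [3.0486, 0.6931, 0.0486]
print(manual.numpy())  # matches to numerical precision
```

So minimizing this loss trains G to maximize log D(x_adv), not to minimize log(1 - D(x_adv)).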

973891422 commented 3 years ago

Sorry, I've found the answer in the paper "Generative Adversarial Nets":

Early in learning, when G is poor, D can reject samples with high confidence because they are clearly different from the training data. In this case, log(1 - D(G(z))) saturates. Rather than training G to minimize log(1 - D(G(z))), we can train G to maximize log D(G(z)).
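
To make the saturation concrete, a small illustration (my own sketch, not from either paper's code): when D confidently rejects a fake (a very negative logit, so D(G(z)) is near 0), the gradient of log(1 - D) with respect to the logit is about -D, essentially zero, while the gradient of -log D is about -(1 - D), close to -1:

```python
import tensorflow as tf

# Hypothetical D_fake logit for a poorly trained G: D(G(z)) ~ 0.0025.
logit = tf.Variable(-6.0)

with tf.GradientTape(persistent=True) as tape:
    d = tf.sigmoid(logit)
    saturating = tf.math.log(1.0 - d)   # the paper's log(1 - D(x_adv)) term
    non_saturating = -tf.math.log(d)    # what the code's g_adv_loss minimizes

print(tape.gradient(saturating, logit).numpy())      # ~ -0.0025 (vanishing)
print(tape.gradient(non_saturating, logit).numpy())  # ~ -0.9975 (useful)
del tape
```

Both objectives push D(G(z)) toward 1, but only the non-saturating form gives G a usable gradient early in training, which is why g_adv_loss uses labels of 1.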