zhangzjn / APB2Face

Official pytorch implementation for "APB2Face: Audio-guided face reenactment with auxiliary pose and blink signals", ICASSP'20
MIT License

loss function #7

Closed by QUTGXX 4 years ago

QUTGXX commented 4 years ago

Hi, I am fascinated by your paper. I have some questions about the loss functions in audio2landmark and l2face. In the first stage, you compute the loss between fake_A and land_A2, and in the second stage you use a similar approach, concatenating realA and fakeB. I don't understand why you concatenate these different images, which seem unrelated. Could you please help me with this?

zhangzjn commented 4 years ago

Real/fake image pairs and real/real image pairs form the adversarial samples used to train the discriminator; you may refer to GAN and Pix2Pix for more details.
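To make the pairing concrete, here is a minimal sketch (not the repo's actual code) of Pix2Pix-style conditional discriminator training: the image and its conditioning signal are concatenated along the channel dimension, and the real/real pair is labeled true while the real/fake pair is labeled false. Names such as `PairDiscriminator`, `real_img`, `fake_img`, and `cond` are illustrative assumptions, not identifiers from APB2Face.

```python
import torch
import torch.nn as nn

class PairDiscriminator(nn.Module):
    """Toy discriminator that scores an (image, condition) pair.

    The pair is concatenated on the channel axis, as in Pix2Pix.
    """
    def __init__(self, channels=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),  # patch-level real/fake logits
        )

    def forward(self, img, cond):
        return self.net(torch.cat([img, cond], dim=1))

netD = PairDiscriminator()
criterionGAN = nn.BCEWithLogitsLoss()

real_img = torch.randn(1, 3, 32, 32)  # ground-truth image
fake_img = torch.randn(1, 3, 32, 32)  # stand-in for a generator output
cond     = torch.randn(1, 3, 32, 32)  # conditioning signal (e.g. a landmark map)

# Real/real pair -> target True; real/fake pair -> target False.
pred_real = netD(real_img, cond)
pred_fake = netD(fake_img, cond)
loss_D = 0.5 * (criterionGAN(pred_real, torch.ones_like(pred_real))
                + criterionGAN(pred_fake, torch.zeros_like(pred_fake)))
```

The key point is that the discriminator never judges an image in isolation: it always sees the image together with its condition, so it learns to penalize mismatched pairs as well as unrealistic images.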

wuxiaolianggit commented 4 years ago

Hi, I have a question about your loss design. In `self.loss_G_A = self.criterionGAN(self.netD(self.fake_A, self.land_A2), True)`, `self.land_A2` is randomly selected. Why not use the corresponding `self.land_A1` instead, i.e. `self.loss_G_A = self.criterionGAN(self.netD(self.fake_A, self.land_A1), True)`? Is using `self.land_A2` meant to improve the model's robustness? Looking forward to your reply. @zhangzjn

QUTGXX commented 4 years ago

> Real/fake image pair and real/real image pair form adversarial samples to train the discriminator, you may refer to GAN and Pix2Pix for more details.

Thanks for your reply. I get it now; it uses a similar approach to cGAN.
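The cGAN analogy also applies to the generator side: the generator is rewarded when the discriminator labels its (fake, condition) pair as real, which mirrors the thread's `criterionGAN(netD(fake_A, land_A2), True)`. A minimal sketch under assumed names (the toy `netD` and tensors below are illustrative, not the repo's code):

```python
import torch
import torch.nn as nn

# Toy pair discriminator: flattens the concatenated (image, condition) pair
# and maps it to a single real/fake logit.
netD = nn.Sequential(nn.Flatten(), nn.Linear(2 * 3 * 8 * 8, 1))
criterionGAN = nn.BCEWithLogitsLoss()

fake_img = torch.randn(1, 3, 8, 8)  # stand-in for a generator output
landmark = torch.randn(1, 3, 8, 8)  # stand-in for a conditioning landmark map
pair = torch.cat([fake_img, landmark], dim=1)

# Generator loss: target True, i.e. try to fool the discriminator
# into calling the fake pair real.
pred = netD(pair)
loss_G = criterionGAN(pred, torch.ones_like(pred))
```

In training, `loss_G` would be backpropagated through the generator only, while the discriminator loss from the real/fake pairs updates `netD`.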