mseitzer / srgan

Pytorch implementation of "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network"
MIT License

Variations of Discriminative Loss and Generative Adversarial Loss #6

Open rustagiadi95 opened 5 years ago

rustagiadi95 commented 5 years ago

Hi, I have two questions:

1) What is the general trend of the discriminator loss and the generator's adversarial loss during super-resolution training?
2) Is there any specific correlation between these two losses? Both use the same loss function (binary cross-entropy), but since the arguments passed to them differ, there could be some correlating factor between them.

While training my model, both of these functions return the same value. Does this happen generally? If not, please suggest a solution.

    def gan_loss_disc(self, out_disc_fake, out_disc_label):
        prob_fake = out_disc_fake
        prob_label = out_disc_label

        fake_label = self._get_label_var(prob_fake, is_real=False)
        loss_fake = F.binary_cross_entropy(prob_fake, fake_label)

        real_label = self._get_label_var(prob_label, is_real=True)
        loss_real = F.binary_cross_entropy(prob_label, real_label)
        #print(loss_fake, loss_real)
        return loss_fake + loss_real

    def gan_loss_gen(self, out_disc_fake):
        prob_fake = out_disc_fake
        real_label = self._get_label_var(prob_fake, is_real=True)
        loss_fake = F.binary_cross_entropy(prob_fake, real_label)
        print(loss_fake)
        return loss_fake
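For intuition, here is a minimal numeric sketch of how the two BCE-based losses relate. The probabilities are hypothetical (not outputs of this repo's model), and `bce` is a hand-rolled scalar version of `F.binary_cross_entropy` for a single sample; the sketch evaluates both losses at the common "confused discriminator" point where D outputs 0.5 for everything:

```python
import math

def bce(p, y):
    """Binary cross-entropy for a single predicted probability p and target y in {0, 1}."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# Hypothetical discriminator outputs (assumed values for illustration):
p_fake = 0.5   # D's probability that a generated image is real
p_real = 0.5   # D's probability that a ground-truth image is real

# Discriminator loss: fakes should be scored 0, reals 1
loss_disc = bce(p_fake, 0) + bce(p_real, 1)

# Generator adversarial loss: the generator wants fakes scored as 1
loss_gen = bce(p_fake, 1)

print(loss_disc)  # 2 * ln(2) ≈ 1.386
print(loss_gen)   # ln(2) ≈ 0.693
```

Note that even at this equilibrium-like point the two losses differ by a factor of two (the discriminator loss sums two BCE terms, the generator loss has one), so if both functions literally return identical values it may be worth checking that they are actually being fed different tensors.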