Hi, I have two issues:

1) What is the general trend of the discriminator loss and the generator's adversarial loss in super resolution?
2) Is there any correlation specific to these two losses? Both of them use the same loss function (binary cross entropy), but since the arguments passed to each are different, there could still be some correlating factor between them.

While training my model, both of these functions return the same value. Does this happen generally? If not, please suggest a solution.
def gan_loss_disc(self, out_disc_fake, out_disc_label):
    prob_fake = out_disc_fake
    prob_label = out_disc_label
    # discriminator targets: fake samples -> 0, real (label) samples -> 1
    fake_label = self._get_label_var(prob_fake, is_real=False)
    real_label = self._get_label_var(prob_label, is_real=True)
    loss = F.binary_cross_entropy(prob_fake, fake_label) \
         + F.binary_cross_entropy(prob_label, real_label)
    return loss

def gan_loss_gen(self, out_disc_fake):
    prob_fake = out_disc_fake
    # generator target: it wants the discriminator to score fakes as real
    real_label = self._get_label_var(prob_fake, is_real=True)
    loss_fake = F.binary_cross_entropy(prob_fake, real_label)
    print(loss_fake)
    return loss_fake
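For reference, here is a minimal dependency-free sketch of the standard (non-saturating) GAN loss pair the code above appears to implement. The helper names (`bce`, `disc_loss`, `gen_loss`) are hypothetical, not from any library. Note that when an untrained discriminator outputs ~0.5 for everything, BCE against either target equals log 2 ≈ 0.693, which is one common reason the generator and discriminator losses can print the same value:

```python
import math

def bce(p, y):
    # binary cross entropy for one predicted probability p against target y in {0, 1}
    eps = 1e-12  # numerical guard against log(0)
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def disc_loss(p_fake, p_real):
    # discriminator: push fakes toward 0 and reals toward 1
    return bce(p_fake, 0.0) + bce(p_real, 1.0)

def gen_loss(p_fake):
    # generator (non-saturating form): push fakes toward 1
    return bce(p_fake, 1.0)
```

If both losses sit at the same constant (e.g. ~0.693, or twice that for the summed discriminator loss), it usually means the discriminator output has collapsed to 0.5; worth checking the discriminator's learning rate and that its weights are actually being updated.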