Open ghost opened 6 years ago
I wonder why, in main.py, you used `#` to comment out the lines quoted below when training the discriminator.
I also wonder why the generator's parameters aren't frozen when training the discriminator, and why the discriminator's parameters aren't frozen when training the generator. Do you have any idea?
```python
# The commented-out block in main.py (discriminator-training step):
fake_map = generator(batch_img)                    # G's predicted map for this batch
inp_d = torch.cat((batch_img, fake_map), 1)        # condition D on the input image
outputs = discriminator(inp_d)
d_fake_loss = loss_function(outputs, fake_labels)  # D should call the fake pair "fake"
print('D_fake_loss = ', d_fake_loss.data[0])       # .data[0] is pre-0.4 PyTorch; use .item() on 0.4+
```
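
For context, here is a minimal sketch of the pattern most PyTorch GAN loops follow; the names `optimizer_d`, `optimizer_g`, `real_map`, and `real_labels` are my assumptions, not necessarily this repo's code. Because each optimizer holds only one network's parameters, stepping it can never update the other network, and calling `.detach()` on the fake map stops gradients from flowing back into the generator during the discriminator update, so no explicit freezing is needed:

```python
import torch
import torch.nn as nn

loss_function = nn.BCELoss()  # assumed; the repo may use a different criterion

def train_discriminator(generator, discriminator, optimizer_d,
                        batch_img, real_map, real_labels, fake_labels):
    optimizer_d.zero_grad()

    # Real pair: D should output "real".
    out_real = discriminator(torch.cat((batch_img, real_map), 1))
    d_real_loss = loss_function(out_real, real_labels)

    # Fake pair: detach() blocks gradients from reaching the generator,
    # so G's parameters need no explicit freezing here.
    fake_map = generator(batch_img)
    out_fake = discriminator(torch.cat((batch_img, fake_map.detach()), 1))
    d_fake_loss = loss_function(out_fake, fake_labels)

    d_loss = d_real_loss + d_fake_loss
    d_loss.backward()
    optimizer_d.step()  # this optimizer holds only D's parameters
    return d_loss.item()

def train_generator(generator, discriminator, optimizer_g,
                    batch_img, real_labels):
    optimizer_g.zero_grad()

    # D's parameters do receive gradients here, but optimizer_g holds only
    # G's parameters, so D is never updated by this step. Toggling
    # p.requires_grad_(False) on D would merely skip that wasted bookkeeping.
    fake_map = generator(batch_img)
    out = discriminator(torch.cat((batch_img, fake_map), 1))
    g_loss = loss_function(out, real_labels)  # push D to call the fake "real"
    g_loss.backward()
    optimizer_g.step()
    return g_loss.item()
```

The stale gradients D accumulates during the generator step are cleared by the next `optimizer_d.zero_grad()` call, which may be why the author skipped explicit freezing entirely.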