batsa003 / salgan


main.py #8

Open ghost opened 6 years ago

ghost commented 6 years ago

I wonder why, in main.py, you commented out (`#`) the following lines when training the discriminator:

```python
fake_map = generator(batch_img)
inp_d = torch.cat((batch_img, fake_map), 1)
outputs = discriminator(inp_d)
d_fake_loss = loss_function(outputs, fake_labels)
print('D_fake_loss = ', d_fake_loss.data[0])
```

IceClear commented 5 years ago

I also wonder why he doesn't freeze the generator's parameters when training the discriminator, and why the discriminator's parameters are not frozen when training the generator. Do you have any idea?
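For what it's worth, explicit freezing is not strictly required as long as each network has its own optimizer and only that optimizer's `step()` is called: the other network's weights never change. Detaching or freezing mainly avoids computing gradients that are then thrown away. Below is a minimal hypothetical sketch (tiny `nn.Linear` stand-ins, not the repo's actual SalGAN models or loss) showing two common ways to keep the generator fixed during a discriminator update:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Linear(4, 4)        # stand-in for the real generator
discriminator = nn.Linear(8, 1)    # stand-in for the real discriminator

d_optim = torch.optim.SGD(discriminator.parameters(), lr=0.1)

batch_img = torch.randn(2, 4)
fake_labels = torch.zeros(2, 1)

g_weight_before = generator.weight.detach().clone()

# Option 1: detach the generator output, so backward() never
# computes gradients for the generator's parameters.
fake_map = generator(batch_img).detach()
inp_d = torch.cat((batch_img, fake_map), 1)
d_fake_loss = nn.functional.binary_cross_entropy_with_logits(
    discriminator(inp_d), fake_labels)

d_optim.zero_grad()
d_fake_loss.backward()
d_optim.step()  # only discriminator weights are updated

# Option 2: explicitly freeze/unfreeze parameters with requires_grad.
for p in generator.parameters():
    p.requires_grad = False
# ... run the discriminator update here ...
for p in generator.parameters():
    p.requires_grad = True
```

Either way, the generator's weights stay untouched during the discriminator step; the same pattern applies in reverse (freeze or detach the discriminator) when updating the generator.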