Closed: primecai closed this issue 3 years ago.
Closed. The optimizers were defined as follows:

```python
optimizer_G = torch.optim.Adam(generator_ddp.parameters(), lr=metadata['gen_lr'],
                               betas=metadata['betas'], weight_decay=metadata['weight_decay'])
optimizer_D = torch.optim.Adam(discriminator_ddp.parameters(), lr=metadata['disc_lr'],
                               betas=metadata['betas'], weight_decay=metadata['weight_decay'])
```
Since optimizer_G only holds the generator's parameters, stepping it never updates the discriminator; I had overlooked this.
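For illustration, here is a minimal sketch (using stand-in modules, not code from the repo) of why this is sufficient: a PyTorch optimizer only ever updates the parameters it was constructed with, so stepping optimizer_G leaves the discriminator's weights untouched even though gradients flow through the discriminator during the generator update. The only side effect is that stale gradients accumulate in the discriminator's `.grad` buffers, which is harmless as long as they are zeroed before the discriminator's own step.

```python
# Sketch only: stand-in modules to show that optimizer_G never touches D's weights.
import torch
import torch.nn as nn

generator = nn.Linear(4, 4)       # stand-in for the real generator
discriminator = nn.Linear(4, 1)   # stand-in for the real discriminator

optimizer_G = torch.optim.Adam(generator.parameters(), lr=1e-3)

disc_weight_before = discriminator.weight.clone()

# Generator update: the loss depends on the discriminator, so the
# discriminator's parameters do receive gradients...
fake = generator(torch.randn(8, 4))
g_loss = -discriminator(fake).mean()
optimizer_G.zero_grad()
g_loss.backward()
optimizer_G.step()

# ...but only the generator's parameters were actually updated.
assert torch.equal(discriminator.weight, disc_weight_before)
```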
Hi,
Many thanks for the great work and for releasing the code. If I'm not mistaken, the discriminator is not frozen while training the generator: https://github.com/marcoamonteiro/pi-GAN/blob/0800af72b8a9371b2b62fec2ae69c32994bb802f/train.py#L264 I'm wondering whether there is a reason for this?
Many thanks.
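For context, a hypothetical sketch of the "freezing" pattern the question refers to (not code from the pi-GAN repo). As the closing comment above explains, it is not needed for correctness here, since optimizer_G never updates discriminator parameters; its main benefit would be skipping the gradient computation for the discriminator's weights during the generator step.

```python
# Hypothetical sketch of freezing the discriminator during the generator step.
import torch
import torch.nn as nn

generator = nn.Linear(4, 4)       # stand-in modules for illustration
discriminator = nn.Linear(4, 1)
optimizer_G = torch.optim.Adam(generator.parameters(), lr=1e-3)

for p in discriminator.parameters():
    p.requires_grad_(False)       # no gradients computed for D's parameters

g_loss = -discriminator(generator(torch.randn(8, 4))).mean()
optimizer_G.zero_grad()
g_loss.backward()
optimizer_G.step()

for p in discriminator.parameters():
    p.requires_grad_(True)        # re-enable for the discriminator's own update
```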