HelenMao opened this issue 4 years ago (status: Open)
I think the optimizer setup may have a problem:
optimD = optim.Adam([{'params': discriminator.parameters()}, {'params': netD.parameters()}], lr=params['learning_rate'], betas=(params['beta1'], params['beta2']))
optimG = optim.Adam([{'params': netG.parameters()}, {'params': netQ.parameters()}], lr=params['learning_rate'], betas=(params['beta1'], params['beta2']))
When optimizing Q, the shared layers are not optimized: `discriminator.parameters()` (the shared body) is registered only in `optimD`, so the mutual-information loss backpropagated through `netQ` never updates those layers during the G/Q step.
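One possible fix is to also register the shared body's parameters in `optimG`, so the Q loss can update them. Below is a minimal, self-contained sketch of that idea; the module names (`shared`, `head_d`, `head_q`, `net_g`) are hypothetical stand-ins for the repo's `discriminator`, `netD`, `netQ`, and `netG`, and the loss is a placeholder, not the real InfoGAN objective.

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical stand-ins for the modules in the repo.
shared = nn.Linear(4, 4)   # shared convolutional body ("discriminator")
head_d = nn.Linear(4, 1)   # real/fake head ("netD")
head_q = nn.Linear(4, 2)   # latent-code head ("netQ")
net_g = nn.Linear(2, 4)    # generator ("netG")

# Sketched fix: give optimG the shared parameters as well, so the
# mutual-information loss through Q reaches the shared layers.
optimD = optim.Adam([{'params': shared.parameters()},
                     {'params': head_d.parameters()}],
                    lr=2e-4, betas=(0.5, 0.999))
optimG = optim.Adam([{'params': net_g.parameters()},
                     {'params': head_q.parameters()},
                     {'params': shared.parameters()}],
                    lr=2e-4, betas=(0.5, 0.999))

# One G/Q step with a placeholder loss: the gradient now flows
# generator -> shared body -> Q head, and optimG updates all three.
z = torch.randn(8, 2)
q_out = head_q(shared(net_g(z)))
loss_q = q_out.pow(2).mean()      # placeholder for the real info loss
optimG.zero_grad()
loss_q.backward()
before = shared.weight.detach().clone()
optimG.step()
shared_updated = not torch.equal(before, shared.weight)
```

Note that sharing parameters between two Adam optimizers gives each optimizer its own moment estimates for those parameters; an alternative is a single optimizer with separate parameter groups, stepping only the relevant groups per phase.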
@HelenMao Hi, does this pull request against another PyTorch implementation of InfoGAN fix this? https://github.com/pianomania/infoGAN-pytorch/pull/1
Hi, thanks for your great implementation. I ran the model on the CelebA dataset. However, when I test the model, I find that if the categorical codes are fixed, different noise vectors produce the same results. How did you get the CelebA results shown on the GitHub page?