@shenxun520 Thank you for your interest in our work.
We did run CT-GAN with all 50,000 labeled images for 100k iterations, and the result was lower than WGAN-GP. Decreasing the dropout rates gives a slightly better result (e.g., 0.2, 0.5, 0.5, as in the ResNet with 50,000 images, or removing one dropout layer). But with that standard CNN structure, WGAN-GP cannot even beat DCGAN.
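For concreteness, here is a minimal sketch (in TF 1.x style, matching the era of `CT_gan_cifar.py`) of where such dropout rates could sit in a critic; the layer sizes and structure are illustrative assumptions, not the repository's exact architecture:

```python
import tensorflow as tf  # TF 1.x API, as used by the original code

def critic_with_dropout(x, rates=(0.2, 0.5, 0.5)):
    # Hypothetical critic: three conv blocks, each followed by dropout
    # at the suggested rates (0.2, 0.5, 0.5). Filter counts are illustrative.
    h = x
    for i, rate in enumerate(rates):
        h = tf.layers.conv2d(h, filters=64 * 2 ** i, kernel_size=3,
                             strides=2, padding='same',
                             activation=tf.nn.leaky_relu)
        # tf.nn.dropout in TF 1.x takes a *keep* probability, so convert.
        h = tf.nn.dropout(h, keep_prob=1.0 - rate)
    h = tf.layers.flatten(h)
    return tf.layers.dense(h, 1)  # scalar critic score
```

Dropping the last entry from `rates` (or setting it to 0) corresponds to eliminating one dropout layer as suggested above.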
As you noticed, the method is somewhat sensitive to the network structure; however, with the same hyper-parameters as in our work, the results are satisfactory.
@biuyq Thank you for your reply.
Best.
Hi,
I ran your code `CT_gan_cifar.py` and evaluated the Inception Score of CT-GAN on 1,000 generated images; the best score within 10,000 iterations was 5.65. To compare CT-GAN with the original WGAN-GP, I also ran WGAN-GP for 10,000 iterations and evaluated its Inception Score on 1,000 generated images; its best score was 5.96. These results suggest that CT-GAN performs worse than WGAN-GP with the standard CNN structure.
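For reference, the Inception Score is IS = exp(E_x KL(p(y|x) ‖ p(y))); below is a generic NumPy sketch of that formula, assuming a matrix of Inception softmax outputs is already available (this is not the repository's evaluation script, and the common protocol averages over 50,000 samples in 10 splits, so scores from only 1,000 images are noisy):

```python
import numpy as np

def inception_score(probs, splits=10, eps=1e-12):
    # probs: (N, num_classes) softmax outputs of the Inception model
    # for N generated images. Returns (mean, std) of IS over splits.
    scores = []
    for part in np.array_split(probs, splits):
        p_y = part.mean(axis=0, keepdims=True)        # marginal p(y)
        kl = part * (np.log(part + eps) - np.log(p_y + eps))
        scores.append(np.exp(kl.sum(axis=1).mean()))  # exp(mean per-sample KL)
    return np.mean(scores), np.std(scores)
```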
So I have two questions: (1) In your experiments, did CT-GAN also obtain a lower Inception Score than WGAN-GP on the standard CNN? If not, is there any trick I should be aware of? (2) Does CT-GAN outperform WGAN-GP mainly because of the ResNet structure used in your paper, Improving the Improved Training of Wasserstein GANs?
Thank you for any reply.
Best wishes to you!