andrewgordonwilson / bayesgan

Tensorflow code for the Bayesian GAN (https://arxiv.org/abs/1705.09558) (NIPS 2017)

Add WGANGP to comparison #4

Status: closed (htt210 closed this issue 6 years ago)

htt210 commented 6 years ago

Hi Andrew, I've just run some experiments with WGAN with gradient penalty (WGAN-GP; "Improved Training of Wasserstein GANs", Gulrajani et al.) and found that it converges to a reasonable solution on the synthetic dataset. Although WGAN-GP does not converge as fast as the Bayesian GAN, I think it would be nice if you could add WGAN-GP to the baselines in your experiments. Here is the output of my (very bad) implementation of WGAN-GP after 8000 iterations:

```python
nx = 100        # data dimensionality
nz = 10         # latent (noise) dimensionality
batchSize = 64
# Network specs as (layer type, args) pairs
Gconfig = [('Linear', (nz, 1000)), ('ReLU', ()), ('Linear', (1000, nx))]
Dconfig = [('Linear', (nx, 1000)), ('ReLU', ()), ('Linear', (1000, 1))]
optimizer = 'Adam'
optimParams = {'lr': 1e-4, 'betas': (0.5, 0.9)}
```
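For readers unfamiliar with this spec format: each config is a list of `(layer type, args)` pairs describing an MLP. A minimal NumPy sketch of how such a spec could be interpreted (the builder and initialization here are my own illustration, not htt210's actual code):

```python
import numpy as np

def build_mlp(config, rng):
    """Turn a [('Linear', (fan_in, fan_out)), ('ReLU', ()), ...] spec into
    a list of layers with randomly initialized weights."""
    layers = []
    for kind, args in config:
        if kind == 'Linear':
            fan_in, fan_out = args
            W = rng.normal(scale=1.0 / np.sqrt(fan_in), size=(fan_in, fan_out))
            b = np.zeros(fan_out)
            layers.append(('Linear', (W, b)))
        elif kind == 'ReLU':
            layers.append(('ReLU', None))
    return layers

def forward(layers, x):
    """Apply the layer list to a batch of row vectors."""
    for kind, params in layers:
        if kind == 'Linear':
            W, b = params
            x = x @ W + b
        else:  # ReLU
            x = np.maximum(x, 0.0)
    return x

rng = np.random.default_rng(0)
nx, nz, batchSize = 100, 10, 64
G = build_mlp([('Linear', (nz, 1000)), ('ReLU', ()), ('Linear', (1000, nx))], rng)
D = build_mlp([('Linear', (nx, 1000)), ('ReLU', ()), ('Linear', (1000, 1))], rng)

z = rng.normal(size=(batchSize, nz))          # latent batch
fake = forward(G, z)                          # generated samples, (64, 100)
scores = forward(D, fake)                     # critic scores, (64, 1)
```

In practice the specs above map directly onto `torch.nn.Linear`/`torch.nn.ReLU`, with `optimParams` passed to `torch.optim.Adam`.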

[Image: WGAN-GP generator samples at iteration 8000 (iter_8000)]
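For context, the gradient-penalty term from Gulrajani et al. pushes the critic's input-gradient norm toward 1 at random interpolates between real and generated samples: `lambda * E[(||grad D(x_hat)|| - 1)^2]`. A minimal NumPy sketch of just that term, using a toy linear critic so the input gradient is known in closed form (in real code, e.g. PyTorch, the gradient would come from autograd; the critic and data here are illustrative, not from the issue):

```python
import numpy as np

rng = np.random.default_rng(0)
nx, batchSize, lam = 100, 64, 10.0  # lam = penalty weight (10 in the paper)

# Toy linear critic D(x) = w @ x; its gradient w.r.t. x is w for every input.
w = rng.normal(size=nx)

real = rng.normal(loc=1.0, size=(batchSize, nx))    # stand-in real batch
fake = rng.normal(loc=-1.0, size=(batchSize, nx))   # stand-in generated batch

# Interpolate uniformly between real and fake (one epsilon per sample).
eps = rng.uniform(size=(batchSize, 1))
x_hat = eps * real + (1.0 - eps) * fake

# Per-sample gradient of D at x_hat; constant (= w) only because D is linear.
grads = np.tile(w, (batchSize, 1))

# Penalty: lambda * E[(||grad||_2 - 1)^2], added to the WGAN critic loss.
grad_norms = np.linalg.norm(grads, axis=1)
gp = lam * np.mean((grad_norms - 1.0) ** 2)
```

The penalty replaces weight clipping from the original WGAN, which is why WGAN-GP tends to train more stably on problems like this synthetic dataset.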

ysaatchi commented 6 years ago

This is cool -- a nice extension would be to run WGAN-GP on CIFAR-10 or MNIST and see whether you get better semi-supervised results.