-
Hi! Awesome work :)
I've noticed that in all files (but not in `gan_toy.py`) you use `interpolates = real_data + (alpha*differences)` but not `interpolates = (1-alpha)*real_data + (alpha*differences…
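For what it's worth, if `differences = fake_data - real_data` and the truncated second form was meant to end in `fake_data`, the two expressions are algebraically identical. A quick numpy check (shapes here are illustrative, not from the repo):

```python
import numpy as np

rng = np.random.default_rng(0)
real_data = rng.normal(size=(4, 8))   # batch of real samples
fake_data = rng.normal(size=(4, 8))   # batch of generator samples
alpha = rng.uniform(size=(4, 1))      # one interpolation weight per sample

differences = fake_data - real_data
a = real_data + alpha * differences                 # form used in most files
b = (1 - alpha) * real_data + alpha * fake_data     # the alternative form

# identical up to floating-point rounding
assert np.allclose(a, b)
```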
-
The paper link points to the loss-sensitive GAN paper, and the code link points to the improved WGAN paper.
-
I could be wrong, but it seems like the calculation for the gradient penalty is not the same across different code examples in this repo. In the paper, I believe the calculation is shown in line 6 in …
wronk updated, 7 years ago
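Assuming the truncated reference is to line 6 of Algorithm 1 in the WGAN-GP paper, the term there is λ·(‖∇D(x̂)‖₂ − 1)², i.e. the squared deviation of the critic's gradient norm (at interpolated points) from 1. A minimal numpy sketch with a toy linear critic, whose gradient is known in closed form (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=8)                     # toy linear critic D(x) = w . x
x_hat = rng.normal(size=(4, 8))            # interpolated samples

grad = np.tile(w, (4, 1))                  # dD/dx_hat is w for every sample
grad_norm = np.linalg.norm(grad, axis=1)   # per-sample L2 norm, shape (4,)

LAMBDA = 10.0                              # penalty weight from the paper
penalty = LAMBDA * np.mean((grad_norm - 1.0) ** 2)
```

In a real implementation `grad` comes from automatic differentiation rather than a closed form; the point is only the `(norm - 1)^2` shape of the penalty.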
-
When I try to apply this loss function to my model, modified from DCGAN, I get NaN loss. I wonder whether this modified vanilla loss function could lead to this issue? Thanks!
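Not necessarily the cause here, but one frequent source of NaNs in WGAN-GP ports is the square root inside the gradient-norm computation: the derivative of `sqrt` at 0 is infinite, so autodiff produces NaN when the critic's gradient vanishes. A hedged numpy illustration of the usual epsilon stabilization (the epsilon value is a common convention, not taken from this repo):

```python
import numpy as np

grads = np.zeros((4, 8))  # pathological case: zero critic gradients

# Naive norm: the forward value is fine (0), but d/dx sqrt(x) -> inf at x = 0,
# which is where NaNs typically enter WGAN-GP training in autodiff frameworks.
naive = np.sqrt(np.sum(grads ** 2, axis=1))

# Common fix: add a small epsilon under the square root so the gradient of
# the norm stays finite everywhere.
eps = 1e-12
stable = np.sqrt(np.sum(grads ** 2, axis=1) + eps)
```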
-
Could you release your source code of conditional WGAN-GP for cifar 10 (as shown in Figure 5 in your updated v2 manuscript)? Thanks.
-
### Environment info
Operating System: Ubuntu 14.04
Installed version of CUDA and cuDNN:
(please attach the output of `ls -l /path/to/cuda/lib/libcud*`):
CUDA 8.0, TensorFlow 1.0
If you …
-
I think our current model with `TensorGraph` can't nicely support GANs. In order to train GANs, you need to train both a discriminator `D` and a generator `G`. The training of these two models is thre…
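For context, the alternating schedule being described can be sketched framework-agnostically; the function and stub names below are illustrative, not `TensorGraph` API:

```python
import itertools

def train_gan(d_step, g_step, data_iter, iterations, n_critic=5):
    """Alternate critic and generator updates, WGAN-style:
    n_critic discriminator steps per generator step."""
    for _ in range(iterations):
        for _ in range(n_critic):
            d_step(next(data_iter))  # update D only, G held fixed
        g_step()                     # update G only, D held fixed

# Illustrative stubs standing in for the real training ops:
counts = {"d": 0, "g": 0}
train_gan(lambda batch: counts.update(d=counts["d"] + 1),
          lambda: counts.update(g=counts["g"] + 1),
          itertools.repeat(None), iterations=3)
# counts is now {"d": 15, "g": 3}: 5 critic updates per generator update
```

The difficulty for a single-graph abstraction is exactly this: each step must touch only one model's variables while the other's stay frozen.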
-
Hi, do you plan to provide a pytorch implementation of [the recent paper](https://arxiv.org/abs/1704.00028) on "Improved Training of Wasserstein GANs"?
Is there an easy way to compute the gradient w.…
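Assuming the truncated question is about computing the gradient of the critic's output with respect to its input (as the gradient penalty requires), PyTorch exposes this via `torch.autograd.grad`; a minimal sketch with a toy critic (names illustrative):

```python
import torch

x = torch.randn(4, 8, requires_grad=True)
y = (x ** 2).sum(dim=1)            # toy "critic": D(x) = sum(x_i^2)

grads = torch.autograd.grad(
    outputs=y,
    inputs=x,
    grad_outputs=torch.ones_like(y),
    create_graph=True,             # keeps the graph so the penalty itself is differentiable
)[0]

# For this toy critic the gradient is known analytically: d/dx sum(x^2) = 2x
assert torch.allclose(grads, 2 * x)
```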
-
https://github.com/igul222/improved_wgan_training/blob/master/gan_language.py#L89
Remove "_"
-
Hey @igul222, thanks again for releasing this code. Has really helped me experiment with WGANs. I noticed in your paper protocol that you do NOT decrease the learning rate over 200k iterations.
I h…