-
Hi...
I was wondering if adding the Wasserstein distance to the code would help the GAN stabilize and give better results... From what I've read, it has really nice properties, and implementing it sh…
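Something along these lines is what I had in mind; just a minimal PyTorch sketch of the WGAN objective with weight clipping, where `D`, `real`, and `fake` are placeholder names rather than anything from this repo:

```python
import torch

def wgan_critic_loss(D, real, fake):
    # The critic maximizes E[D(real)] - E[D(fake)]; we minimize the negation.
    return D(fake).mean() - D(real).mean()

def wgan_generator_loss(D, fake):
    # The generator maximizes E[D(fake)], i.e. minimizes -E[D(fake)].
    return -D(fake).mean()

def clip_critic_weights(D, c=0.01):
    # Weight clipping enforces the Lipschitz constraint as in the original
    # WGAN (Arjovsky et al., 2017); a gradient penalty is a common upgrade.
    for p in D.parameters():
        p.data.clamp_(-c, c)
```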
-
**Submitting author:** [@josemanuel22](https://github.com/josemanuel22) (José Manuel de Frutos)
**Repository:** https://github.com/josemanuel22/ISL
**Branch with paper.md** (empty if default branch):
**Version:** v0…
-
https://arxiv.org/pdf/1703.10717.pdf
We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Ne…
leo-p updated 7 years ago
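For reference, a condensed sketch of the equilibrium mechanism the abstract describes, assuming a PyTorch auto-encoder discriminator `ae_D` and a balance term `k` (names are hypothetical, hyperparameter values illustrative):

```python
def began_losses(ae_D, real, fake, k, gamma=0.5, lambda_k=0.001):
    # BEGAN's discriminator is an auto-encoder; its per-batch "score"
    # is the L1 reconstruction error.
    def recon(x):
        return (x - ae_D(x)).abs().mean()

    loss_real = recon(real)
    loss_fake_d = recon(fake.detach())  # no gradient into the generator
    loss_fake_g = recon(fake)

    d_loss = loss_real - k * loss_fake_d
    g_loss = loss_fake_g

    # Proportional control of k maintains the equilibrium
    # E[L(fake)] = gamma * E[L(real)].
    k = min(max(k + lambda_k * (gamma * loss_real.item() - loss_fake_d.item()), 0.0), 1.0)

    # The paper's convergence measure.
    M = loss_real.item() + abs(gamma * loss_real.item() - loss_fake_d.item())
    return d_loss, g_loss, k, M
```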
-
Your spectral normalization normalizes the spectral norm of the weight matrix W so that it satisfies the Lipschitz constraint σ(W) = 1. So we are considering whether to combine the Wasserstein GAN with spectral n…
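If it helps, one low-friction way to try the combination in PyTorch is the built-in `torch.nn.utils.spectral_norm` wrapper around each weight layer of the critic; the architecture below is only an illustration:

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Wrapping each weight layer with spectral_norm divides W by its largest
# singular value, so every layer is (approximately) 1-Lipschitz.
critic = nn.Sequential(
    spectral_norm(nn.Conv2d(3, 64, 4, stride=2, padding=1)),
    nn.LeakyReLU(0.2),
    spectral_norm(nn.Conv2d(64, 128, 4, stride=2, padding=1)),
    nn.LeakyReLU(0.2),
    nn.Flatten(),
    spectral_norm(nn.Linear(128 * 8 * 8, 1)),  # assumes 32x32 inputs
)
```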
-
### Link to the paper
[[arXiv:1801.04406] Which Training Methods for GANs do actually Converge?](https://arxiv.org/abs/1801.04406)
### Authors / Affiliations
Lars Mescheder, Andreas Geiger, Sebastian Nowozin
- MPI…
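The regularizer this paper ends up recommending is the R1 zero-centered gradient penalty on real samples; a minimal PyTorch sketch, with `D` and `real` as assumed names:

```python
import torch

def r1_penalty(D, real, gamma=10.0):
    # R1 regularizer (Mescheder et al., 2018): penalize the squared
    # gradient norm of the discriminator on *real* samples only.
    real = real.detach().requires_grad_(True)
    scores = D(real)
    grad, = torch.autograd.grad(scores.sum(), real, create_graph=True)
    return (gamma / 2) * grad.pow(2).flatten(1).sum(1).mean()
```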
-
It seems that in the original paper the output of the discriminator (d_loss) is an estimate of the EM distance, so should it be positive? The curve of d_loss shows it tends to converge, but the negative nu…
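For what it's worth, the sign usually comes down to the convention used in the implementation; a sketch of the common one (all names are placeholders):

```python
def critic_loss_and_em_estimate(D, real, fake):
    # Most implementations *minimize* the negated critic objective:
    #   d_loss = E[D(fake)] - E[D(real)]
    d_loss = D(fake).mean() - D(real).mean()
    # A good critic scores real above fake, so d_loss converges to a
    # negative value; the EM-distance estimate itself is -d_loss >= 0.
    return d_loss, -d_loss.detach()
```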
-
Thanks for sharing this easy-to-follow code.
I am currently applying WGAN to learning a text distribution.
Here are some questions regarding WGAN.
Question 1. In Figure 3, the loss of MLP and DCGAN seems compar…
-
Hello!
Thanks for the great tool.
But there is one question: is it possible to implement this idea (https://arxiv.org/pdf/1611.07004v1.pdf) with the help of your library?
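In case it's useful, and without knowing your library's API, the core of the pix2pix objective in plain PyTorch looks roughly like the sketch below; `G`, `D`, `x`, `y`, and the function name are all hypothetical:

```python
import torch
import torch.nn.functional as F

def pix2pix_generator_loss(D, G, x, y, lambda_l1=100.0):
    # pix2pix conditions the discriminator on the input: it scores the
    # (input, output) pair concatenated along the channel axis, and the
    # generator objective adds an L1 term toward the target y.
    fake = G(x)
    logits = D(torch.cat([x, fake], dim=1))
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return adv + lambda_l1 * F.l1_loss(fake, y)
```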
-
# Abstract
Generative Adversarial Networks (GANs) have shown outstanding performance at data generation. They are used across many domains, yet stable training remains difficult. Known problems include Nash-equilibrium issues, internal covariate shift, mode collapse, vanishing gradients,…
-
Hey @DegardinBruno
Great work. Thanks for sharing your code!
While training on the NTU-120 dataset, the generator loss explodes. The only change we made was increasing the batch size from 32 to 380.
```
[Epoch 297/1200…
```
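One common culprit when only the batch size changes is an unadjusted learning rate; a hedged sketch of the linear-scaling heuristic (the base values are assumptions, not this repo's defaults):

```python
# Linear-scaling heuristic: grow the learning rate with the batch size
# (and consider a warmup schedule). Base values below are illustrative.
base_lr, base_batch_size = 2e-4, 32
batch_size = 380
lr = base_lr * batch_size / base_batch_size  # ~2.4e-3
```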