-
Hi Andrew,
I've just done some experiments with WGAN with Gradient Penalty (Improved Training of Wasserstein GANs, Gulrajani et al.) and found that it can converge to a reasonable solution on the syn…
-
The critic's loss function is not valid.
It should use the WGAN-GP loss (with the gradient penalty), but it currently uses only the improved-WGAN loss.
This needs to be fixed.
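For reference, the gradient-penalty term from Gulrajani et al. is `lambda * E[(||grad_x_hat D(x_hat)||_2 - 1)^2]`, evaluated on interpolates `x_hat` between real and fake samples. A toy numeric sketch of just that term (the linear critic here is a hypothetical example chosen so its gradient is analytic, not the repo's model):

```python
import numpy as np

# Hypothetical linear critic D(x) = w . x, so grad_x D(x) = w everywhere.
w = np.array([3.0, 4.0])

def critic(x):
    return x @ w

# Interpolate between one real and one fake sample (x_hat in the paper):
real, fake = np.array([1.0, 0.0]), np.array([0.0, 1.0])
eps = 0.5  # in practice eps ~ Uniform(0, 1), sampled per example
x_hat = eps * real + (1 - eps) * fake

grad = w  # analytic gradient of the linear critic at x_hat
penalty = (np.linalg.norm(grad) - 1.0) ** 2
print(penalty)  # (5 - 1)^2 = 16.0
```

In the actual model the gradient would come from `tf.gradients(critic(x_hat), x_hat)` rather than being known in closed form.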
-
Default configurations may diverge.
When a configuration is created, it is initialised to random values within a predefined range. That range currently contains some bad regions.
Until this is fixed see `e…
-
I'm struggling to understand why the Wasserstein Loss function is correct, as seen below:
```
def wasserstein_loss(y_true, y_pred):
    return K.mean(y_true * y_pred)
```
The comment in the cod…
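For what it's worth, with labels of +1/-1 the expression `K.mean(y_true * y_pred)` reduces to plus-or-minus the mean critic score, which is exactly the Wasserstein estimate `mean(D(real)) - mean(D(fake))` up to sign. A NumPy stand-in (the +1/-1 label convention, and which sign goes to real vs. fake, is an assumption about how the example wires its targets):

```python
import numpy as np

def wasserstein_loss(y_true, y_pred):
    # NumPy stand-in for K.mean(y_true * y_pred)
    return np.mean(y_true * y_pred)

real_scores = np.array([0.9, 1.1, 1.0])    # critic outputs on real samples
fake_scores = np.array([-0.8, -1.2, -1.0]) # critic outputs on fake samples

# With y_true = -1 for real and +1 for fake, the summed loss is
# -mean(D(real)) + mean(D(fake)), i.e. the critic is pushed to score
# real samples high and fake samples low:
loss = (wasserstein_loss(-np.ones(3), real_scores)
        + wasserstein_loss(np.ones(3), fake_scores))
print(loss)  # approximately -2.0
```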
-
I ran into a rather common use case that would require me to reimplement BatchNorm, or modify it, since the required functionality does not exist.
Imagine using a GAN loss that requires a 2nd order …
-
Hi,
I am working with your example "Improved-WGAN.py".
I want to compute gradients using tf.gradients.
It works when I use LayerNorm in the discriminator.
But it does not work when I change LayerNorm…
-
@jiamings In your `wgan-v2.py`, you write
`ddx = tf.sqrt(tf.reduce_sum(tf.square(ddx), axis=1))`
this reduces a tensor of shape `(50, 64, 64, 3)` to `(50, 64, 3)` in TensorFlow 1.6.0.
If I understand …
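A quick shape check confirms this: summing over `axis=1` removes only that one axis. NumPy mirrors `tf.reduce_sum` here; the flatten-then-sum fix below is a sketch of one common way to get a per-sample gradient norm, not necessarily what `wgan-v2.py` intends:

```python
import numpy as np

x = np.zeros((50, 64, 64, 3))  # stand-in for the (batch, H, W, C) gradient

# Summing over axis=1 only collapses that single axis:
print(np.sum(np.square(x), axis=1).shape)  # (50, 64, 3)

# One common fix: flatten each sample, then sum over everything except
# the batch axis, giving one norm per sample:
flat = x.reshape(x.shape[0], -1)
ddx = np.sqrt(np.sum(np.square(flat), axis=1))
print(ddx.shape)  # (50,)
```

In TensorFlow the equivalent would be `tf.reshape(ddx, [batch, -1])` followed by `tf.reduce_sum(..., axis=1)` (or a `reduce_sum` over axes `[1, 2, 3]`).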
-
Hi, I have been experimenting with the repo, lately switching out the ReLU activations in gan_cifar.py for ELU activations; however, even with varying the lambda …
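As a side note on the swap itself, ELU differs from ReLU only for negative inputs, where it stays smooth and nonzero instead of clamping to zero, which changes the gradients the penalty term constrains. A minimal sketch of the two activations (plain NumPy, not the repo's code):

```python
import numpy as np

def relu(x):
    # Clamp negative inputs to 0
    return np.maximum(x, 0.0)

def elu(x, alpha=1.0):
    # Smoothly saturate negative inputs toward -alpha
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))  # negatives become 0; 1.5 passes through unchanged
print(elu(x))   # negatives saturate toward -alpha; 1.5 passes through
```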
-
Traceback (most recent call last):
  File "improved_wgan.py", line 251, in
    wgan = ImprovedWGAN()
  File "improved_wgan.py", line 84, in __init__
    loss_weights=[1, 1, 10])
  File "/usr/loc…
-
I've tried to combine improved WGAN with pix2pix, but training is very unstable and even explodes to NaN. I have also implemented the original WGAN with pix2pix, and it works well. It seems like the …