-
The architecture of the network is similar to CycleGAN's when transforming one domain into another, but the performance is superior to CycleGAN. The main difference is the use of a gradient penalty. …
-
There are several implementations on the web, but I couldn't get them to converge and I suspect a small mistake somewhere; it would be interesting to test your code.
-
To foster community involvement, some richer sample code beyond MNIST should be tackled.
Generative Adversarial Networks are a hot topic in ML, and some sample code using Swift should help enco…
-
Hi, I want to use the gradient penalty since I want to try improved WGAN.
What does `build from source` mean?
Does it mean I should train from scratch, or build the PyTorch source code?
Thanks.
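For what it's worth, `build from source` in this context most likely refers to compiling the PyTorch source code, not retraining anything: early PyTorch releases lacked the double backward needed for the penalty term, while current releases ship it via `torch.autograd.grad`. Below is a minimal sketch of the improved-WGAN gradient penalty; the `critic` here is a toy linear stand-in for a real discriminator, and `lambda_gp=10` follows the paper's default.

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    # Sample random points on the lines between real and fake samples.
    eps = torch.rand(real.size(0), 1)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    out = critic(x_hat)
    # Gradient of the critic output w.r.t. the interpolated inputs;
    # create_graph=True keeps the graph so the penalty itself is differentiable.
    grads = torch.autograd.grad(
        outputs=out, inputs=x_hat,
        grad_outputs=torch.ones_like(out),
        create_graph=True, retain_graph=True)[0]
    # Penalize the gradient norm's deviation from 1.
    return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()

critic = torch.nn.Linear(2, 1)  # toy critic: f(x) = w.x + b
real = torch.randn(4, 2)
fake = torch.randn(4, 2)
gp = gradient_penalty(critic, real, fake)
print(float(gp))
```

In a real training loop this term is added to the critic's Wasserstein loss on each critic update.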
-
Hi, I am just a beginner with GANs. I am trying this code now, and I find that
in your discriminator model, the final layer
`outputs = Lambda(lambda z: z/2)(outputs)` is not in the original PyTorch code…
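For context, that Keras `Lambda` line is just an elementwise halving of the critic's outputs; it adds no trainable parameters. A plain-Python sketch of its effect (the values are hypothetical, purely illustrative):

```python
def halve(z):
    # Mirrors Lambda(lambda z: z / 2): scale every output by 0.5.
    return [v / 2 for v in z]

print(halve([1.0, -3.0, 4.0]))  # [0.5, -1.5, 2.0]
```

Since the critic's loss is linear in its outputs, such a constant scaling effectively rescales the critic's gradients, so whether it matters depends on how the learning rate and penalty weight were tuned.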
-
Hi,
I'm trying to train on CelebA (cropped and resized to 64x64).
The results in WGAN-GP mode look great, both in quality and diversity; however, when I set the mode to 'wgan', I get very distorte…
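One plausible factor: plain 'wgan' mode enforces the Lipschitz constraint by weight clipping rather than a gradient penalty, and clipping is known to be more brittle, which may explain the distorted samples. A minimal sketch of the clipping step (the list-of-floats weights are a stand-in for real tensors; `c=0.01` is the original WGAN paper's default):

```python
def clip_weights(weights, c=0.01):
    # After each critic update, clamp every weight into [-c, c].
    return [max(-c, min(c, w)) for w in weights]

print(clip_weights([0.5, -0.02, 0.005]))  # [0.01, -0.01, 0.005]
```

If plain WGAN must be used, tuning the clip value `c` and the critic/generator update ratio is usually the first thing to try.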
-
Hello, let me just say I am a big fan of the code in this repo. I noticed though that the reference implementation you use for the GAN example is from November of 2015. I'd like to suggest that the co…
-
I saw that in the official GitHub repo for that paper:
https://github.com/igul222/improved_wgan_training/blob/master/gan_mnist.py#L86
Does it make any difference?
-
Hello, how can I achieve the same results as on your official website? My current training results are not continuous in time. Can you help me? What should I do? Looking forward to your reply. Thanks.