-
I have a question about the discriminator construction. In this implementation, the final number of channels is 1, produced by a convolutional layer. However, in others, e.g., "improved wgan", the fin…
-
[Wasserstein GANs](https://arxiv.org/abs/1701.07875) are supposed to fix some of the challenges of GANs.
- No more vanishing gradients!
- No more mode collapse!
- The loss function is more meaningful.
…
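In the original WGAN these properties come from training a critic whose loss is a difference of means rather than a log-likelihood, with weight clipping to (approximately) enforce the Lipschitz constraint. A minimal numpy sketch of those two pieces (function names are illustrative, not from the repo):

```python
import numpy as np

def critic_loss(real_scores, fake_scores):
    # The critic maximizes E[f(real)] - E[f(fake)]; in practice we
    # minimize the negation, E[f(fake)] - E[f(real)].
    return fake_scores.mean() - real_scores.mean()

def clip_weights(weights, c=0.01):
    # Original WGAN enforces the Lipschitz constraint by clipping
    # every weight into [-c, c] after each critic update.
    return [np.clip(w, -c, c) for w in weights]
```

Because the loss is a difference of unbounded scores, its value tracks an estimate of the Wasserstein distance, which is why it is more meaningful as a training signal than a saturating cross-entropy.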
-
Why is the output of the patch discriminator (patch-D) the average value over all patches, rather than the binary cross-entropy of the per-patch classification computed with BCELoss, as in the common PatchGAN?
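The difference comes from the loss family: a WGAN critic outputs unbounded scores, so its per-patch outputs can simply be averaged, whereas the standard PatchGAN squashes each patch to a probability and applies BCE per patch. A rough numpy sketch of the two conventions (function names are mine, not the repo's):

```python
import numpy as np

def patch_wgan_critic_loss(patch_scores_real, patch_scores_fake):
    # WGAN-style patch critic: scores are unbounded reals, so the
    # loss is just the mean over all patches (and the batch).
    return patch_scores_fake.mean() - patch_scores_real.mean()

def patch_bce_loss(patch_probs, target):
    # Standard PatchGAN: each patch output is a probability in (0, 1);
    # BCE is computed per patch and then averaged.
    eps = 1e-7
    p = np.clip(patch_probs, eps, 1 - eps)
    return -(target * np.log(p) + (1 - target) * np.log(1 - p)).mean()
```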
-
I wonder if the tips in https://github.com/soumith/ganhacks work under WGAN-GP as well, namely those in Sec. 10. Specifically, I would like to confirm whether the following is correct:
* D loss is a large…
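For reference, WGAN-GP replaces weight clipping with a gradient penalty on points interpolated between real and fake samples, which changes the typical magnitude and sign of the D loss those tips assume. A hedged PyTorch sketch of the penalty term (assumes 2-D `(batch, features)` inputs; names are illustrative, not the repo's code):

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    # Sample uniform interpolation points between real and fake batches.
    eps = torch.rand(real.size(0), 1).expand_as(real)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    # Gradient of the critic's scores w.r.t. the interpolated inputs;
    # create_graph=True so the penalty itself is differentiable.
    grads, = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True)
    grad_norm = grads.reshape(grads.size(0), -1).norm(2, dim=1)
    # Penalize deviation of the gradient norm from 1 (1-Lipschitz target).
    return lam * ((grad_norm - 1) ** 2).mean()
```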
-
See our paper and Improved Techniques for Training GANs (in the repo)
-
Hi, thanks for sharing your work.
I am trying to follow your work on language generation, but the code covers binary MNIST only.
Could you share the language version, or give me some hints to implement…
-
Hi,
I just started learning Keras and GANs. Fortunately, I found your code, which inspires me very much, but there is one point I cannot figure out: in the original paper the Wasserstein loss is E(fake) − E…
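For what it's worth, many Keras implementations realize the E(fake) − E(real) objective indirectly through a single loss, `mean(y_true * y_pred)`, by labeling real samples −1 and fake samples +1 (the sign convention varies between implementations). A small numpy sketch of that trick, with an illustrative function name:

```python
import numpy as np

def wasserstein_loss(y_true, y_pred):
    # With real labels = -1 and fake labels = +1, summing this loss
    # over a real batch and a fake batch gives E[fake] - E[real],
    # i.e. the critic loss from the WGAN paper, up to the chosen signs.
    return np.mean(y_true * y_pred)
```

So the apparent sign discrepancy is usually just the label convention, not a different objective.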
-
While running the code in a Python 3 environment, I am getting this error: RuntimeError: Mismatch in shape: grad_output[0] has a shape of torch.Size([1]) and output[0] has a shape of torch.Size([]).
…
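This mismatch usually means an explicit gradient of shape `[1]` (e.g. the old `torch.FloatTensor([1])` idiom) is being passed to `backward()` while newer PyTorch reduces the loss to a 0-dim scalar. A minimal sketch of the failing pattern and the usual fix (illustrative code, not the repo's):

```python
import torch

scores = torch.randn(4, requires_grad=True)
loss = scores.mean()  # 0-dim scalar in modern PyTorch

# loss.backward(torch.FloatTensor([1]))  # shape [1] vs [] -> RuntimeError
loss.backward()  # a scalar loss needs no gradient argument
# Equivalently, match the shape explicitly:
# loss.backward(torch.ones_like(loss))

assert scores.grad.shape == scores.shape
```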
-
I use the ImageNet 64x64 dataset, and after running the program, the result is that many identical generated samples appear within one batch (all tiled into one big picture), like the pictures that I uploaded,…