igul222 / improved_wgan_training

Code for reproducing experiments in "Improved Training of Wasserstein GANs"
MIT License
2.35k stars 668 forks

This code is seriously outdated #80

Open ylmao opened 6 years ago

ylmao commented 6 years ago

It seems that the authors of this paper no longer maintain their code. Could anyone share an updated version of it? That would help us a lot. Thanks!!!

zhulingchen commented 5 years ago

Hi ylmao,

I implemented my version of Info-Wasserstein-GAN-Gradient Penalty-Consistency Term (Info-WGAN-GP-CT) in my repository: https://github.com/zhulingchen/deep-generative-models with Keras and TensorFlow (tested on Ubuntu 18.04 LTS, CUDA 9.2, cuDNN 7.2.1, natively compiled TensorFlow 1.10, Keras 2.2.2).

Note that the "Consistency Term" comes from a recent ICLR 2018 paper: X. Wei, Z. Liu, L. Wang, and B. Gong, "Improving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect", International Conference on Learning Representations (ICLR), 2018.
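For anyone porting this repo to a newer stack, the core addition of WGAN-GP is the gradient-penalty term evaluated at points interpolated between real and fake samples. Below is a minimal NumPy sketch of just that term; it uses a linear critic `f(x) = w @ x` so the input gradient is analytic (simply `w`) and no autodiff framework is needed. All names and sizes here are illustrative, not from the repo.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, batch, gp_lambda = 8, 16, 10.0  # gp_lambda = 10 as in the paper

w = rng.normal(size=dim)              # linear critic weights (stand-in for a network)
real = rng.normal(size=(batch, dim))  # a batch of real samples
fake = rng.normal(size=(batch, dim))  # a batch of generated samples

# Interpolate uniformly between real and fake, one epsilon per sample.
eps = rng.uniform(size=(batch, 1))
x_hat = eps * real + (1.0 - eps) * fake

# For the linear critic f(x) = w @ x, grad_x f(x_hat) = w for every sample;
# a real implementation would obtain this via autodiff (e.g. tf.GradientTape).
grads = np.broadcast_to(w, x_hat.shape)
grad_norms = np.linalg.norm(grads, axis=1)

# Penalty pushes the gradient norm toward 1 (soft 1-Lipschitz constraint).
gradient_penalty = gp_lambda * np.mean((grad_norms - 1.0) ** 2)
print(float(gradient_penalty))
```

This term is added to the critic loss; the consistency term from Wei et al. is a further regularizer evaluated at perturbations of the real data, on top of (not instead of) this penalty.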

Thank you!

s-hong-s commented 5 years ago

> I implemented my version of Info-Wasserstein-GAN-Gradient Penalty-Consistency Term (Info-WGAN-GP-CT) in my repository: https://github.com/zhulingchen/deep-generative-models with Keras and TensorFlow (tested on Ubuntu 18.04 LTS, CUDA 9.2, cuDNN 7.2.1, natively compiled TensorFlow 1.10, Keras 2.2.2).

Hello. The provided GitHub link is broken. Are you able to upload the code used for your paper again?

ylmao commented 5 years ago

> Hi ylmao,
>
> I implemented my version of Info-Wasserstein-GAN-Gradient Penalty-Consistency Term (Info-WGAN-GP-CT) in my repository: https://github.com/zhulingchen/deep-generative-models with Keras and TensorFlow (tested on Ubuntu 18.04 LTS, CUDA 9.2, cuDNN 7.2.1, natively compiled TensorFlow 1.10, Keras 2.2.2).
>
> Note that the "Consistency Term" comes from a recent ICLR 2018 paper: X. Wei, Z. Liu, L. Wang, and B. Gong, "Improving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect", International Conference on Learning Representations (ICLR), 2018.
>
> Thank you!

Thanks!