Implementation of Coupled Generative Adversarial Networks (CoGAN) [NIPS 2016]
This implementation differs a bit from the original Caffe code; it basically follows the model architecture design of DCGAN.
CoGAN can learn a joint distribution from samples drawn only from the marginal distributions. This is achieved by enforcing a weight-sharing constraint that limits the network capacity and favors a joint-distribution solution over a product of the marginal distributions.
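As a rough sketch of this weight-sharing idea (a hypothetical fully-connected toy, not the DCGAN-style layers used in this repo), the first generator layers can literally reuse the same TensorFlow variables while only the last, domain-specific layers differ:

```python
import tensorflow as tf

def coupled_generators(z, hidden_dim=1024, out_dim=784):
    # Shared first layer: the SAME variables feed both generators, which is
    # the weight-sharing constraint that couples the two marginals into a
    # joint distribution.
    z_dim = z.get_shape().as_list()[1]
    w_shared = tf.get_variable("g_w_shared", [z_dim, hidden_dim])
    b_shared = tf.get_variable("g_b_shared", [hidden_dim])
    h = tf.nn.relu(tf.matmul(z, w_shared) + b_shared)

    # Domain-specific output layers: separate variables per domain, so each
    # generator renders its own style (e.g. regular vs. inverted MNIST).
    w_top = tf.get_variable("g_w_top", [hidden_dim, out_dim])
    w_bot = tf.get_variable("g_w_bot", [hidden_dim, out_dim])
    g_top = tf.nn.tanh(tf.matmul(h, w_top))
    g_bot = tf.nn.tanh(tf.matmul(h, w_bot))
    return g_top, g_bot

z = tf.placeholder(tf.float32, [None, 100])
g_top, g_bot = coupled_generators(z)  # same z, two coupled outputs
```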
The following figure shows the results reported in the paper:
The following image shows the model architecture described in the paper:
Again: this repo does not currently follow the model architecture in the paper.
First you have to clone this repo:
$ git clone https://github.com/andrewliao11/CoGAN-tensorflow.git
Download the data:
This step automatically downloads the data into the current folder.
$ python download.py mnist
Preprocess (invert) the data:
$ python invert.py
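The inversion step roughly flips the pixel intensities so the second domain becomes negative MNIST; a minimal NumPy sketch of that idea (not the exact invert.py) looks like this:

```python
import numpy as np

def invert_images(images):
    # MNIST pixels are in [0, 255]; subtracting from 255 turns white digits
    # on a black background into black digits on a white background.
    return 255 - images

# Usage on a dummy MNIST-shaped batch
batch = np.random.randint(0, 256, size=(64, 28, 28), dtype=np.uint8)
inverted = invert_images(batch)
```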
Train your CoGAN:
$ python main.py --is_train True
During training, you can see the average losses of the generators and discriminators, which can help with debugging. After training, sample images are saved to ./samples/top and ./samples/bot, respectively.
To visualize the whole training process, you can use TensorBoard:
$ tensorboard --logdir=logs
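The curves shown in TensorBoard come from scalar summaries written to the logs directory during training; a minimal sketch of that pattern (with dummy stand-in losses, not the repo's actual tensors) is:

```python
import tensorflow as tf

# Dummy stand-ins for the real generator/discriminator losses.
g_loss = tf.Variable(1.0, name="g_loss_value")
d_loss = tf.Variable(1.0, name="d_loss_value")

tf.summary.scalar("g_loss", g_loss)
tf.summary.scalar("d_loss", d_loss)
merged = tf.summary.merge_all()

writer = tf.summary.FileWriter("./logs")
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    summary = sess.run(merged)
    writer.add_summary(summary, global_step=0)  # then: tensorboard --logdir=logs
    writer.close()
```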
Model at epoch 1
Model at epoch 5
Model at epoch 24
We can see that without paired information, the network can generate two different images that share the same high-level concept.
Note: to avoid fast convergence of the D (discriminator) network, the G (generator) network is updated twice for each D network update, which differs from the original paper.
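A minimal, self-contained sketch of that update schedule (toy losses and hypothetical optimizer names, not the actual ops in main.py):

```python
import numpy as np
import tensorflow as tf

# Toy stand-ins for the CoGAN losses; in main.py these would be the real
# discriminator and generator losses built from the coupled networks.
z = tf.placeholder(tf.float32, [None, 100])
d_var = tf.Variable(0.0)
g_var = tf.Variable(0.0)
d_loss = tf.reduce_mean(tf.square(tf.reduce_mean(z, 1) - d_var))
g_loss = tf.reduce_mean(tf.square(tf.reduce_mean(z, 1) + g_var))
d_optim = tf.train.AdamOptimizer(2e-4).minimize(d_loss, var_list=[d_var])
g_optim = tf.train.AdamOptimizer(2e-4).minimize(g_loss, var_list=[g_var])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(100):
        batch_z = np.random.uniform(-1, 1, size=(64, 100)).astype(np.float32)
        # One discriminator update...
        sess.run(d_optim, feed_dict={z: batch_z})
        # ...followed by two generator updates, so D does not converge too fast.
        sess.run(g_optim, feed_dict={z: batch_z})
        sess.run(g_optim, feed_dict={z: batch_z})
```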
This code is heavily built on the following repos: