
Variational Auto-Encoder for MNIST

An implementation of the variational auto-encoder (VAE) for MNIST described in the paper Auto-Encoding Variational Bayes by D. P. Kingma and M. Welling (arXiv:1312.6114).

Results

Reproduce

A well-trained VAE should be able to reconstruct its input image.
Figure 5 in the paper shows the reconstruction performance of learned generative models for different latent dimensionalities.
The following results can be reproduced with the command:

python run_main.py --dim_z <each value> --num_epochs 60
Results: input image alongside reconstructions from 2-D, 5-D, 10-D, and 20-D latent spaces.
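Reconstruction quality is governed by the VAE objective: a reconstruction term plus a KL regularizer. Below is a minimal sketch of that loss, written in TensorFlow 2 style for readability (the repository itself targets r0.12, so this is illustrative rather than the repository's code), assuming a Gaussian encoder and a Bernoulli decoder over flattened 28x28 images:

```python
import tensorflow as tf

def vae_loss(x, x_recon, mu, log_var):
    """Negative ELBO for a Gaussian encoder and Bernoulli decoder.

    x, x_recon: (batch, 784) inputs and decoder outputs in [0, 1].
    mu, log_var: (batch, dim_z) parameters of q(z|x).
    """
    eps = 1e-8  # avoid log(0)
    # Reconstruction term: Bernoulli negative log-likelihood per image.
    recon = -tf.reduce_sum(
        x * tf.math.log(x_recon + eps)
        + (1.0 - x) * tf.math.log(1.0 - x_recon + eps),
        axis=1)
    # KL divergence between q(z|x) = N(mu, sigma^2) and the prior N(0, I),
    # in the closed form given in Appendix B of the paper.
    kl = -0.5 * tf.reduce_sum(
        1.0 + log_var - tf.square(mu) - tf.exp(log_var), axis=1)
    return tf.reduce_mean(recon + kl)
```

Minimizing this quantity maximizes the ELBO; a larger dim_z gives the model more capacity to encode each digit, which is why reconstructions sharpen as the latent dimension grows in Figure 5.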

Denoising

During training, salt-and-pepper noise is added to the input images, so that the VAE learns to remove the noise and restore the original input (a sketch of the corruption is given after the results below).
The following results can be reproduced with the command:

python run_main.py --dim_z 20 --add_noise True --num_epochs 40
Results: original input image, input image with noise, and the image restored by the VAE.
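For illustration, salt-and-pepper corruption can be implemented as below. This is a sketch only; the corruption rate is an assumption, not a value taken from run_main.py:

```python
import numpy as np

def salt_and_pepper(x, rate=0.5):
    """Corrupt a fraction `rate` of pixels in a (batch, 784) array in [0, 1].

    Half of the corrupted pixels become 1 (salt), half become 0 (pepper).
    The rate of 0.5 is illustrative, not necessarily the repository's value.
    """
    x_noisy = x.copy()
    corrupt = np.random.rand(*x.shape) < rate  # which pixels to corrupt
    salt = np.random.rand(*x.shape) < 0.5      # salt vs. pepper split
    x_noisy[corrupt & salt] = 1.0
    x_noisy[corrupt & ~salt] = 0.0
    return x_noisy
```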

Learned MNIST manifold

Visualizations of the learned data manifold for generative models with a 2-D latent space are given in Figure 4 of the paper.
The following results can be reproduced with the command:

python run_main.py --dim_z 2 --num_epochs 60 --PMLR True
Results: the learned MNIST manifold and the distribution of labeled data in the latent space.
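The manifold image is typically produced by decoding a regular grid of 2-D latent points mapped through the inverse Gaussian CDF, as in Figure 4 of the paper. A minimal sketch, where `decoder` is a hypothetical callable mapping a (1, 2) latent code to a (28, 28) image in [0, 1]:

```python
import numpy as np
from scipy.stats import norm

def manifold_grid(decoder, n=20):
    # Place latent points on a grid of Gaussian quantiles so the grid
    # covers the prior N(0, I) evenly, then decode each point to a digit.
    quantiles = norm.ppf(np.linspace(0.05, 0.95, n))
    canvas = np.zeros((28 * n, 28 * n))
    for i, yi in enumerate(quantiles):
        for j, xj in enumerate(quantiles):
            z = np.array([[xj, yi]])
            digit = decoder(z)  # hypothetical: returns a (28, 28) array
            canvas[i * 28:(i + 1) * 28, j * 28:(j + 1) * 28] = digit
    return canvas
```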

Usage

Prerequisites

  1. TensorFlow
  2. Python packages: numpy, scipy, PIL (or Pillow), matplotlib

Command

python run_main.py --dim_z <latent vector dimension>

Example: python run_main.py --dim_z 20

Arguments

Required:

--dim_z : dimension of the latent vector z

Optional:

The examples above use --num_epochs (number of training epochs), --add_noise (whether to train as a denoising VAE with salt-and-pepper noise), and --PMLR (whether to plot the learned MNIST manifold; used with --dim_z 2 above). See run_main.py for the full list of arguments and their defaults; a sketch of the corresponding argument parsing is given below.
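The exact parser in run_main.py is not reproduced here; the following is a minimal sketch of how these flags could be declared, with illustrative defaults that are assumptions rather than the repository's values:

```python
import argparse

def str2bool(s):
    # The examples pass booleans as strings (e.g. --add_noise True);
    # plain type=bool would treat the string "False" as truthy.
    return s.lower() in ('true', '1', 'yes')

parser = argparse.ArgumentParser()
parser.add_argument('--dim_z', type=int, required=True,
                    help='dimension of the latent vector z')
parser.add_argument('--num_epochs', type=int, default=20,
                    help='number of training epochs (illustrative default)')
parser.add_argument('--add_noise', type=str2bool, default=False,
                    help='train as a denoising VAE with salt-and-pepper noise')
parser.add_argument('--PMLR', type=str2bool, default=False,
                    help='plot the learned MNIST manifold (use with --dim_z 2)')
args = parser.parse_args()
```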

References

The implementation is based on the following projects:
[1] https://github.com/oduerr/dl_tutorial/tree/master/tensorflow/vae
[2] https://github.com/fastforwardlabs/vae-tf/tree/master
[3] https://github.com/kvfrans/variational-autoencoder
[4] https://github.com/altosaar/vae

Acknowledgements

This implementation has been tested with TensorFlow r0.12 on Windows 10.