This repository implements a variational autoencoder (VAE) in PyTorch, using a pretrained ResNet model as the encoder and a transposed convolutional network as the decoder.
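A minimal sketch of such an architecture, assuming a resnet18 backbone, a 256-dimensional latent space, and 32x32 RGB outputs; the repository's actual layer sizes and backbone may differ:

```python
import torch
import torch.nn as nn
from torchvision import models

class ResNetVAE(nn.Module):
    """Sketch: pretrained ResNet encoder + transposed-conv decoder.
    Layer sizes are illustrative, not the repository's exact configuration."""
    def __init__(self, latent_dim=256):
        super().__init__()
        resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        # Keep everything up to (and including) the global average pool; drop the fc head.
        # Note: ResNet expects 3-channel inputs, so grayscale images (e.g. MNIST)
        # would need their channel replicated.
        self.encoder = nn.Sequential(*list(resnet.children())[:-1])
        self.fc_mu = nn.Linear(resnet.fc.in_features, latent_dim)
        self.fc_logvar = nn.Linear(resnet.fc.in_features, latent_dim)
        self.decoder_input = nn.Linear(latent_dim, 64 * 4 * 4)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # 4x4 -> 8x8
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),  # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, kernel_size=4, stride=2, padding=1),   # 16x16 -> 32x32
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps with eps ~ N(0, I)
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = self.encoder(x).flatten(1)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        x_hat = self.decoder(self.decoder_input(z).view(-1, 64, 4, 4))
        return x_hat, mu, logvar
```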
The MNIST database contains 60,000 training images and 10,000 test images. Each image is stored as a 28x28 grayscale matrix.
The CIFAR-10 dataset consists of 60,000 32x32 colour images in 10 classes, with 6,000 images per class.
The Olivetti faces dataset consists of ten 64x64 images for each of 40 distinct subjects (400 images in total).
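For reference, all three datasets can be loaded as follows. The torchvision and scikit-learn loaders shown here are a common choice, though the repository may fetch the data differently:

```python
from torchvision import datasets, transforms
from sklearn.datasets import fetch_olivetti_faces

to_tensor = transforms.ToTensor()

# MNIST: 60,000 training / 10,000 test images, 28x28 grayscale
mnist_train = datasets.MNIST(root="./data", train=True, download=True, transform=to_tensor)

# CIFAR-10: 60,000 32x32 colour images in 10 classes
cifar_train = datasets.CIFAR10(root="./data", train=True, download=True, transform=to_tensor)

# Olivetti faces: 400 images (ten 64x64 images per subject, 40 subjects)
faces = fetch_olivetti_faces()  # faces.images has shape (400, 64, 64)
```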
A VAE model consists of an encoder–decoder pair. The encoder compresses a 2D image x into a vector z in a lower-dimensional space, commonly called the latent space, while the decoder takes vectors from the latent space and outputs objects in the same space as the encoder's inputs. The training goal is to make the composition of encoder and decoder as close to the identity as possible. Precisely, the loss function is

$$\mathcal{L}(x) = \mathbb{E}_{q(z \mid x)}\big[-\log p(x \mid z)\big] + D_{\mathrm{KL}}\big(q(z \mid x) \,\|\, \mathcal{N}(0, I)\big),$$

where $D_{\mathrm{KL}}$ is the Kullback–Leibler divergence and $\mathcal{N}(0, I)$ is the standard normal distribution. The first term measures how good the reconstruction is, and the second term measures how close the approximate posterior q is to the standard normal distribution. After training, two applications become available. First, the encoder can perform dimensionality reduction. Second, the decoder can reproduce input images, or even generate new ones. We show the results of our experiments at the end.
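As a concrete illustration, here is a minimal sketch of this loss in PyTorch, assuming the decoder outputs pixel values in [0, 1] so that binary cross-entropy serves as the reconstruction term; the repository may use a different reconstruction loss such as MSE:

```python
import torch
import torch.nn.functional as F

def vae_loss(x_hat, x, mu, logvar):
    """Reconstruction term + KL divergence to the standard normal prior."""
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ) for a diagonal Gaussian posterior
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld
```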
During training we save the labels (y), the resulting latent codes (z), the model weights, and the optimizer states.
Run plot_latent.ipynb to see the clustering results.
Run ResNetVAE_reconstruction.ipynb to reproduce or generate images.
Saved optimizer states make it convenient to resume training.
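For example, training could be resumed from the saved states along these lines; the checkpoint filenames here are hypothetical, and ResNetVAE refers to the sketch above:

```python
import torch
from torch import optim

model = ResNetVAE(latent_dim=256)
optimizer = optim.Adam(model.parameters(), lr=1e-3)

# Hypothetical checkpoint paths; the repository's actual filenames may differ.
model.load_state_dict(torch.load("model_epoch20.pth"))
optimizer.load_state_dict(torch.load("optimizer_epoch20.pth"))
model.train()  # resume training with momentum and learning-rate state intact
```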
Since the encoder compresses high-dimensional inputs into a low-dimensional latent space, we can use it to visualize how the data points cluster.
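A sketch of such a visualization, assuming the latent codes and labels saved during training are available as hypothetical z.npy and y.npy files:

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical filenames for the saved latent codes (N, latent_dim) and labels (N,)
z = np.load("z.npy")
y = np.load("y.npy")

# Project the latent codes to 2D and colour the points by class label
z_2d = TSNE(n_components=2).fit_transform(z)
plt.scatter(z_2d[:, 0], z_2d[:, 1], c=y, cmap="tab10", s=5)
plt.colorbar(label="class")
plt.title("t-SNE of VAE latent space")
plt.show()
```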
The decoder reproduces the input images from the latent space. Moreover, it can generate entirely new images that do not appear in the original datasets.
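New images can be generated by decoding samples drawn from the standard normal prior. A sketch, reusing the hypothetical ResNetVAE class from above:

```python
import torch

model.eval()
with torch.no_grad():
    # Sample latent vectors from N(0, I) and decode them into images
    z = torch.randn(16, 256)  # 16 samples; latent_dim assumed to be 256
    new_images = model.decoder(model.decoder_input(z).view(-1, 64, 4, 4))
```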