
Representation-Learning

Representation Learning of Image Data with VAE. Alexander Piehler, Moritz Wagner

Introduction

This GitHub repository was developed by Moritz Wagner and Alexander Piehler in collaboration with Adidas. The project examines the suitability of Variational Autoencoders (VAEs) for representation learning of image data. The main goal was to establish a codebase that allows experiments to be reproduced and tracked via mlflow. The family of Variational Autoencoders is quite large, so we restricted ourselves to the variants we believed would perform best. As additional benchmarks, we also included PCA and a plain Autoencoder. The following models were considered:

├── VAE
├── beta-VAE
├── InfoVAE
├── GaussMixVAE
├── DIP-VAE
├── PCA
├── Autoencoder

For each model, we provide hyperparameter configurations that we found to work well; they can be found in the folder configs/.

Code Structure

This framework is structured as follows:

├── configs
├── data
├── experiments
├── library
├── playground

configs

Contains the config files for each model, organized by data set.
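
The configs are YAML files that are passed to run.py (see Setup below). A minimal sketch of loading and inspecting one, assuming PyYAML is available and using the example path from the command shown at the end of this README:

```python
import yaml

# Load a model/data-set configuration; the path follows the example
# command at the end of this README and may differ in your checkout
with open("configs/MNIST/vae.yaml") as f:
    config = yaml.safe_load(f)

# The exact keys are model-specific; list them to see what can be tuned
print(sorted(config))
```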

data

This folder contains all Python scripts needed to download and preprocess the data for running the models.

experiments

Contains the scripts for running all relevant experiments. To run them in the right order, first change into the directory (cd experiments) and then execute the following steps (a sketch that automates the whole sequence follows the list):

  1. python seed_running.py
  2. python latent_experiment.py
  3. python epochs_experiment.py
  4. python kld_weight_experiment.py
  5. python latent_experiment.py
  6. python tune_models.py
  7. python run_best_configurations.py
  8. python deep_dive.py
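
If you prefer to launch the whole sequence at once, the following sketch (a convenience wrapper of ours, not a script shipped with the repository) runs the scripts in the order given above:

```python
import subprocess

# Experiment scripts in the order listed above; latent_experiment.py
# appears twice, mirroring the original instructions
SCRIPTS = [
    "seed_running.py",
    "latent_experiment.py",
    "epochs_experiment.py",
    "kld_weight_experiment.py",
    "latent_experiment.py",
    "tune_models.py",
    "run_best_configurations.py",
    "deep_dive.py",
]

for script in SCRIPTS:
    # Run each script from within the experiments/ directory and stop
    # immediately if one of them fails
    subprocess.run(["python", script], cwd="experiments", check=True)
```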

library

This is the package for this repository; it stores all functions and (model) classes that are used. The package contains the following modules (an import sketch follows the list):

├── models2
├── architectures.py
├── eval_helpers.py
├── evaluator.py
├── postprocessing.py
├── utils.py
├── visualizer.py
└── viz_helpers.py
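
Once the local package is installed (see Setup below), its modules can be imported directly. A minimal sketch, assuming the package is importable under the name library and exposes the modules exactly as listed above:

```python
# Import a few of the modules listed above; which classes and functions
# each one exposes is best checked in the source itself
from library import architectures, evaluator, visualizer
```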

playground

Contains only some scratch files that are not important for using the framework.

Packaging

Setup

You can set up Representation-Learning as follows. Note that all commands must be run from the parent level of the repository.

  1. Install miniconda
  2. Create a conda environment for Python 3.7 (conda create -n <env name> python=3.7)
  3. Clone this repository
  4. Install the required packages via pip install -r requirements.txt
  5. Install the local package library in editable mode (e.g., pip install -e . from the repository root, assuming a setup script is present)
  6. Download the data by moving to the data folder (cd data) and executing python get_<dataset>_data.py
  7. Run the code via python run.py --config configs/<dataset>/<model>.yaml

Further notes on running experiments

If you want to adjust hyperparameters, you can do so by editing the respective parameters in the config files. Note, however, that the parameter configurations listed are the ones we found to work best for the respective problem. If you run your own experiments, it is advisable to additionally specify run_name and experiment_name; mlflow requires these to log runs adequately. An exemplary command could look as follows: python run.py --config configs/MNIST/vae.yaml --run_name mnist_vae --experiment_name mnist_vae