This repository is by Brandon Amos, Samuel Cohen, Giulia Luise, and Ievgen Redko. It contains the source code, built on JAX and OTT, to reproduce the experiments of our Meta Optimal Transport paper.
Yijiang Pang has posted an unofficial PyTorch re-implementation in the discrete setting here.
After cloning this repository and installing JAX on your system, you can install the remaining dependencies with:
pip install -r requirements.txt
and set up the code with:
python3 setup.py develop
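Optionally, you can sanity-check that the JAX and OTT dependencies resolved correctly. This check is my suggestion, not part of the repository:

import jax
import ott  # OTT-JAX, the optimal transport toolbox this code builds on

print("JAX devices:", jax.devices())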
This code will automatically download the MNIST dataset for training and evaluation. You can run the training code with:
./train_discrete.py data=mnist
This will create an experiment directory that saves out the model and logging information, which you can evaluate and plot with:
./eval_discrete.py <exp_dir>
./plot_mnist.py <exp_dir>
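For context on what train_discrete.py is optimizing: in the discrete setting, Meta OT learns a model that predicts a dual potential for the entropic OT problem between two input measures, and that prediction warm-starts Sinkhorn so it converges in far fewer iterations. Below is a minimal, self-contained JAX sketch of that warm-start mechanism, not the repository's implementation: the learned predictor is replaced by a potential from a longer pre-run of Sinkhorn, and the cost, problem sizes, and epsilon are illustrative.

import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp

def sinkhorn_logspace(C, a, b, eps, f_init, num_iters):
    # Log-domain Sinkhorn updates for the entropic OT dual potentials (f, g),
    # with coupling P_ij = exp((f_i + g_j - C_ij) / eps).
    log_a, log_b = jnp.log(a), jnp.log(b)
    f, g = f_init, jnp.zeros_like(b)
    for _ in range(num_iters):
        g = eps * (log_b - logsumexp((f[:, None] - C) / eps, axis=0))
        f = eps * (log_a - logsumexp((g[None, :] - C) / eps, axis=1))
    return f, g

def marginal_error(C, a, b, eps, f, g):
    # How far the induced coupling is from satisfying both marginals.
    P = jnp.exp((f[:, None] + g[None, :] - C) / eps)
    return jnp.abs(P.sum(axis=1) - a).sum() + jnp.abs(P.sum(axis=0) - b).sum()

n, m, eps = 64, 80, 0.1
x = jax.random.normal(jax.random.PRNGKey(0), (n, 2))
y = jax.random.normal(jax.random.PRNGKey(1), (m, 2))
C = jnp.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)  # squared-Euclidean cost
a = jnp.ones(n) / n
b = jax.random.uniform(jax.random.PRNGKey(2), (m,), minval=0.5, maxval=1.5)
b = b / b.sum()

# Stand-in for a learned prediction f_hat(a, b): here it is just a potential
# from a longer pre-run, NOT the repository's trained meta model.
f_hat, _ = sinkhorn_logspace(C, a, b, eps, jnp.zeros(n), num_iters=100)

for name, f_init in [("cold start", jnp.zeros(n)), ("warm start", f_hat)]:
    f, g = sinkhorn_logspace(C, a, b, eps, f_init, num_iters=5)
    print(name, marginal_error(C, a, b, eps, f, g))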
For the world data experiment, first download the 2020 TIFF data at the 15-minute resolution and save the file to data/pop-15min.tif.
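As a rough sketch of the preprocessing involved (the exact loader and resolution handling in the repository may differ), the population raster can be flattened and normalized into a discrete probability measure. A random array stands in for the data here; in practice it would be read from data/pop-15min.tif with a GeoTIFF reader such as rasterio or tifffile.

import numpy as np

# Stand-in for the 15-minute (0.25 degree) raster: 720 x 1440 grid cells.
# In practice this array would be loaded from data/pop-15min.tif.
population = np.random.rand(720, 1440)
population = np.nan_to_num(population, nan=0.0)  # missing cells carry no mass
population = np.clip(population, 0.0, None)      # clamp any negative fill values

weights = population.ravel()
weights = weights / weights.sum()  # normalize to a probability measure over cells
print(weights.shape, weights.sum())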
Then you can run the training code with:
./train_discrete.py data=world
This will create an experiment directory that saves out the model and logging information, which you can evaluate and plot with:
./eval_discrete.py <exp_dir>
./plot_world_pair.py <exp_dir>
For the color transfer experiment, first download images from WikiArt into data/paintings by running:
./data/download-wikiart.py
Then you can run the training code with:
./train_color_meta.py
This will create an experiment directory that saves out the model and logging information, which you can evaluate and plot with:
./eval_color.py <exp_dir>
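For reference, the color experiments treat each painting as a point cloud of RGB pixels, and the learned map transports one image's color distribution onto another's. A small sketch of that input representation is below; it is my illustration, not the repository's loader, and the file name is hypothetical.

import numpy as np
from PIL import Image

# Hypothetical path: any image downloaded into data/paintings works here.
img = Image.open("data/paintings/example.jpg").convert("RGB")
pixels = np.asarray(img, dtype=np.float32) / 255.0  # (H, W, 3), values in [0, 1]
point_cloud = pixels.reshape(-1, 3)                 # (H*W, 3) RGB point cloud
print(point_cloud.shape)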
Our main video can be re-created by running the following scripts:
./create_video_mnist.py <mnist_exp_dir>
./create_video_world.py <world_exp_dir>
./create_video_color.py <color_exp_dir>
If you find this repository helpful for your publications, please consider citing our paper:
@misc{amos2022meta,
  title={Meta Optimal Transport},
  author={Brandon Amos and Samuel Cohen and Giulia Luise and Ievgen Redko},
  year={2022},
  eprint={2206.05262},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
The source code in this repository is licensed under the CC BY-NC 4.0 License.