This is the authors' official PyTorch implementation of CIDA. This repo contains code for the experiments in the ICML 2020 paper 'Continuously Indexed Domain Adaptation'.
Essentially, CIDA asks whether and how to go beyond the current (categorical) domain adaptation regime, and proposes the first approach that adapts across continuously indexed domains. For example, instead of adapting from domain A to domain B, we would like to simultaneously adapt across infinitely many domains in a manifold. This allows us to go beyond domain adaptation and perform both domain interpolation and domain extrapolation. See the following toy example.
Naturally, categorical domain adaptation works on a finite number of domains, while CIDA works on infinitely many domains. See the following comparison.
For a more visual introduction, feel free to take a look at this video.
If we use domains [1, 6] as source domains and the rest as target domains, below are some sample results from previous domain adaptation methods and from CIDA, where CIDA successfully learns how the decision boundary evolves with the domain index.
We provide a simple yet effective learning framework with theoretical guarantees (see the Theory section at the end of this README). Below is a quick comparison between previous domain adaptation methods and CIDA (differences marked in red).
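To make the difference concrete, here is a minimal NumPy sketch of the two objectives in such an adversarial framework. All names (`W_enc`, `w_pred`, `w_disc`, `lam`) and the linear networks are illustrative stand-ins, not the repo's actual API; the point is that the discriminator *regresses* the continuous domain index u from the encoding z (e.g., with an L2 loss), rather than classifying a finite domain label.

```python
import numpy as np

rng = np.random.default_rng(0)

d_x, d_z = 5, 3                        # toy input / encoding dimensions
W_enc = rng.normal(size=(d_x, d_z))    # encoder E: x -> z (stand-in for a network)
w_pred = rng.normal(size=(d_z, 1))     # predictor F: z -> y_hat
w_disc = rng.normal(size=(d_z, 1))     # discriminator D: z -> u_hat

x = rng.normal(size=(8, d_x))          # a batch of inputs
y = rng.normal(size=(8, 1))            # labels (available for source domains)
u = rng.uniform(1, 6, size=(8, 1))     # continuous domain indices

z = x @ W_enc                          # encodings
pred_loss = float(np.mean((z @ w_pred - y) ** 2))  # task loss on predictions
disc_loss = float(np.mean((z @ w_disc - u) ** 2))  # D regresses u from z (L2)

lam = 0.5  # illustrative trade-off weight
# Minimax game: the encoder minimizes pred_loss - lam * disc_loss (hiding u),
# while the discriminator minimizes disc_loss (recovering u from z).
encoder_objective = pred_loss - lam * disc_loss
discriminator_objective = disc_loss
```

In training, the two objectives would be optimized alternately (or via a gradient-reversal layer), so that at equilibrium the encoding z carries as little information about the domain index u as possible while remaining predictive of y.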
Below are some IPython Notebooks for the experiments. We strongly recommend starting from the simplest case, i.e., Experiments for Toy Datasets (Quarter Circle), to get familiar with the data and settings.
Besides using IPython notebooks, you can also directly run the following command for the Rotating MNIST experiments inside the folder 'rotatingMNIST':
bash run_all_exp.sh
In the intra-dataset setting, we consider both domain extrapolation and domain interpolation (see the figure below).
Denoting the domain index as u and the encoding as z, we have (check the paper for full theorems):
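As a rough paraphrase of what those guarantees look like (a sketch only; see the paper for the exact statements and conditions):

```latex
% Paraphrased sketch, not the exact theorem statements. At the optimum of the
% adversarial game, the CIDA encoding z hides the conditional mean of u:
\mathbb{E}[u \mid \mathbf{z}] = \mathbb{E}[u] \quad \text{for almost all } \mathbf{z},
% and the probabilistic variant (PCIDA) additionally matches the
% conditional variance:
\operatorname{Var}[u \mid \mathbf{z}] = \operatorname{Var}[u].
```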
Graph-Relational Domain Adaptation
Zihao Xu, Hao He, Guang-He Lee, Yuyang Wang, Hao Wang
Tenth International Conference on Learning Representations (ICLR), 2022
[Paper] [Code] [Talk] [Slides]
Continuously Indexed Domain Adaptation
@inproceedings{DBLP:conf/icml/WangHK20,
  author    = {Hao Wang and
               Hao He and
               Dina Katabi},
  title     = {Continuously Indexed Domain Adaptation},
  booktitle = {ICML},
  year      = {2020}
}