This is a PyTorch implementation of the following paper:
Learning to Defer to a Population: A Meta-Learning Approach (Dharmesh Tailor, Aditya Patra, Rajeev Verma, Putra Manggala, Eric Nalisnick; AISTATS 2024)
To create a conda environment l2d with all necessary dependencies, run:
conda env create -f environment.yml
Alternatively, use the following explicit instructions:
conda create --name l2d python=3.9
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia
conda install numpy scipy matplotlib jupyterlab jupyter_console jupyter_client scikit-learn
pip install attrdict
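After installing, a quick sanity check along the following lines can confirm the environment is usable (on a CPU-only machine the CUDA check will simply print False):

```bash
# Activate the environment and print the installed PyTorch build and GPU visibility
conda activate l2d
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```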
To reproduce Figure 3 (varying population diversity) on the image classification tasks, use the following settings:
DATASET: {gtsrb/cifar10/ham10000} (see /data/HAM10000/README.md to set up the HAM10000 dataset)
L2D: {single/pop}
EXPERT_OVERLAP_PROB: {0.1/0.2/0.4/0.6/0.8/0.95}
SEED: random seed (integer)
In the case of cifar10 and ham10000 the networks are warm-started, so we first need to train a stand-alone classifier:
python train_classifier.py --seed=[SEED] --dataset=[DATASET]
Then run (a worked example follows below):
bash train_[DATASET].sh [L2D] [EXPERT_OVERLAP_PROB] train [SEED]
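For example, a single Figure 3 run on cifar10 with the pop variant, expert overlap probability 0.4, and seed 0 might look like the sketch below; these particular values are illustrative, and reproducing the full figure requires sweeping the settings listed above:

```bash
# Illustrative example: cifar10, pop variant, overlap prob 0.4, seed 0
python train_classifier.py --seed=0 --dataset=cifar10   # warm-start classifier (needed for cifar10/ham10000)
bash train_cifar10.sh pop 0.4 train 0                   # [L2D] [EXPERT_OVERLAP_PROB] train [SEED]
```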
To reproduce Figure 4 on CIFAR-20, which additionally includes a method using a conditional neural process with an attention mechanism, use the following settings:
L2D: {single/pop/pop_attn}
EXPERT_OVERLAP_PROB: {0.1/0.2/0.4/0.6/0.8/0.95/1.0}
Again, we first need to pretrain a stand-alone classifier:
python train_classifier.py --seed=[SEED] --dataset=cifar20_100
Then run (see the example below):
bash train_cifar20_100.sh [L2D] [EXPERT_OVERLAP_PROB] train [SEED]
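Analogously, an illustrative single run with the attention-based variant, overlap probability 0.6, and seed 0 would be:

```bash
# Illustrative example: cifar20_100, pop_attn variant, overlap prob 0.6, seed 0
python train_classifier.py --seed=0 --dataset=cifar20_100
bash train_cifar20_100.sh pop_attn 0.6 train 0
```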
This codebase is largely an extension of the codebases of OvA-L2D [Verma & Nalisnick] and learn-to-defer [Mozannar & Sontag]. We also acknowledge code related to the attention mechanism from TNP-pytorch [Nguyen & Grover] and bnp [Lee et al.].
For questions, please open an issue in this repository or contact Dharmesh Tailor.
Please consider citing our conference paper:
@inproceedings{tailor2024learning,
title = {{Learning to Defer to a Population: A Meta-Learning Approach}},
booktitle = {Proceedings of the 27th International Conference on Artificial Intelligence and Statistics},
author = {Tailor, Dharmesh and Patra, Aditya and Verma, Rajeev and Manggala, Putra and Nalisnick, Eric},
year = {2024}
}