
LaBo

Code for the CVPR 2023 paper "Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification" (https://arxiv.org/abs/2211.11158).

Set up environments

We run our experiments using Python 3.9.13. You can install the required packages using:

conda create --name labo python=3.9.13
conda activate labo
pip install -r requirements.txt

You need to modify the source code of Apricot to run the submodular optimization. See details here.
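For context, the submodular optimization selects a subset of candidate concepts. The snippet below is a minimal sketch of facility-location selection with Apricot on placeholder concept embeddings; it only illustrates the kind of selection involved and is not the modified Apricot code this repo requires.

import numpy as np
from apricot import FacilityLocationSelection

# Placeholder: embeddings of candidate concepts (e.g. from a CLIP text encoder),
# shaped (num_candidates, embedding_dim). Random values stand in for real features.
concept_embeddings = np.random.randn(500, 768)

# Greedily pick 50 concepts that best cover the candidate pool under a
# facility-location objective (a submodular function).
selector = FacilityLocationSelection(50, optimizer="lazy")
selector.fit(concept_embeddings)

selected_indices = selector.ranking  # chosen concepts, in greedy order
selected_embeddings = concept_embeddings[selected_indices]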

Directories

Linear Probe

To get the linear probe performance, just run:

sh linear_probe.sh {DATASET} {SHOTS} {CLIP SIZE}

For example, for the flower dataset with 1 shot and the ViT-L/14 image encoder, the command is:

sh linear_probe.sh flower 1 ViT-L/14

The code will automatically encode the images and run a hyperparameter search on the L2 regularization using the dev set. The best validation and test performance will be saved to output/linear_probe/{DATASET}.txt.
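For reference, the snippet below sketches the kind of probe and L2 sweep the script performs, assuming the image features have already been extracted with a CLIP encoder and saved to disk; the file names and the grid of C values are placeholders, not taken from this repo.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical placeholders: CLIP image features and labels for the few-shot
# train split, the dev split used for model selection, and the test split.
train_x, train_y = np.load("train_feats.npy"), np.load("train_labels.npy")
dev_x, dev_y = np.load("dev_feats.npy"), np.load("dev_labels.npy")
test_x, test_y = np.load("test_feats.npy"), np.load("test_labels.npy")

# Sweep the inverse L2-regularization strength C and keep the probe that
# performs best on the dev set.
best_acc, best_clf = -1.0, None
for c in [0.001, 0.01, 0.1, 1, 10, 100, 1000]:
    clf = LogisticRegression(C=c, max_iter=1000).fit(train_x, train_y)
    acc = clf.score(dev_x, dev_y)
    if acc > best_acc:
        best_acc, best_clf = acc, clf

print(f"dev accuracy: {best_acc:.4f}")
print(f"test accuracy: {best_clf.score(test_x, test_y):.4f}")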

LaBo Training

To train LaBo, run the following command:

sh labo_train.sh {SHOTS} {DATASET}
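
For example, to train the 1-shot model on the flower dataset (note that, unlike linear_probe.sh, the shot count comes before the dataset name):

sh labo_train.sh 1 flower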

The training logs will be uploaded to wandb, so you may need to set up your wandb account locally. After training reaches the maximum number of epochs, the checkpoint with the highest validation accuracy and the corresponding config file are saved to exp/asso_opt/{DATASET}/{DATASET}_{SHOT}shot_fac/.
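If wandb has not been set up on this machine before, logging in once with your API key is typically all that is needed (assuming you already have a wandb account):

wandb login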

LaBo Testing

To get the test performance, use the model checkpoint and the corresponding config file saved in exp/asso_opt/{DATASET}/{DATASET}_{SHOT}shot_fac/ and run:

sh labo_test.sh {CONFIG_PATH} {CHECKPOINT_PATH}
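
For example, for the 1-shot flower run above, the command would look like the following (the file names here are illustrative; use the actual config and checkpoint files produced by your training run):

sh labo_test.sh exp/asso_opt/flower/flower_1shot_fac/config.yaml exp/asso_opt/flower/flower_1shot_fac/epoch_best.ckpt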

The test accuracy will be written to output/asso_opt/{DATASET}.txt.

Please cite our paper if you find it useful!

@inproceedings{yang2023language,
  title={Language in a bottle: Language model guided concept bottlenecks for interpretable image classification},
  author={Yang, Yue and Panagopoulou, Artemis and Zhou, Shenghao and Jin, Daniel and Callison-Burch, Chris and Yatskar, Mark},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={19187--19197},
  year={2023}
}