
CL-ALFRED

Online Continual Learning for Interactive Instruction Following Agents
Byeonghwi Kim, Minhyuk Seo, Jonghyun Choi
ICLR 2024
Project Page: https://bhkim94.github.io/projects/CL-ALFRED/

CL-ALFRED is a benchmark for continually learning new types of behaviors and environments for household tasks in ALFRED. It provides two incremental learning setups: Behavior Incremental Learning (Behavior-IL), in which agents learn novel behaviors (task types), and Environment Incremental Learning (Environment-IL), in which agents learn to complete tasks in novel environments.

We provide the code for the baselines and CAMA. The code is built upon i-Blurry and ABP.

Environment

Clone repository

git clone https://github.com/snumprlab/cl-alfred.git
cd cl-alfred
export ALFRED_ROOT=$(pwd)

Install requirements

Because training and evaluation require different Python versions, we need a separate conda environment for each.

# Training environment
conda create -n cl-alfred-train python=3.8
conda activate cl-alfred-train
pip install -r requirements_train.txt
# Evaluation environment
conda create -n cl-alfred-eval python=3.6
conda activate cl-alfred-eval
pip install -r requirements_eval.txt

Install PyTorch

Install PyTorch from the official PyTorch site in both the cl-alfred-train and cl-alfred-eval environments.

conda deactivate
conda activate cl-alfred-train
pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html

conda deactivate
conda activate cl-alfred-eval
pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html
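
To confirm that the CUDA build of PyTorch is picked up, you can optionally run a quick check in each environment (this step is not part of the original instructions and assumes a CUDA-capable GPU and driver are present):

# Optional sanity check: print the installed PyTorch version and whether CUDA is visible.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"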

Dataset Download

Pre-extracted features

Clone the Hugging Face repository into data/json_feat_2.1.0. It includes the numberized annotation files, ResNet-18 features, the vocabulary file, etc.

git clone https://huggingface.co/datasets/byeonghwikim/abp_dataset data/json_feat_2.1.0

Note: The dataset takes quite a lot of space (~1.6TB). FAQ: Why does it take so much space? This is because 1) we use surrounding views (1 → 5 views), and 2) we cache all features of these views, randomized by the image augmentation used in MOCA, for faster training.
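
If the clone finishes quickly but the large feature files are missing, the repository's LFS-tracked files were likely not fetched. A minimal fix, assuming Git LFS is available through your package manager, is to initialize Git LFS before cloning:

# Hugging Face stores large files via Git LFS; initialize it, then clone again.
git lfs install
git clone https://huggingface.co/datasets/byeonghwikim/abp_dataset data/json_feat_2.1.0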

Raw RGB images, depth masks, and segmentation labels (Optional)

We provide zip files containing the raw RGB images (and depth & segmentation masks) in the Hugging Face repository; they take about 250GB in total. With these images, you can extract features yourself with this code, or build a smaller version of the dataset (e.g., using only egocentric views without surrounding views). If you are interested in an egocentric-only version of the dataset, try MOCA, an egocentric-view model.
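
As a rough illustration (not the repository's actual extraction script), unpacking the downloaded zips before extracting features yourself might look like the sketch below; the data/raw_images download directory and the output directory data/full_2.1.0 are assumptions, not paths defined by this repository:

# Sketch: unpack raw RGB images, depth, and segmentation masks from the downloaded zips.
# Both directories below are hypothetical; adjust them to wherever you placed the zips.
mkdir -p data/full_2.1.0
for z in data/raw_images/*.zip; do
    unzip -q "$z" -d data/full_2.1.0
done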

Training

First, activate the training environment cl-alfred-train.

conda deactivate
conda activate cl-alfred-train

To train a model, run train_seq2seq.py with the hyper-parameters below.

For example, if you want to train CAMA for the Behavior-IL setup with stream seed 1 and save the weights to exp/behavior_il/cama/s1, the command may look like the one below.

python models/train/train_seq2seq.py        \
    --incremental_setup behavior_il         \
    --mode cama                             \
    --stream_seed 1                         \
    --dout exp/behavior_il/cama/s1
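
To sweep several runs, the same flags can be wrapped in a simple shell loop. The sketch below trains CAMA on both setups with stream seeds 1-3; the seed range and the environment_il value for --incremental_setup are assumptions (the example above only shows behavior_il with seed 1):

# Sketch: train CAMA on both incremental setups with stream seeds 1-3 (assumed values).
for setup in behavior_il environment_il; do
    for seed in 1 2 3; do
        python models/train/train_seq2seq.py \
            --incremental_setup $setup       \
            --mode cama                      \
            --stream_seed $seed              \
            --dout exp/$setup/cama/s$seed
    done
done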

Evaluation

First, activate the evaluation environment cl-alfred-eval.

conda deactivate
conda activate cl-alfred-eval

To evaluate a model, run eval_seq2seq.py with the hyper-parameters below.

For example, to evaluate a model saved at exp/behavior_il/cama/s1/net_epoch_000002251_look_at_obj_in_light.pth on the seen validation split for the current task look_at_obj_in_light in the Behavior-IL setup trained with stream seed 1, you can use the command below.

python models/eval/eval_seq2seq.py                                                    \
    --model_path exp/behavior_il/cama/s1/net_epoch_000002251_look_at_obj_in_light.pth \
    --eval_split valid_seen                                                           \
    --incremental_setup behavior_il                                                   \
    --incremental_type look_at_obj_in_light                                           \
    --stream_seed 1                                                                   \
    --num_threads 3                                                                   \
    --x_display 1                                                                     \
    --gpu

Note: Set x_display to an available display number on your machine.
Note: Adjust num_threads based on your system.
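
If you also want results on the unseen validation split, the same command can be looped over both splits. This is a sketch; the valid_unseen split name follows the ALFRED convention and is an assumption here:

# Sketch: evaluate the same checkpoint on both validation splits
# (valid_unseen is assumed to follow ALFRED's split naming).
for split in valid_seen valid_unseen; do
    python models/eval/eval_seq2seq.py                                                    \
        --model_path exp/behavior_il/cama/s1/net_epoch_000002251_look_at_obj_in_light.pth \
        --eval_split $split                                                               \
        --incremental_setup behavior_il                                                   \
        --incremental_type look_at_obj_in_light                                           \
        --stream_seed 1                                                                   \
        --num_threads 3                                                                   \
        --x_display 1                                                                     \
        --gpu
done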

Hardware

Trained and tested on:

License

GNU GENERAL PUBLIC LICENSE

Citation

CL-ALFRED

@inproceedings{kim2024online,
  title={Online Continual Learning for Interactive Instruction Following Agents},
  author={Kim, Byeonghwi and Seo, Minhyuk and Choi, Jonghyun},
  booktitle={ICLR},
  year={2024}
}

i-Blurry

@inproceedings{koh2022online,
  title={Online Continual Learning on Class Incremental Blurry Task Configuration with Anytime Inference},
  author={Koh, Hyunseo and Kim, Dahyun and Ha, Jung-Woo and Choi, Jonghyun},
  booktitle={ICLR},
  year={2022}
}

ABP

@inproceedings{kim2021agent,
  author    = {Kim, Byeonghwi and Bhambri, Suvaansh and Singh, Kunal Pratap and Mottaghi, Roozbeh and Choi, Jonghyun},
  title     = {Agent with the Big Picture: Perceiving Surroundings for Interactive Instruction Following},
  booktitle = {Embodied AI Workshop @ CVPR 2021},
  year      = {2021},
}

ALFRED

@inproceedings{ALFRED20,
  title ={{ALFRED: A Benchmark for Interpreting Grounded
           Instructions for Everyday Tasks}},
  author={Mohit Shridhar and Jesse Thomason and Daniel Gordon and Yonatan Bisk and
          Winson Han and Roozbeh Mottaghi and Luke Zettlemoyer and Dieter Fox},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2020},
  url  = {https://arxiv.org/abs/1912.01734}
}

Acknowledgment

This work was partly supported by the NRF grant (No.2022R1A2C4002300, 15%) and IITP grants (No.2020-0-01361 (10%, Yonsei AI), No.2021-0-01343 (5%, SNU AI), No.2022-0-00077 (10%), No.2022-0-00113 (20%), No.2022-0-00959 (15%), No.2022-0-00871 (15%), No.2021-0-02068 (5%, AI Innov. Hub), No.2022-0-00951 (5%)) funded by the Korea government (MSIT).