https://handover-sim.github.io

Handover-Sim

Handover-Sim is a Python-based simulation environment and benchmark for human-to-robot object handovers. The environment and benchmark were initially described in an ICRA 2022 paper:

HandoverSim: A Simulation Framework and Benchmark for Human-to-Robot Object Handovers
Yu-Wei Chao, Chris Paxton, Yu Xiang, Wei Yang, Balakumar Sundaralingam, Tao Chen, Adithyavairavan Murali, Maya Cakmak, Dieter Fox
IEEE International Conference on Robotics and Automation (ICRA), 2022
[ paper ] [ video ] [ arXiv ] [ project site ]

Citing Handover-Sim

@INPROCEEDINGS{chao:icra2022,
  author    = {Yu-Wei Chao and Chris Paxton and Yu Xiang and Wei Yang and Balakumar Sundaralingam and Tao Chen and Adithyavairavan Murali and Maya Cakmak and Dieter Fox},
  booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
  title     = {{HandoverSim}: A Simulation Framework and Benchmark for Human-to-Robot Object Handovers},
  year      = {2022},
}

License

Handover-Sim is released under the BSD 3-Clause License.

Acknowledgements

This repo is based on a Python project template created by Rowland O'Flaherty.

Contents

  1. Prerequisites
  2. Installation
  3. Running Demos
  4. Benchmarking Baselines
    1. Yang et al. ICRA 2021
    2. OMG Planner
    3. GA-DDPG
  5. Evaluation
  6. Reproducing ICRA 2022 Results
  7. Rendering from Results and Saving Renderings

Prerequisites

This code is tested with Python 3.8 on Ubuntu 20.04.

Installation

As good practice for Python package management, we recommend installing the package into a virtual environment (e.g., virtualenv or conda).
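For example, a minimal environment setup using Python's built-in venv module (the environment name handover-env is arbitrary, not part of the repo; with conda you would instead create an environment pinned to Python 3.8 per the Prerequisites):

```shell
# Create and activate an isolated environment for the install steps below.
# "handover-env" is an arbitrary name chosen for this sketch.
python3 -m venv handover-env
source handover-env/bin/activate
```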

  1. Clone the repo with --recursive and cd into it:

    git clone --recursive https://github.com/NVlabs/handover-sim.git
    cd handover-sim
  2. Install handover-sim and submodule mano_pybullet as Python packages:

    pip install -e .
    pip install --no-deps -e ./mano_pybullet
  3. Download MANO models and code (mano_v1_2.zip) from the MANO website and place the file under handover/data/. Unzip with:

    cd handover/data
    unzip mano_v1_2.zip
    cd ../..

    This will extract a folder handover/data/mano_v1_2/.

  4. Download the DexYCB dataset.

    Option 1 (recommended): Download the cached dataset:

    1. Download dex-ycb-cache-20220323.tar.gz (507M) and place the file under handover/data/. Extract with:

      cd handover/data
      tar zxvf dex-ycb-cache-20220323.tar.gz
      cd ../..

      This will extract a folder handover/data/dex-ycb-cache/.

    Option 2: Download full dataset and cache the data:

    1. Download the DexYCB dataset from the DexYCB project site.

    2. Set the environment variable for dataset path:

      export DEX_YCB_DIR=/path/to/dex-ycb

      $DEX_YCB_DIR should be a folder with the following structure:

      ├── 20200709-subject-01/
      ├── 20200813-subject-02/
      ├── ...
      ├── calibration/
      └── models/
    3. Cache the dataset:

      python handover/data/cache_dex_ycb_data.py

      The cached dataset will be saved to handover/data/dex-ycb-cache/.

  5. Compile assets.

    1. Download assets-3rd-party-20220511.tar.gz (155M) and place the file under handover/data/. Extract with:

      cd handover/data
      tar zxvf assets-3rd-party-20220511.tar.gz
      cd ../..

      This will extract a folder handover/data/assets/ with 3rd party assets. See handover/data/README.md for the source of these assets.

    2. Compile assets:

      ./handover/data/compile_assets.sh

      The compiled assets will be saved to handover/data/assets/.
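Once steps 3 through 5 are done, a quick sanity check that the expected data folders are in place (a sketch; it only tests for the folder names extracted in the steps above):

```shell
# Report which of the required data folders exist under handover/data/.
for d in handover/data/mano_v1_2 handover/data/dex-ycb-cache handover/data/assets; do
  if [ -d "$d" ]; then echo "ok: $d"; else echo "missing: $d"; fi
done
```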

Running Demos

  1. Running a handover environment:

    python examples/demo_handover_env.py \
      SIM.RENDER True
  2. Running a planned trajectory:

    python examples/demo_trajectory.py \
      SIM.RENDER True
  3. Running a benchmark wrapper:

    python examples/demo_benchmark_wrapper.py \
      SIM.RENDER True \
      BENCHMARK.DRAW_GOAL True

    This will run the same trajectory as in demo_trajectory.py above but will also draw the goal region in the visualizer window and print out the benchmark status in the terminal.

Benchmarking Baselines

We benchmarked three baselines on Handover-Sim:

  1. Yang et al. ICRA 2021 - arXiv
  2. OMG Planner - GitHub
  3. GA-DDPG - GitHub
(Demo videos: OMG Planner; Yang et al. ICRA 2021; GA-DDPG (hold); GA-DDPG (w/o hold).)

As described in the paper Sec. IV "Training and Evaluation Setup", we divide the data into different setups (s0, s1, s2, s3) and splits (train, val, test). We benchmarked these baselines on the test split of each setup.

Below we provide instructions for setting up and running the benchmark for each of these baselines.

Yang et al. ICRA 2021

OMG Planner

GA-DDPG

Evaluation

Reproducing ICRA 2022 Results

We provide the result folders of the benchmarks reported in the ICRA 2022 paper. You can run evaluation on these folders and reproduce the exact numbers in the paper.

To run the evaluation, first download the ICRA 2022 results:

./results/fetch_icra2022_results.sh

This will extract a folder results/icra2022_results/ containing the result folders.

You can now run evaluation on these result folders. For example, for Yang et al. ICRA 2021 on s0, run:

python examples/evaluate_benchmark.py \
  --res_dir results/icra2022_results/2022-02-28_08-57-34_yang-icra2021_s0_test

You should see exactly the same result as shown in the example in the Evaluation section.

The full set of evaluation commands can be found in examples/all_icra2022_results_eval.sh.
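To sweep evaluation over every downloaded result folder rather than one at a time, a small helper along these lines can generate the commands (a sketch; it simply globs the folders extracted by the fetch script and prints the commands as a dry run):

```shell
# Print one evaluation command per result folder (dry run);
# pipe the output to `sh` to actually execute them.
list_eval_cmds() {
  for res_dir in "$1"/*/; do
    [ -d "$res_dir" ] || continue
    echo "python examples/evaluate_benchmark.py --res_dir ${res_dir%/}"
  done
}
# Example: list_eval_cmds results/icra2022_results | sh
```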

Rendering from Results and Saving Renderings