
Official code for CVPR'23 paper: Learning Human-to-Robot Handovers from Point Clouds
https://handover-sim2real.github.io

Handover-Sim2Real

Handover-Sim2Real is the official code for the following CVPR 2023 paper:

Learning Human-to-Robot Handovers from Point Clouds
Sammy Christen, Wei Yang, Claudia Pérez-D'Arpino, Otmar Hilliges, Dieter Fox, Yu-Wei Chao
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023
[ arXiv ] [ video ] [ project site ]

Citing Handover-Sim2Real

@INPROCEEDINGS{christen:cvpr2023,
  author    = {Sammy Christen and Wei Yang and Claudia P\'{e}rez-D'Arpino and Otmar Hilliges and Dieter Fox and Yu-Wei Chao},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  title     = {Learning Human-to-Robot Handovers from Point Clouds},
  year      = {2023},
}

License

Handover-Sim2Real is released under the NVIDIA License.

The pre-trained models are licensed under CC BY-NC-SA 4.0.

Acknowledgements

This repo is based on a Python project template created by Rowland O'Flaherty.

Contents

  1. Prerequisites
  2. Installation
  3. Quick Demo with Pre-trained Model
  4. Training
    1. Reproducibility and Ray
  5. Testing
  6. Evaluation
  7. Reproducing CVPR 2023 Results
  8. Rendering from Result and Saving Rendering
  9. Beyond s0 Setup

Prerequisites

This code is tested with Python 3.8 on Ubuntu 20.04.

Installation

As good practice for Python package management, we recommend installing the package into a virtual environment (e.g., virtualenv or conda).
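For example, a virtualenv can be created with Python's standard venv module (the environment name .venv below is arbitrary):

```shell
# Create and activate a virtual environment for the installation.
python3 -m venv .venv
. .venv/bin/activate
```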

First, clone the repo with --recursive and cd into it:

git clone --recursive https://github.com/NVlabs/handover-sim2real.git
cd handover-sim2real

Installation consists of four modules:

  1. handover-sim2real (main repo)
  2. handover-sim (submodule)
  3. GA-DDPG (submodule)
  4. OMG-Planner (submodule): Can be skipped if you are not running Training.

Below are the step-by-step installation commands:

  1. handover-sim2real (main repo)

    # Install handover-sim2real as Python package.
    pip install -e .
  2. handover-sim (submodule)

    Before running the commands below, download MANO models and code (mano_v1_2.zip) from the MANO website and place the file under handover-sim/handover/data/.

    cd handover-sim
    
    # Install handover-sim and submodule mano_pybullet as Python package.
    pip install --no-deps -e .
    pip install --no-deps -e ./mano_pybullet
    
    cd handover/data
    
    # Unzip mano_v1_2.zip.
    unzip mano_v1_2.zip
    
    # Download DexYCB dataset.
    gdown 1Jqe2iqI7inoEdE3BL4vEs25eT5M7aUHd
    tar zxvf dex-ycb-cache-20220323.tar.gz
    
    # Compile assets.
    gdown 1tDiXvW5vwJDOCgK61VEsFaZ7Z00gF0vj
    tar zxvf assets-3rd-party-20220511.tar.gz
    cd ../..
    ./handover/data/compile_assets.sh
    
    cd ..

    For more details, see the handover-sim repo.

  3. GA-DDPG (submodule)

    cd GA-DDPG
    
    # Install Pointnet2_PyTorch as Python package.
    git clone https://github.com/liruiw/Pointnet2_PyTorch
    cd Pointnet2_PyTorch
    git checkout dabe33a
    pip install --no-deps -e ./pointnet2_ops_lib
    cd ..
    
    # Download data.
    gdown 136rLjyjFFRMyVxUZT6txB5XR2Ct_LNWC
    unzip shared_data.zip -d data
    
    cd ..

    For more details, see the GA-DDPG repo.

  4. OMG-Planner (submodule): Can be skipped if you are not running Training.

    # Install Ubuntu packages.
    # - libassimp-dev is required for pyassimp.
    # - libegl-dev is required for ycb_renderer.
    # - libgles2 is required for ycb_renderer.
    # - libglib2.0-0 is required for opencv-python.
    # - libxslt1-dev is required for lxml.
    apt install \
        libassimp-dev \
        libegl-dev \
        libgles2 \
        libglib2.0-0 \
        libxslt1-dev
    
    cd OMG-Planner
    
    # Install ycb_render.
    cd ycb_render
    python setup.py develop
    cd ..
    
    # Install eigen.
    git clone https://gitlab.com/libeigen/eigen.git
    cd eigen
    git checkout 3.4.0
    mkdir -p release && mkdir -p build && cd build
    cmake .. \
      -DCMAKE_INSTALL_PREFIX=$( cd ../release && pwd )
    make -j8
    make install
    cd ../..
    
    # Install Sophus.
    cd Sophus
    mkdir -p release && mkdir -p build && cd build
    cmake .. \
      -DCMAKE_INSTALL_PREFIX=$( cd ../release && pwd ) \
      -DEIGEN3_INCLUDE_DIR=$( cd ../../eigen/release/include/eigen3 && pwd )
    make -j8
    make install
    cd ../..
    
    # Install layers.
    cd layers
    sed -i "s@/usr/local/include/eigen3\", \"/usr/local/include@$( cd ../eigen/release/include/eigen3 && pwd )\", \"$( cd ../Sophus/release/include && pwd )@g" setup.py
    python setup.py install
    cd ..
    
    # Install PyKDL.
    cd orocos_kinematics_dynamics
    cd sip-4.19.3
    python configure.py
    make -j8
    make install
    cd ../orocos_kdl
    mkdir -p release && mkdir -p build && cd build
    cmake .. \
      -DCMAKE_INSTALL_PREFIX=$( cd ../release && pwd ) \
      -DEIGEN3_INCLUDE_DIR=$( cd ../../../eigen/release/include/eigen3 && pwd )
    make -j8
    make install
    cd ../../python_orocos_kdl
    mkdir -p build && cd build
    # ** IF YOU USE VIRTUALENV: USE $VIRTUAL_ENV BELOW **
    # ** IF YOU USE CONDA: REMOVE THE -DPYTHON_EXECUTABLE FLAG **
    # ** IF YOU USE NEITHER VIRTUALENV NOR CONDA: YOU MAY NEED TO EDIT -DPYTHON_EXECUTABLE **
    cmake .. \
      -DPYTHON_EXECUTABLE=$VIRTUAL_ENV/bin/python \
      -DCMAKE_PREFIX_PATH=$( cd ../../orocos_kdl/release && pwd )
    make -j8
    # ** IF YOU USE CONDA: REPLACE $VIRTUAL_ENV WITH $CONDA_PREFIX **
    cp PyKDL.so $VIRTUAL_ENV/lib/python3.8/site-packages
    cd ../../..
    
    # Download data.
    gdown 1tHPAQ2aPdkp8cwtFP4gs4wdcP02jfGpH
    unzip data.zip
    
    cd ..

    For more details, see the OMG-Planner repo.

Quick Demo with Pre-trained Model

Download the CVPR 2023 pre-trained models and grasp predictor:

# Download CVPR 2023 models.
./output/fetch_cvpr2023_models.sh

# Download grasp predictor.
./output/fetch_grasp_trigger_PRE_2.sh

Run:

GADDPG_DIR=GA-DDPG CUDA_VISIBLE_DEVICES=0 python examples/test.py \
  --model-dir output/cvpr2023_models/2022-10-16_08-48-30_finetune_5_s0_train \
  --without-hold \
  SIM.RENDER True \
  SIM.INIT_VIEWER_CAMERA_POSITION "(+1.6947, -0.1000, +1.6739)" \
  SIM.INIT_VIEWER_CAMERA_TARGET "(+0.0200, -0.1000, +0.9100)"

This will:

  1. Open a visualizer window.
  2. Go through each scene in the test split of s0 (see handover-sim for more details).
  3. Execute the actions generated by the without-hold (aka simultaneous) policy using the pre-trained model output/cvpr2023_models/2022-10-16_08-48-30_finetune_5_s0_train.

Training

We follow the protocol of the handover-sim benchmark for training and testing. Below we show how to train a model on the train split of the s0 setup.

The training process comprises two stages:

  1. pretraining.
  2. finetuning.

See the CVPR 2023 paper for more details.
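Following the pattern of the commands in the Beyond s0 Setup section (s0 is the default BENCHMARK.SETUP, so the flag can be omitted here), a full training run on s0 can be sketched as below. The --pretrained-dir folder name is hypothetical: the pretraining output folder is timestamped, so substitute the one generated by your own run.

```shell
# Stage 1: pretraining with random seed 1 on the default s0 setup.
GADDPG_DIR=GA-DDPG OMG_PLANNER_DIR=OMG-Planner CUDA_VISIBLE_DEVICES=0 python examples/train.py \
  --cfg-file examples/pretrain.yaml \
  --seed 1 \
  --use-ray

# Stage 2: finetuning with random seed 1. Replace the --pretrained-dir value
# with the output folder generated by your pretraining run.
GADDPG_DIR=GA-DDPG OMG_PLANNER_DIR=OMG-Planner CUDA_VISIBLE_DEVICES=0 python examples/train.py \
  --cfg-file examples/finetune.yaml \
  --seed 1 \
  --use-ray \
  --use-grasp-predictor \
  --pretrained-dir output/2023-10-18_00-00-00_pretrain_1_s0_train
```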

Reproducibility and Ray

As described above, we use Ray to speed up training by spawning multiple worker processes. However, this also makes data collection asynchronous and therefore the training process non-deterministic.

For development and debugging, it may be useful to enforce reproducibility, so we also provide a way to run training with Ray disabled. Note that the job will take longer to complete, since a single process must then run data collection alongside the other training routines.

To disable Ray, simply remove the --use-ray flag from the training commands.
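For instance, a single-process, reproducible pretraining run is the same command as before with --use-ray dropped (seed 1 here as an example):

```shell
# Deterministic pretraining: identical to the Ray-enabled command,
# but without the --use-ray flag.
GADDPG_DIR=GA-DDPG OMG_PLANNER_DIR=OMG-Planner CUDA_VISIBLE_DEVICES=0 python examples/train.py \
  --cfg-file examples/pretrain.yaml \
  --seed 1
```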

Testing

Again, we follow the protocol of the handover-sim benchmark for training and testing. Below we show how to test a trained model on the test split of the s0 setup.

We test with two settings for the policy:

  1. hold (aka sequential, same as the setting in the pretraining stage).
  2. without-hold (aka simultaneous, same as the setting in the finetuning stage).

See the CVPR 2023 paper Sec. 5.1 "Simulation Evaluation" for more details.

We first provide an example of testing with the CVPR 2023 pre-trained models.

Besides testing the pre-trained models, you can also test your own model trained in the Training section. All you need to do is set the --model-dir argument accordingly.
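For example, testing the pre-trained model downloaded in the Quick Demo section on the s0 test split, with results saved for later evaluation, could look like the following (the --name value is an arbitrary run label, assumed here by analogy with the Beyond s0 Setup commands):

```shell
# "hold" (aka sequential) setting.
GADDPG_DIR=GA-DDPG CUDA_VISIBLE_DEVICES=0 python examples/test.py \
  --model-dir output/cvpr2023_models/2022-10-16_08-48-30_finetune_5_s0_train \
  --name finetune_5 \
  BENCHMARK.SAVE_RESULT True

# "without-hold" (aka simultaneous) setting: add the --without-hold flag.
GADDPG_DIR=GA-DDPG CUDA_VISIBLE_DEVICES=0 python examples/test.py \
  --model-dir output/cvpr2023_models/2022-10-16_08-48-30_finetune_5_s0_train \
  --without-hold \
  --name finetune_5 \
  BENCHMARK.SAVE_RESULT True
```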

Evaluation

We use the same code from handover-sim for evaluation. Also see their Evaluation section.

To evaluate the result of a testing run, all you need is the result folder generated from running the benchmark. For example, if your result folder is results/2022-11-09_17-55-43_handover-sim2real-wo-hold_finetune_5_s0_test/, run the following command:

python handover-sim/examples/evaluate_benchmark.py \
  --res_dir results/2022-11-09_17-55-43_handover-sim2real-wo-hold_finetune_5_s0_test

You should see an output similar to the following in the terminal:

```
2023-04-14 07:09:09: Running evaluation for results/2022-11-09_17-55-43_handover-sim2real-wo-hold_finetune_5_s0_test
2023-04-14 07:09:09: Evaluation results:
|  success rate   |    mean accum time (s)    |                    failure (%)                     |
|      (%)        |  exec  |  plan  |  total  |  hand contact   |   object drop   |    timeout     |
|:---------------:|:------:|:------:|:-------:|:---------------:|:---------------:|:--------------:|
| 68.06 ( 98/144) | 6.206  | 0.175  |  6.380  | 10.42 ( 15/144) | 15.97 ( 23/144) | 5.56 (  8/144) |
2023-04-14 07:09:09: Printing scene ids
2023-04-14 07:09:09: Success (98 scenes):
---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---
  0    1    3    4    5    6    7    8    9   10   12   14   15   16   18   19   20   21   22   23
 26   29   30   31   34   37   38   39   41   43   44   46   47   48   49   51   53   54   56   57
 59   60   62   64   66   67   68   69   70   71   72   73   74   75   76   78   80   81   82   86
 89   90   92   93   96   97  100  101  103  105  108  109  110  111  113  114  116  118  120  121
122  123  125  126  127  128  129  130  131  132  133  134  137  139  140  141  142  143
---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---
2023-04-14 07:09:09: Failure - hand contact (15 scenes):
---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---
  2   11   40   42   58   61   65   77   79   91   94   98  102  112  119
---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---
2023-04-14 07:09:09: Failure - object drop (23 scenes):
---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---
 13   17   25   27   28   35   36   45   52   55   63   83   84   85   88   95  106  107  115  117
135  136  138
---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---
2023-04-14 07:09:09: Failure - timeout (8 scenes):
---  ---  ---  ---  ---  ---  ---  ---
 24   32   33   50   87   99  104  124
---  ---  ---  ---  ---  ---  ---  ---
2023-04-14 07:09:09: Evaluation complete.
```

The same output will also be logged to results/2022-11-09_17-55-43_handover-sim2real-wo-hold_finetune_5_s0_test/evaluate.log.
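As a quick sanity check on the table above, each percentage is simply a count over the 144 test scenes of s0; e.g., the 68.06% success rate:

```shell
# 98 successful scenes out of 144 test scenes -> prints "68.06%".
awk 'BEGIN { printf "%.2f%%\n", 98 * 100 / 144 }'
```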

Reproducing CVPR 2023 Results

We provide the result folders of the benchmarks reported in the CVPR 2023 paper. You can run evaluation on these files and reproduce the exact numbers in the paper.

To run the evaluation, you need to first download the CVPR 2023 results.

# Download CVPR 2023 results.
./results/fetch_cvpr2023_results.sh

This will extract a folder results/cvpr2023_results/ containing the result folders.

You can now run evaluation on these result folders. For example, for Ours + simultaneous (aka without-hold) on s0 (see CVPR 2023 paper Tab. 1 and 2), run:

# Seed 1
python handover-sim/examples/evaluate_benchmark.py \
  --res_dir results/cvpr2023_results/2022-11-09_16-02-29_handover-sim2real-wo-hold_finetune_1_s0_test
# Seed 4
python handover-sim/examples/evaluate_benchmark.py \
  --res_dir results/cvpr2023_results/2022-11-09_17-27-28_handover-sim2real-wo-hold_finetune_4_s0_test
# Seed 5
python handover-sim/examples/evaluate_benchmark.py \
  --res_dir results/cvpr2023_results/2022-11-09_17-55-43_handover-sim2real-wo-hold_finetune_5_s0_test

If you average the numbers over these three evaluation runs, you should be able to reproduce the corresponding numbers in the paper.

Also, for 2022-11-09_17-55-43_handover-sim2real-wo-hold_finetune_5_s0_test, you should see exactly the same result as in the example in the Evaluation section.

The full set of evaluation commands can be found in examples/all_cvpr2023_results_eval.sh.

Rendering from Result and Saving Rendering

We use the same code from handover-sim for rendering from result and saving rendering. Also see their Rendering from Result and Saving Rendering section.

Beyond s0 Setup

The handover-sim benchmark provides four setups (s0, s1, s2, s3), which split the scenes into different train, val, and test splits. The training and testing commands provided above run on the s0 setup.

To run on other setups, you just need to set BENCHMARK.SETUP to the setup name in the training and testing commands (the default is s0).

Below we use s1 as an example. Simply change s1 to s2 or s3 for those setups.

For pretraining with random seed 1, run:

GADDPG_DIR=GA-DDPG OMG_PLANNER_DIR=OMG-Planner CUDA_VISIBLE_DEVICES=0 python examples/train.py \
  --cfg-file examples/pretrain.yaml \
  --seed 1 \
  --use-ray \
  BENCHMARK.SETUP s1

For finetuning with random seed 1, if your pretraining output folder is output/2023-10-18_00-00-00_pretrain_1_s1_train/, run:

GADDPG_DIR=GA-DDPG OMG_PLANNER_DIR=OMG-Planner CUDA_VISIBLE_DEVICES=0 python examples/train.py \
  --cfg-file examples/finetune.yaml \
  --seed 1 \
  --use-ray \
  --use-grasp-predictor \
  --pretrained-dir output/2023-10-18_00-00-00_pretrain_1_s1_train \
  BENCHMARK.SETUP s1

To test the trained model in output/2023-10-19_00-00-00_finetune_1_s1_train/, for hold (aka sequential), run:

GADDPG_DIR=GA-DDPG CUDA_VISIBLE_DEVICES=0 python examples/test.py \
  --model-dir output/2023-10-19_00-00-00_finetune_1_s1_train \
  --name finetune_1 \
  BENCHMARK.SETUP s1 \
  BENCHMARK.SAVE_RESULT True

and for without-hold (aka simultaneous), run:

GADDPG_DIR=GA-DDPG CUDA_VISIBLE_DEVICES=0 python examples/test.py \
  --model-dir output/2023-10-19_00-00-00_finetune_1_s1_train \
  --without-hold \
  --name finetune_1 \
  BENCHMARK.SETUP s1 \
  BENCHMARK.SAVE_RESULT True

For evaluation and rendering, the same instructions from the previous sections apply to all setups (see Evaluation and Rendering from Result and Saving Rendering). All you need to do is set --res_dir to the result folder generated by testing.