
S4G: Amodal Single-view Single-Shot SE(3) Grasp Detection in Cluttered Scenes

[Project Page] [Paper] [Video]

This repo contains the code for S4G (CoRL 2019). S4G is a grasp proposal algorithm that regresses SE(3) grasp poses from a single-view camera point cloud (depth only, no RGB information). Although S4G is trained only on a synthetic dataset built from YCB objects, it generalizes to real-world grasping of unseen objects that never appear during training.

The repo contains both the training data generation code and the inference code. We also provide a pretrained model for a quick trial of S4G.

example result
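For reference, an SE(3) grasp in this setting is just a rigid transform (rotation plus translation) of the gripper frame together with a quality score. The snippet below only illustrates that representation with NumPy; it is not the repository's internal data structure.

import numpy as np
# Illustration only: an SE(3) grasp pose as a 4x4 homogeneous transform plus a score.
grasp_pose = np.eye(4)
grasp_pose[:3, :3] = np.eye(3)                 # gripper orientation (3x3 rotation matrix)
grasp_pose[:3, 3] = np.array([0.1, 0.0, 0.3])  # gripper position in the camera frame (meters)
grasp_score = 0.87                             # predicted grasp quality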

Installation

  1. Install MuJoCo. It is recommended to use conda to manage Python packages:

Install MuJoCo from http://www.mujoco.org/ and put your MuJoCo license in your installation directory. If you already have MuJoCo on your computer, you can skip this step.

  2. Install Python dependencies. It is recommended to create a conda env with all the Python dependencies.
git clone https://github.com/yzqin/s4g-release
cd s4g-release
pip install -r requirements.txt # python >= 3.6
  3. The file structure is listed as follows:

data_gen: training data generation
data_gen/mujoco: random scene generation
data_gen/pcd_classes: main classes to generate antipodal grasps and scores (training labels)
data_gen/render: render viewed point clouds for the generated random scenes (training input)

inference: training and inference
inference/grasp_proposal: main entry

  4. Build the PointNet CUDA utils; this requires nvcc (the CUDA compiler) >= 10.0:
cd s4g-release/inference/grasp_proposal/network_models/models/pointnet2_utils
python setup.py build_ext --inplace
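After the build finishes, you can sanity-check it by importing the compiled extension from Python. The module name pointnet2_cuda below is an assumption; check setup.py in pointnet2_utils for the actual extension name.

# Hypothetical sanity check; "pointnet2_cuda" is a guessed module name, see setup.py for the real one.
import importlib
try:
    importlib.import_module("pointnet2_cuda")
    print("PointNet++ CUDA extension imported successfully.")
except ImportError as err:
    print(f"Extension not importable, try rebuilding with setup.py: {err}")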

Inference

  1. Try the minimal S4G example with the pretrained model:
cd s4g-release/inference/grasp_proposal
python grasp_proposal_test.py

It will predict grasp poses based on the point cloud from 2638_view_0.p and visualize the scene together with the predicted grasp poses. Note that several tricks, e.g. NMS, are not used in this minimal example; a rough NMS sketch is given at the end of this section.

If everything is set up correctly, you will see something like this:

example result

  2. More details on grasp proposal (inference time): refer to grasp_detector for more details on how the data are pre-processed and post-processed during inference.
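Since the minimal demo skips NMS, here is a rough sketch of how duplicate grasps could be suppressed: greedily keep the highest-scoring grasp and drop lower-scoring neighbours whose gripper centers fall within a distance threshold. The function name and threshold are illustrative and not the repository's own post-processing.

import numpy as np

def grasp_nms(poses, scores, dist_thresh=0.02):
    # poses: (N, 4, 4) gripper poses, scores: (N,) quality scores.
    # Greedy NMS sketch: keep the best grasp, drop nearby lower-scoring ones.
    order = np.argsort(-scores)
    centers = poses[:, :3, 3]
    keep = []
    for i in order:
        if all(np.linalg.norm(centers[i] - centers[j]) > dist_thresh for j in keep):
            keep.append(i)
    return keep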

Data Generation

  1. Try the minimal S4G data generation:
cd s4g-release/data_gen
export PYTHONPATH=`pwd`
python data_generator/data_object_contact_point_generator.py # Generate object grasp poses
python3 post_process_single_grasp.py # Post-process grasp poses

The code above will generate grasp poses in s4g-release/objects/processed_single_object_grasp, one pickle file per object. You can tune the hyper-parameters for grasp proposal searching according to your object mesh model.
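If you want to inspect one of the generated files, it can be opened like any other pickle. The exact keys and layout depend on the generator code, so treat the snippet below purely as a starting point.

import pickle
from pathlib import Path
# Point this at the output directory created above; the glob pattern is a placeholder.
for path in Path("objects/processed_single_object_grasp").glob("*"):
    with open(path, "rb") as f:
        data = pickle.load(f)
    print(path.name, type(data))
    if isinstance(data, dict):
        print("keys:", list(data.keys()))
    break  # inspect only the first file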

  2. Visualize the generated grasps:
python3 visualize_single_grasp.py

The Open3D viewer will first show the object point cloud (no grasps are shown at this stage). Hold the shift key and left-click inside the viewer to select the points you are interested in. You will observe something similar to the following:

example result

Then press q on the keyboard to finish the point selection stage. A new viewer window will pop up showing the grasp poses corresponding to the selected points.

example result
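The shift-click selection described above is Open3D's standard editing visualizer. A minimal standalone version of the same interaction looks roughly like this (the point cloud path is a placeholder):

import open3d as o3d
# Minimal sketch of Open3D's shift-click point picking used by the visualization script.
pcd = o3d.io.read_point_cloud("object.ply")  # placeholder path
vis = o3d.visualization.VisualizerWithEditing()
vis.create_window()
vis.add_geometry(pcd)
vis.run()  # shift + left click to pick points, press q to finish
vis.destroy_window()
print("Selected point indices:", vis.get_picked_points())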

  3. You can then combine multiple single-object grasp pose files into scene-level grasps using the classes in data_gen/data_generator.
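The core of that combination step is a change of reference frames: grasps generated in each object's local frame are mapped into the scene using the object's placement pose. The sketch below shows only the frame change; the classes in data_gen/data_generator do considerably more, and the function name here is illustrative.

import numpy as np

def object_grasps_to_scene(object_pose, object_grasps):
    # object_pose: (4, 4) pose of the object in the scene frame.
    # object_grasps: (N, 4, 4) grasp poses expressed in the object frame.
    # Returns (N, 4, 4) grasp poses expressed in the scene frame.
    return np.einsum("ij,njk->nik", object_pose, object_grasps)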

More Details

More details on training data and training:

Data generation consists of several steps: Random Scene Generation, Viewed Point Rendering, Scene (Complete) Point Generation, Grasp Pose Searching, and Grasp Pose Post-Processing. For more details, refer to the data_gen directory.
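For intuition on the Grasp Pose Searching stage: an antipodal grasp candidate is commonly validated by checking that the line between the two finger contacts lies inside the friction cone at both contact points. The check below is a generic textbook criterion for illustration, not the repository's exact scoring rule.

import numpy as np

def is_antipodal(p1, n1, p2, n2, friction_coef=0.4):
    # p1, p2: contact points; n1, n2: outward unit surface normals at those contacts.
    # Generic antipodal check: the grasp axis must lie in the friction cone at both contacts.
    axis = (p2 - p1) / (np.linalg.norm(p2 - p1) + 1e-9)
    cos_limit = np.cos(np.arctan(friction_coef))
    return bool(np.dot(-n1, axis) > cos_limit and np.dot(n2, axis) > cos_limit)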

Bibtex

@inproceedings{qin2020s4g,
  title={S4g: Amodal Single-View Single-Shot SE(3) Grasp Detection in Cluttered Scenes},
  author={Qin, Yuzhe and Chen, Rui and Zhu, Hao and Song, Meng and Xu, Jing and Su, Hao},
  booktitle={Conference on Robot Learning},
  pages={53--65},
  year={2020},
  organization={PMLR}
}

Acknowledgement

Some files in this repository are based on the wonderful GPD and PointNet-GPD projects. Thanks to the authors of these projects for open-sourcing their code!