[Project Page] [Paper] [Video]
This repo contains the code for S4G (CoRL 2019). S4G is a grasp proposal algorithm that regresses SE(3) grasp poses from a single-camera point cloud (depth only, no RGB information). Although S4G is trained only on a synthetic dataset built from YCB objects, it generalizes to real-world grasping of unseen objects that were never used during training.
It contains both the training data generation code and the inference code. We also provide a pretrained model for a quick trial of S4G.
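For reference, an SE(3) grasp pose is simply a rigid transform (rotation plus translation). The short numpy sketch below is purely illustrative and not tied to this repo's data format; it only shows how such a pose can be packed into a 4x4 homogeneous matrix.

import numpy as np

# An SE(3) grasp pose: a 3x3 rotation (gripper orientation) and a 3D translation
# (gripper position), packed into a 4x4 homogeneous transform.
R = np.eye(3)                    # illustrative orientation
t = np.array([0.1, 0.0, 0.2])    # illustrative grasp center in the camera frame
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = t

# Transform a point from the gripper frame into the camera frame.
point = np.array([0.0, 0.0, 0.0, 1.0])
print(T @ point)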
Install MuJoCo from http://www.mujoco.org/ and put your MuJoCo license key into your install directory. If you already have MuJoCo on your computer, please skip this step.
git clone https://github.com/yzqin/s4g-release
cd s4g-release
pip install -r requirements.txt # python >= 3.6
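To verify the environment before moving on, a minimal sanity check such as the one below can help. It assumes the data generation code uses the mujoco-py bindings; adapt it if your setup uses a different MuJoCo wrapper.

# Minimal MuJoCo sanity check (assumes mujoco-py; adjust for a different wrapper).
import mujoco_py

# Build a trivial model and take one simulation step; an import or license problem
# will raise an error before this point.
model = mujoco_py.load_model_from_xml(
    "<mujoco><worldbody><body><geom size='0.1'/></body></worldbody></mujoco>"
)
sim = mujoco_py.MjSim(model)
sim.step()
print("MuJoCo is working:", mujoco_py.__file__)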
data_gen: training data generation
data_gen/mujoco: random scene generation
data_gen/pcd_classes: main classes to generate antipodal grasps and scores (training labels)
data_gen/render: render the viewed point cloud for each generated random scene (training input)
inference: training and inference
inference/grasp_proposal: main entry
cd s4g-release/inference/grasp_proposal/network_models/models/pointnet2_utils
python setup.py build_ext --inplace
cd s4g-release/inference/grasp_proposal
python grasp_proposal_test.py
It will predict grasp poses based on the point cloud from 2638_view_0.p and visualize the scene together with the predicted grasp poses. Note that many tricks, e.g. NMS, are not used in this minimal example.
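If you only want to look at the raw input, a small sketch like the one below loads the viewed point cloud and displays it without any grasp poses. The internal structure of 2638_view_0.p is an assumption here (a plain N x 3 array or a dict containing one), so print the loaded object and adjust the key if needed.

import pickle
import numpy as np
import open3d as o3d

# Load the viewed point cloud used by the demo (structure assumed; check and adapt).
with open("2638_view_0.p", "rb") as f:
    data = pickle.load(f)
points = np.asarray(data if isinstance(data, np.ndarray) else data["point_cloud"])

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points[:, :3])
o3d.visualization.draw_geometries([pcd])  # scene only, no predicted grasps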
You will see something like the following if all the setup works:
cd s4g-release/data_gen
export PYTHONPATH=`pwd`
python data_generator/data_object_contact_point_generator.py # Generate object grasp pose
python3 post_process_single_grasp.py # Post-process grasp poses
The code above will generate grasp poses in s4g-release/objects/processed_single_object_grasp as a pickle file for each object. You can tune the hyper-parameters for grasp proposal searching according to your object mesh model.
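To sanity-check the generated labels, a hedged sketch like this one iterates over the output files; the internal layout of each pickle is not documented here, so inspect it once before relying on specific fields.

import glob
import pickle

# Walk over the per-object grasp files written by the post-processing step.
for path in sorted(glob.glob("s4g-release/objects/processed_single_object_grasp/*")):
    with open(path, "rb") as f:
        grasps = pickle.load(f)
    # Print the type (and length, if any) to see what is stored, e.g. grasp poses
    # and their scores, before using specific keys or array shapes.
    size = len(grasps) if hasattr(grasps, "__len__") else "n/a"
    print(path, type(grasps).__name__, size)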
python3 visualize_single_grasp.py
The Open3D viewer will show the object point cloud (no grasps are shown in this first stage). Then, while holding the shift key, left-click inside the viewer to select the points you are interested in. You will observe something similar to the following:
Then press q on the keyboard to finish the point selection stage. A new viewer window will pop up showing the grasp poses corresponding to the selected points.
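The interaction described above matches Open3D's editing visualizer; a minimal stand-alone sketch (with a hypothetical point cloud file name) looks like this:

import open3d as o3d

# Open an editable viewer: shift + left-click picks points, q closes the window.
pcd = o3d.io.read_point_cloud("object.ply")  # hypothetical file name, for illustration
vis = o3d.visualization.VisualizerWithEditing()
vis.create_window()
vis.add_geometry(pcd)
vis.run()  # blocks until you press q
vis.destroy_window()
print("Picked point indices:", vis.get_picked_points())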
The data generation scripts above are located in data_gen/data_generator. More details on training data and training: data generation consists of several steps, namely Random Scene Generation, Viewed Point Rendering, Scene (Complete) Point Generation, Grasp Pose Searching, and Grasp Pose Post-Processing. For more details, you can refer to the data_gen directory.
@inproceedings{qin2020s4g,
title={S4G: Amodal Single-View Single-Shot SE(3) Grasp Detection in Cluttered Scenes},
author={Qin, Yuzhe and Chen, Rui and Zhu, Hao and Song, Meng and Xu, Jing and Su, Hao},
booktitle={Conference on Robot Learning},
pages={53--65},
year={2020},
organization={PMLR}
}
Some files in this repository are based on the wonderful GPD and PointNet-GPD projects. Thanks to the authors of these projects for open-sourcing their code!