This demo performs object modeling and grasping with superquadrics [1] on the iCub robot.
This module implements a wrapper for the superquadric modeling and grasping pipeline described in [1]. The wrapper communicates with existing modules developed within robotology, and its structure can be summarized as follows:
1. The demo checks whether an object classified as a box, sphere, or cylinder is in the field of view by querying the objectPropertiesCollector, and acquires the 2D bounding box of the object (see the query sketch below). If multiple objects are in front of the robot, one of them is randomly chosen for the demo. Note: the objects are classified as box, sphere, or cylinder since these primary shapes are used to improve the object modeling.
2. The lbpExtract module provides multiple 2D blobs of the selected object. The demo code sends the 2D blobs to the Structure From Motion module to obtain the corresponding 3D point clouds.
3. The point clouds are sent to the superquadric-model module, which computes several superquadrics modeling the object.
4. The best superquadric is sent to the superquadric-grasp module, which computes suitable grasping poses for the right and the left hand.
5. The best pose is selected and superquadric-grasp is asked to execute the grasp.

Here is a video of the running pipeline:
More information is available in these slides.
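For illustration, the first step of the pipeline can be reproduced by hand with a YARP RPC query. The port name (/memory/rpc) and the property names below follow the usual IOL conventions and are assumptions, not something this repository guarantees:

```sh
# ask the objectPropertiesCollector for the ids of the stored objects
# (/memory/rpc and the property names are assumed from the standard IOL setup)
echo 'ask (("entity" "==" "object"))' | yarp rpc /memory/rpc
# read the 2D bounding box of one of the returned ids (here, hypothetically, 3)
echo 'get ((id 3) (propSet ("position_2d_left")))' | yarp rpc /memory/rpc
```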
In Linux systems, the code can be compiled as follows:

```sh
git clone https://github.com/robotology/superquadric-grasp-demo.git
cd superquadric-grasp-demo
mkdir build; cd build
ccmake ..
make install
```
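After installation, and assuming the usual ICUBcontrib layout adopted by robotology modules, make sure the contrib prefix is on your path so that the installed binaries and application XMLs can be found:

```sh
# assumes the standard ICUBcontrib install layout; adjust to your prefix
export PATH=$PATH:$ICUBcontrib_DIR/bin
```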
The superquadric-grasp-demo is a wrapper which communicates with several modules in order to coordinate them for the execution of the demo (a quick way to check that they are running is sketched below):

- superquadric-model, for reconstructing the superquadric modeling the object;
- superquadric-grasp, for computing the grasping pose;
- SFM, for acquiring the object point cloud;
- lbpExtract and blobExtractor, for object segmentation;
- caffeCoder, linearClassifier, himRep, iolStateMachineHandler and objectPropertiesCollector, for object classification;
- rFSMtool, for the demo state machine.

This demo has been designed to run automatically on the iCub robot. If you are interested in an interactive mode for launching the grasping algorithm, please visit the superquadric-grasp-example repository.
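Before starting the demo, it can be useful to verify that these companion modules are reachable on the YARP network. A minimal sketch, assuming the default port names (adapt them to your configuration):

```sh
# check that the main companion modules have registered their ports
# (port names are assumptions based on the default module names)
for port in /SFM/rpc /lbpExtract/rpc /superquadric-model/rpc \
            /superquadric-grasp/rpc /memory/rpc; do
    yarp exists $port || echo "missing: $port"
done
```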
In order to automatically run the superquadric-grasp-demo, please:

1. Launch the yarprobotinterface.
2. Launch the cameras.
3. Launch the basic modules: iKinGazeCtrl and iKinCartesianSolver, the latter for both the right and the left arm (see the sketch after this list). For a safe approach during grasping, we recommend launching also wholeBodyDynamics and gravityCompensator: the estimate of the forces measured at the end-effector is used to stop the movement in case of collisions.
4. Launch skinManager and skinManagerGui and connect them. Set the binarization filter off, and set the compensation gain and the contact compensation gain to their minimum values. If you do not want to use the skin, we also provide a FingersPositionControl.
5. Launch and connect all the modules required by the demo via the iCub_Grasp_Demo xml.
6. Launch the demo: an rfsmGui will open. Press run on the GUI to start the state machine executing the demo. More information about the Superquadric_Grasp_Demo state machine is provided here.
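As a reference for steps 1-3, the basic modules can be started from a shell as sketched below. This assumes a configured iCub environment; in practice these modules are usually started through yarpmanager, and the exact arguments depend on your robot setup:

```sh
# illustrative launch of the basic modules (arguments depend on the robot)
yarprobotinterface &
iKinGazeCtrl &
iKinCartesianSolver --part right_arm &
iKinCartesianSolver --part left_arm &
# recommended for a safe approach: force-based contact detection
wholeBodyDynamics &
gravityCompensator &
```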
Before running the demo, it is recommended to correctly set up the modules. In particular:
- for the superquadric-model, we suggest calibrating the stereo vision;
- for the superquadric-grasp, we recommend calibrating the robot following these instructions.

The superquadric-grasp-demo can be customized by changing the configuration parameters of the superquadric-model and superquadric-grasp modules in their configuration files (respectively, config-classes.ini and config.ini).
Some useful options for the superquadric-grasp module are the following:
- lift_object: available values: on (default) / off. If on, the robot tests whether the computed pose is good enough for lifting the object.
- grasp: available values: on (default) / off. If on, the robot performs the grasp using tactile feedback; if off, the robot just reaches the desired pose, without closing the hand.
- visual_servoing: available values: on / off (default). If on, the robot reaches the pose using a markerless visual-servoing algorithm and an accurate hand pose estimation (more information is available here). Currently, visual servoing is available only for the right hand. The superquadric-grasp repository provides more information on how the visual-servoing approach is used for a fine reaching of the final pose.
- which_hand: available values: right, left, or both (default). This variable selects the hand for which the grasping pose is computed. If both is selected, a pose is computed for each hand and the best hand for grasping the object is automatically chosen. If only one hand is chosen for the pose computation, it is consequently also used for grasping the object.

Note: when visual_servoing is on, the pipeline is slightly different. Once the grasping pose is computed, the robot reaches an intermediate pose in open loop (S4). Then, a visual particle filter estimates the current robot hand pose (S5), and this information is used by a visual-servoing controller to reach the desired pose and grasp the object (S6).
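For reference, the corresponding section of the superquadric-grasp config.ini might look like the sketch below; the values are the ones described above, but the exact layout of the installed file may differ:

```ini
// illustrative snippet of config.ini (layout assumed, values as documented)
lift_object     on
grasp           on
visual_servoing off
which_hand      both
```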
The online documentation of this module is available here.
[1] G. Vezzani, U. Pattacini and L. Natale, "A grasping approach based on superquadric models", IEEE-RAS International Conference on Robotics and Automation, 2017, pp. 1579-1586.