atenpas / gpd

Detect 6-DOF grasp poses in point clouds
BSD 2-Clause "Simplified" License

trying to get grasps from .pcd model that fully cover the part #91

Open swhart115 opened 4 years ago

swhart115 commented 4 years ago

Hi, I've been trying to generate grasps from some .pcd models using the detect_grasps node, and I can't figure out how to get full coverage beyond "hemispherical" results such as the one in the attached image (note: I hacked the code to add some coloring :) ).

I assumed this was a camera issue, but no matter how I set camera_position in the cfg file, it doesn't seem to make a difference. Even if I publish the pcd oriented differently, I still get grasps in the same positions relative to the actual part, so I'm no longer convinced it is a camera issue. I also checked the pcd normals, and everything seems to be in order...

I feel like I might be missing something and was wondering if you had any guidance here.
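One way to sanity-check the viewpoint dependence is to re-estimate the normals in PCL with an explicit sensor position and see whether the "hemisphere" follows it. A minimal sketch, assuming a plain XYZ cloud; the filename and radius are placeholders, and setViewPoint() is what orients each normal toward the given camera position:

#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_estimation_omp.h>
#include <pcl/point_types.h>

int main() {
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::io::loadPCDFile("part.pcd", *cloud);  // placeholder filename

  pcl::NormalEstimationOMP<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud(cloud);
  ne.setRadiusSearch(0.02);        // same scale as nn_radius in the config below
  ne.setViewPoint(0.0, 0.0, 1.0);  // matches camera_position in the config below
  pcl::PointCloud<pcl::Normal> normals;
  ne.compute(normals);             // normals now all point toward (0, 0, 1)
  return 0;
}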

For completeness, my config file is:

# Path to config file for robot hand geometry
hand_geometry_filename = <some_path>/config/gpd/hand_geometry.cfg

# Path to config file for volume and image geometry
image_geometry_filename = 0

weights_file = /home/swhart/code/gpd2/models/lenet/3channels/params/

# Path to directory that contains neural network parameters
lenet_params_dir = 0 

# Preprocessing of point cloud
#   voxelize: if the cloud gets voxelized/downsampled
#   remove_outliers: if statistical outliers are removed from the cloud (used to remove noise)
#   workspace: the workspace of the robot (dimensions of a cube centered at origin of point cloud)
#   camera_position: the position of the camera from which the cloud was taken
#   sample_above_plane: only draws samples which do not belong to the table plane
voxelize = 0
voxel_size = 0.00005
remove_outliers = 0
workspace = -0.25 0.25 -0.25 0.25 -0.25 0.25
camera_position = 0 0 1
sample_above_plane = 0

# Grasp candidate generation
#   num_samples: number of samples to be drawn from the point cloud
#   num_threads: number of CPU threads to be used
#   nn_radius: neighborhood search radius for the local reference frame estimation
#   num_orientations: number of robot hand orientations to evaluate
#   num_finger_placements: number of finger placements to evaluate
#   hand_axes: axes about which the point neighborhood gets rotated (0: approach, 1: binormal, 2: axis)
#              (see https://raw.githubusercontent.com/atenpas/gpd/master/readme/hand_frame.png)
#   deepen_hand: if the hand is pushed forward onto the object
#   friction_coeff: angle of friction cone in degrees
#   min_viable: minimum number of points required on each side to be antipodal
num_samples = 500 # 100
num_threads = 4
nn_radius = 0.02
num_orientations = 12 # 20
num_finger_placements = 20
hand_axes = 0 1 2
deepen_hand = 1
friction_coeff = 5
min_viable = 200

# Filtering of candidates
#   min_aperture: the minimum gripper width
#   max_aperture: the maximum gripper width
#   workspace_grasps: dimensions of a cube centered at origin of point cloud; should be smaller than <workspace>
min_aperture = 0.0
max_aperture = 0.2
workspace_grasps = -0.20 0.20 -0.20 0.20 -0.20 0.20

# Filtering of candidates based on their approach direction
#   filter_approach_direction: turn filtering on/off
#   direction: the direction to compare against
#   angle_thresh: angle in radians above which grasps are filtered
filter_approach_direction = 0
direction = 1 0 0
thresh_rad = 2.0

# Clustering of grasps
#   min_inliers: minimum number of inliers per cluster; set to 0 to turn off clustering
min_inliers = 0

# Grasp selection
#   num_selected: number of selected grasps (sorted by score)
num_selected = 100

# Visualization (can lead to crashes when used with the Python interface)
#   plot_normals: plot the surface normals
#   plot_samples: plot the samples
#   plot_candidates: plot the grasp candidates
#   plot_filtered_candidates: plot the grasp candidates which remain after filtering
#   plot_valid_grasps: plot the candidates that are identified as valid grasps
#   plot_clustered_grasps: plot the grasps that remain after clustering
#   plot_selected_grasps: plot the selected grasps (final output)
plot_normals = 0
plot_samples = 0
plot_candidates = 0
plot_filtered_candidates = 0
plot_valid_grasps = 0
plot_clustered_grasps = 0 
plot_selected_grasps = 0
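For context on why camera_position matters here: the standalone detect_grasps tool treats it as the cloud's single viewpoint, and GPD orients every surface normal toward that one position. A rough paraphrase of that step, not the exact source; the util::Cloud constructor signature is taken from gpd/util/cloud.h:

#include <gpd/util/cloud.h>
#include <Eigen/Dense>

int main() {
  // camera_position from the config above becomes a single 3x1 view point.
  Eigen::Matrix3Xd view_points(3, 1);
  view_points << 0.0, 0.0, 1.0;

  // Every normal in this cloud ends up oriented toward (0, 0, 1).
  gpd::util::Cloud cloud("part.pcd", view_points);  // placeholder filename
  return 0;
}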
atenpas commented 4 years ago

What do the surface normals look like for this point cloud?

swhart115 commented 4 years ago

What do the surface normals look like for this point cloud?

Ok, it's been a while since I looked at this, but I was able to dig up some debug images I had. Attached are three images. The first is the original STL in MeshLab with normals displayed. We convert this to the point cloud, and the second image shows the normals from PCL. Finally, the third shows the normals from GPD.

So clearly there is a disconnect here, but at first glance it seems to be between the PCL and GPD steps.

(images: normals_stl_meshlab, normal_pcl, normals_gpd)
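A plausible explanation for that disconnect (an inference from the config, not verified against the GPD source): GPD re-estimates normals internally and enforces viewpoint consistency against the single configured camera_position, which flips roughly half of the mesh-derived normals. The consistency rule itself is the standard one:

#include <Eigen/Dense>
#include <vector>

// Flip any normal that points away from the camera so all normals face it.
// With a single viewpoint this inevitably produces the "hemispherical" pattern.
void orientTowardViewpoint(std::vector<Eigen::Vector3d> &normals,
                           const std::vector<Eigen::Vector3d> &points,
                           const Eigen::Vector3d &view_point) {
  for (std::size_t i = 0; i < normals.size(); ++i) {
    if (normals[i].dot(view_point - points[i]) < 0.0) {
      normals[i] = -normals[i];
    }
  }
}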

atenpas commented 4 years ago

It might not be possible to get "full coverage" with GPD without writing some custom code. While the ROS node supports multiple viewpoints, the config files and executable programs that come with GPD were written for point clouds taken from a single camera viewpoint.

If you use the ROS node, you can send a CloudSamples or CloudIndexed message; both support multiple camera viewpoints. That should give the correct surface normals.
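A rough sketch of a two-viewpoint CloudIndexed message (field names taken from the gpd_ros message definitions as recalled; verify against CloudSources.msg and CloudIndexed.msg; the viewpoint coordinates and per-point camera assignment are placeholders):

#include <gpd_ros/CloudIndexed.h>
#include <geometry_msgs/Point.h>
#include <sensor_msgs/PointCloud2.h>
#include <std_msgs/Int64.h>

gpd_ros::CloudIndexed makeMsg(const sensor_msgs::PointCloud2 &cloud,
                              std::size_t num_points) {
  gpd_ros::CloudIndexed msg;
  msg.cloud_sources.cloud = cloud;

  // Two camera viewpoints on opposite sides of the part (example values).
  geometry_msgs::Point vp0, vp1;
  vp0.z = 1.0;   // camera 0 at (0, 0, 1)
  vp1.z = -1.0;  // camera 1 at (0, 0, -1)
  msg.cloud_sources.view_points = {vp0, vp1};

  // For each point, the index of the camera that saw it (0 or 1).
  msg.cloud_sources.camera_source.resize(num_points);
  for (std::size_t i = 0; i < num_points; ++i) {
    msg.cloud_sources.camera_source[i].data = 0;  // assign per real visibility
  }

  // Indices of points at which to sample grasp candidates (all points here).
  msg.indices.resize(num_points);
  for (std::size_t i = 0; i < num_points; ++i) {
    msg.indices[i].data = static_cast<int64_t>(i);
  }
  return msg;
}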

Without ROS, you could modify the input to have multiple viewpoints (here) and create a util::Cloud that takes viewpoints and camera sources (e.g., see this one). You would also need to provide the camera_source matrix. This basically imitates what happens when a CloudSamples/CloudIndexed message is sent to the gpd_ros node.
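And a sketch of that non-ROS route under the same caveats: the constructor used is the (cloud, camera_source, view_points) overload from gpd/util/cloud.h, and the binary one-row-per-camera layout of camera_source plus the half/half point split are assumptions to replace with real visibility data:

#include <gpd/util/cloud.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <Eigen/Dense>

int main() {
  pcl::PointCloud<pcl::PointXYZRGBA>::Ptr pcl_cloud(
      new pcl::PointCloud<pcl::PointXYZRGBA>);
  pcl::io::loadPCDFile("part.pcd", *pcl_cloud);  // placeholder filename

  // Two viewpoints on opposite sides of the part.
  Eigen::Matrix3Xd view_points(3, 2);
  view_points << 0.0,  0.0,
                 0.0,  0.0,
                 1.0, -1.0;

  // camera_source(i, j) == 1 if camera i saw point j. Here the first half of
  // the points go to camera 0, the rest to camera 1 (replace with real data).
  const int n = static_cast<int>(pcl_cloud->size());
  Eigen::MatrixXi camera_source = Eigen::MatrixXi::Zero(2, n);
  for (int j = 0; j < n; ++j) {
    camera_source(j < n / 2 ? 0 : 1, j) = 1;
  }

  gpd::util::Cloud cloud(pcl_cloud, camera_source, view_points);
  return 0;
}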