jaymwong closed this issue 6 years ago
Thanks for this! I have a few small questions. What's the difference between workspace and workspace_grasps? Also, how does changing camera_position affect how the workspace is interpreted? As I understand it, the default [0, 0, 0] means the camera's frame is the origin of the coordinates in which workspace is defined. If I change it to the [x, y, z] of the camera frame relative to my robot's base frame, would the workspace then be defined with respect to this base frame?
The algorithm first samples points directly from the point cloud. This is where workspace comes in: before sampling, the point cloud is cropped to the dimensions of the workspace. In contrast, workspace_grasps comes in after the grasp candidates have been generated: it filters out grasps for which the fingers of the robot hand or the approach vector lie outside its dimensions. The position of the fingers is related to the samples, but it's simpler to just check the final configuration of the grasps. The first parameter helps to speed up the algorithm because fewer samples need to be evaluated. The second parameter helps to remove unwanted grasps, e.g., those outside the robot's workspace (this is partially done by the first parameter; the second just filters more). If both parameters are used, the dimensions of workspace_grasps should always be smaller than those of workspace.
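The two-stage filtering described above can be sketched in a few lines of numpy. This is an illustrative stand-in, not GPD's actual C++ code; the in_box helper and the example finger positions are hypothetical, and the [x_min, x_max, y_min, y_max, z_min, z_max] box layout is assumed to match GPD's workspace parameter format.

```python
import numpy as np

def in_box(points, box):
    """Boolean mask of points inside an axis-aligned box.
    box = [x_min, x_max, y_min, y_max, z_min, z_max] (assumed GPD layout)."""
    lo = np.array(box[0::2])   # per-axis minima
    hi = np.array(box[1::2])   # per-axis maxima
    return np.all((points >= lo) & (points <= hi), axis=1)

# workspace_grasps should be contained in workspace.
workspace = [-1.0, 1.0, -1.0, 1.0, -1.0, 1.0]
workspace_grasps = [-0.5, 0.5, -0.5, 0.5, -0.5, 0.5]

# Stage 1: crop the cloud to 'workspace' before sampling.
cloud = np.random.uniform(-2.0, 2.0, size=(1000, 3))
cropped = cloud[in_box(cloud, workspace)]

# Stage 2: after candidate generation, keep only grasps whose
# (hypothetical) finger positions fall inside 'workspace_grasps'.
finger_points = np.array([[0.2, 0.1, 0.0],   # inside -> kept
                          [0.9, 0.0, 0.0]])  # outside -> filtered out
keep = in_box(finger_points, workspace_grasps)
```

The nesting requirement follows directly from this: a grasp whose fingers lie in workspace_grasps is only reachable from samples that survived the workspace crop, so the second box should sit inside the first.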
Changing the camera_position from [0, 0, 0] to [x, y, z] would only work if both frames have the same orientation.
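A quick sketch of why this works only for identical orientations (the translation vector below is a made-up example): if the base and camera frames differ only by a translation t, a point maps between them as p_base = p_cam + t, so setting camera_position to t effectively expresses the workspace box in base coordinates. With a nontrivial rotation, an axis-aligned box in one frame is no longer axis-aligned in the other, so a pure offset cannot represent it.

```python
import numpy as np

# Hypothetical camera position expressed in the robot's base frame.
t = np.array([0.5, 0.0, 1.0])

# With identity rotation, transforming a camera-frame point to the
# base frame is just a translation:
p_cam = np.array([0.1, 0.2, 0.3])
p_base = p_cam + t  # -> [0.6, 0.2, 1.3]
```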
Thanks for your explanation! If I want to detect grasps on objects in a region of interest (say, a 3D bounding box), is there a difference between (a) Using this service to constrain the workspace to the region of interest (while still using the raw PointCloud2 as the input to the GPD node), and (b) Creating a CloudSampled or CloudIndexed or SamplesMsg message from the region of interest, and passing that to GPD (while having a large enough workspace to cover the entire point cloud)?
If option (b) simply crops the input cloud to the samples or indices we provide, I'm assuming it would have the problem of useless grasps being generated at the boundaries of that cropped region?
Using workspace parameters is different from using samples/indices. The complete point cloud should always be passed to GPD so that it can be used to check the grasps against collisions (e.g., collisions with a tabletop).
The workspace parameter cuts the point cloud. So, if you cut out the table plane, you might have grasps that approach an object from below the table.
Using a CloudSamples/CloudIndexed message is the ideal way to go. This way, you can provide the complete point cloud (potentially downsampled/voxelized) for collision checking, and use the samples/indices to focus where to look for grasps and to speed things up.
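Computing the indices for such a message from a 3D bounding box can be sketched as below. The roi_indices helper is hypothetical, not part of GPD; the resulting index list would populate the indices field of a CloudIndexed-style message while the full cloud is still sent for collision checking.

```python
import numpy as np

def roi_indices(points, roi):
    """Indices of cloud points inside an axis-aligned region of interest.
    roi = [x_min, x_max, y_min, y_max, z_min, z_max]."""
    lo, hi = np.array(roi[0::2]), np.array(roi[1::2])
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return np.flatnonzero(mask)

cloud = np.array([[0.1,  0.0, 0.5],   # inside the ROI below
                  [2.0,  0.0, 0.5],   # outside (x too large)
                  [0.2, -0.1, 0.6]])  # inside
roi = [-0.5, 0.5, -0.5, 0.5, 0.0, 1.0]
idx = roi_indices(cloud, roi)  # -> [0, 2]
```

Because the complete cloud is still passed alongside the indices, grasps near the ROI boundary are checked for collisions against geometry outside the box, avoiding the useless-boundary-grasp problem mentioned above.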
Thank you! Using CloudSamples produced much better results for me.
Adding a feature to allow external nodes to dynamically change the workspace, workspace_grasps, and camera_position parameters within the grasp_detection_node.