dougsm / ggcnn

Generative Grasping CNN from "Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach" (RSS 2018)
BSD 3-Clause "New" or "Revised" License

How the network transforms the angle image into a specific angle? #4

Closed lx-onism closed 5 years ago

lx-onism commented 5 years ago

I know the GG-CNN takes a depth image as input and outputs three images. But I'm not sure how to use the three images to guide the robot when it needs to grasp a specific part of a detected object. After all, a single angle is needed when grasping, not an image.

dougsm commented 5 years ago

Hello,
Each pixel in the images represents a pose in 3D space. To choose the grasping pose, we simply select the pixel with the highest quality value, then create a pose from the corresponding depth and angle at that pixel. You can see this implemented for a Kinova robot in https://github.com/dougsm/ggcnn_kinova_grasping/blob/master/ggcnn_kinova_grasping/scripts/run_ggcnn.py#L135
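The pixel-selection step can be sketched as follows. This is a minimal illustration, not the repository's code: the array names (`q_img`, `ang_img`, `depth_img`) and the helper `select_grasp` are hypothetical, and it assumes the network outputs are NumPy arrays of the same shape. The width image, if used, can be indexed at the same pixel in the same way.

```python
import numpy as np

def select_grasp(q_img, ang_img, depth_img):
    """Pick a grasp from per-pixel network outputs.

    q_img:     per-pixel grasp-quality scores
    ang_img:   per-pixel grasp angles (radians)
    depth_img: per-pixel depth values
    """
    # Find the pixel with the highest quality score.
    row, col = np.unravel_index(np.argmax(q_img), q_img.shape)
    # Read the grasp angle and depth at that same pixel to form the pose.
    angle = ang_img[row, col]
    depth = depth_img[row, col]
    return (row, col), angle, depth

# Toy example: a 3x3 output with the best-quality pixel at (1, 2).
q = np.zeros((3, 3))
q[1, 2] = 1.0
ang = np.full((3, 3), 0.5)
depth = np.full((3, 3), 0.4)
(pixel, angle, depth_val) = select_grasp(q, ang, depth)
```

The selected pixel would then be deprojected through the camera intrinsics to get a 3D position, with the angle defining the gripper rotation about the camera's z-axis.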