lianghongzhuo / PointNetGPD

PointNetGPD is an end-to-end grasp evaluation model to address the challenging problem of localizing robot grasp configurations directly from the point cloud.
https://lianghongzhuo.github.io/PointNetGPD/
MIT License

Get GPG grasps but no good grasps #38

Open Twilight89 opened 3 years ago

Twilight89 commented 3 years ago

Hi Liang, I just ran kinect2grasp.py in my simulated robot environment. I got GPG grasps like the ones below (I removed the table points by z-axis distance in the world coordinate, and the object is the cracker box from the YCB dataset), and they look reasonable. [screenshot] But in the terminal, after feeding the GPG grasps into the PointNet model, the output is 'Got 0 good grasps, and 21 bad grasps'. I don't know what caused this. Hope for your help, thanks a lot!
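For reference, a minimal sketch of the z-threshold table filtering described above, assuming the cloud is an N×3 numpy array already expressed in a world frame whose z axis points up; the file name, table height, and 1 cm margin are all illustrative:

```python
import numpy as np

# points: (N, 3) cloud in the world frame, z axis pointing up.
points = np.load("scene.npy")      # hypothetical file name

# Keep only points clearly above the table plane.
table_z = 0.0                      # illustrative table height in the world frame
object_points = points[points[:, 2] > table_z + 0.01]
```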

qiushu-chen commented 3 years ago

@Twilight89 @lianghongzhuo Hi! I have run into almost the same problem. Before sending the point cloud to the neural network, I segmented it so that very little of the table top remained, as the instructions suggest. The final point cloud we sent to the network looked like this: [screenshot: 2021-04-23 11-14-44] To handle single objects, we also segmented the target object out and sampled it separately, like this: [screenshot: 2021-04-23 11-39-04] However, in both of the examples above, there were a few grasps but no good grasps. Is there anything wrong with our target point cloud or objects? Could you show us some of the point clouds you sent to the network? Or should we only work with certain specific datasets? Looking forward to your suggestions.
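One common way to do that table-top removal is RANSAC plane segmentation; below is a minimal Open3D sketch of the idea (not the preprocessing actually used in this thread, and the file name and thresholds are illustrative):

```python
import open3d as o3d

# Fit the dominant plane (the table top) and drop its inlier points.
pcd = o3d.io.read_point_cloud("scene.pcd")   # hypothetical file
plane_model, inliers = pcd.segment_plane(distance_threshold=0.005,
                                         ransac_n=3,
                                         num_iterations=1000)
objects = pcd.select_by_index(inliers, invert=True)  # everything except the table
```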

lianghongzhuo commented 3 years ago

Hi, could you try this point cloud? I saved an npy file from my setup; you can read it with numpy. pc.zip
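Assuming the archive contains a single .npy array (the name pc.npy below is a guess), loading it takes one line:

```python
import numpy as np

pc = np.load("pc.npy")   # expected to be an (N, 3) array of x, y, z points
print(pc.shape)
```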

qiushu-chen commented 3 years ago

@lianghongzhuo Thanks a lot! I will try this point cloud.

qiushu-chen commented 3 years ago

@lianghongzhuo Thanks for your data. Analysing this point cloud, we got the following results: [screenshot: 2021-04-25 15-46-26] [screenshot: 2021-04-25 15-51-30] [screenshot: 2021-04-25 15-46-41] We got 2 good grasps. Is there anything wrong with this result?

lianghongzhuo commented 3 years ago

Looks good. The difference is that our point cloud is transformed to the table_top frame, as shown in this figure. [screenshot: 2021-04-25 10-54-49] Another difference is that we use the camera_pose to estimate the normal directions of the point cloud, so all the normals point outward relative to the object.
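A minimal Open3D sketch of that normal-orientation idea (not the repo's own code; the file name, camera position, and search radius are illustrative):

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scene.pcd")   # hypothetical file
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))

# Flip every normal so it points back toward the camera position,
# i.e. outward from the visible surface of the object.
cam_pos = np.array([0.0, 0.0, 1.0])          # illustrative camera position
pcd.orient_normals_towards_camera_location(cam_pos)
```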

qiushu-chen commented 3 years ago

@lianghongzhuo Thanks for the kind advice! In our point cloud we have already transformed the coordinates to the table_top frame. However, we are not so sure about the camera_pose and the estimation of the normal direction. When you say camera_pose, do you mean campos in the kinect2grasp.py file?

Twilight89 commented 3 years ago

@qiushu-chen Hi, did you solve this problem? Recently I tried applying this algorithm to a cluttered scene, but the result was terrible. [screenshot] I didn't remove the table points in this picture. When I do remove the table points, the grasps collide with the table and the result is also bad. [screenshot]

One difference: I transformed the point cloud to the robot's base_link frame. Do I have to transform it to table_top? Hope for advice.

qiushu-chen commented 3 years ago

@lianghongzhuo @Twilight89 In fact, I am not so sure about this result either. Still, I highly recommend transforming the coordinate system to table_top. In my experience the robot's base_link can sit a little higher than the table top, so parts of the point cloud end up with negative z values, which may cause problems in the network. Also, did you change the gripper parameters in params.json? That can make a difference too. Even in my own results, though, I cannot guarantee good grasps; the problem still exists, so maybe we should investigate it further.
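A minimal numpy sketch of moving a base_link cloud into a table_top frame, assuming the two frames differ only by the table height (in general, take the 4×4 transform from your TF tree or calibration instead; the file name and height are illustrative):

```python
import numpy as np

table_height = 0.40                 # illustrative table height above base_link, metres
T_table_base = np.eye(4)            # homogeneous transform base_link -> table_top
T_table_base[2, 3] = -table_height  # after this, the table top sits at z = 0

points = np.load("scene.npy")       # (N, 3) cloud in base_link, hypothetical file
points_h = np.hstack([points, np.ones((len(points), 1))])
points_table = (T_table_base @ points_h.T).T[:, :3]
```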

Twilight89 commented 3 years ago

@qiushu-chen Thanks for the reply. I'll try transforming to the table_top coordinate frame. Also, the result in your earlier reply was not very good either, right? I can see some grasps off the object. [screenshot] Is your result still like this, or have you fixed the problem? Hope for a reply, thanks ^^

qiushu-chen commented 3 years ago

@Twilight89 In my understanding, you may be able to improve the results by changing the gripper parameters; set them to match the gripper you are using in the real world. However, since the program samples point pairs at random in each attempt, some of the results may not be good. All of the grasps generated during the attempts are shown in the figure, but only the good grasps, as evaluated by the network, are published.
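A small sketch of inspecting and tweaking those parameters; the key names mentioned in the comments are illustrative (GPG-style hand geometry, in metres), so check the params.json in your own checkout for the real names:

```python
import json

# Load and inspect the gripper parameters before tweaking them.
with open("params.json") as f:
    params = json.load(f)
print(params)

# Adjust values to match your real gripper, then write the file back, e.g.
# (illustrative keys): params["hand_outer_diameter"], params["finger_width"].
with open("params.json", "w") as f:
    json.dump(params, f, indent=4)
```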

Twilight89 commented 3 years ago

@qiushu-chen Thanks for your advice. I did use the same gripper as the author, but now that I'm testing on a real robot I still get all bad grasps. I transformed the point cloud to the camera frame (I even tried using a camera on the table as the table_top frame), but all I got were still bad grasps, and the scores of these grasps are very small. [screenshot] Did you run into this kind of problem? Hope for your help.

Sincerely ^^