Closed holst456 closed 3 years ago
Hi, you can refer to this issue for the first problem. For the second problem, the translation is the location of gripper center in camera coordinates, where the "center" is the origin of gripper coordinates (defined here). You can refer to this issue.
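Since the translation is given in camera coordinates, it has to be mapped into the robot's base frame before it can be sent to the arm. A minimal sketch of that mapping, assuming a known 4x4 hand-eye calibration matrix `T_base_cam` (the name and the helper are hypothetical, not part of this repo):

```python
import numpy as np

def grasp_to_base(rotation_cam, translation_cam, T_base_cam):
    """Map a grasp pose from camera coordinates to robot-base coordinates.

    rotation_cam   : (3, 3) gripper orientation in the camera frame
    translation_cam: (3,) gripper origin in the camera frame
    T_base_cam     : (4, 4) hand-eye calibration matrix (assumed known)
    """
    R_bc = T_base_cam[:3, :3]
    t_bc = T_base_cam[:3, 3]
    # Compose the camera-frame pose with the camera-to-base transform.
    rotation_base = R_bc @ rotation_cam
    translation_base = R_bc @ translation_cam + t_bc
    return rotation_base, translation_base
```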
Thank you for your answer. I already have an object mask for each object, so I will try to use that instead. I have translated the grasp point forward in the gripper frame (using the rotation and translation) by the depth parameter, and that seems to work pretty well. I am still unsure how to interpret the height parameter in relation to the gripper frame. What is it the height of?
The "height" is the thickness of a gripper, which is used in collision detection. You can modify height according to your gripper.
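To illustrate how a thickness parameter can enter collision detection, here is a rough sketch of a box test in the gripper frame. This is my own hypothetical helper with an assumed axis convention (approach along x, finger opening along y, finger thickness along z), not the repo's actual collision code, so verify the axes against your setup:

```python
import numpy as np

def points_in_gripper_box(points_cam, rotation, translation,
                          width, height, depth):
    """Mask of scene points that fall inside the gripper's closing volume.

    points_cam : (N, 3) scene points in camera coordinates
    rotation   : (3, 3) gripper orientation in the camera frame
    translation: (3,) gripper origin in the camera frame
    width      : finger opening (assumed gripper y-axis)
    height     : finger thickness (assumed gripper z-axis)
    depth      : finger reach (assumed gripper x-axis)
    """
    # Express the points in the gripper frame: R^T (p - t).
    local = (points_cam - translation) @ rotation
    # Axis-aligned box: x in [0, depth], |y| < width/2, |z| < height/2.
    inside = ((local[:, 0] > 0) & (local[:, 0] < depth) &
              (np.abs(local[:, 1]) < width / 2) &
              (np.abs(local[:, 2]) < height / 2))
    return inside
```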
Okay thanks
@Fang-Haoshu @chenxi-wang Thanks for this publication as well as the open source code.
I have a question regarding the resulting grasps. Using the pretrained weights and the demo code, the network will sometimes not return any grasps on the object. Do you have any idea how to make it work in this scenario?
Additionally, I do not understand how to use the grasp depth and height. I looked at this issue, but still do not get what this information represents.
Using a parallel gripper, shouldn't the translation and rotation be enough to determine where to grasp the object? I have implemented your code on a Universal Robots manipulator fitted with a Realsense D435 and an OnRobot parallel gripper, and when sending the TCP to the grasp translation and rotation, it always seems to be a few cm short of gripping the object correctly. Could this be related to not using the depth and height information?
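For what it's worth, landing a few cm short is consistent with the translation being the gripper origin (the base of the fingers) rather than the fingertip contact point. A minimal sketch of the correction, under the assumption (which you should verify for your convention) that the approach direction is the first column of the predicted rotation matrix:

```python
import numpy as np

def fingertip_target(translation, rotation, depth):
    """Shift the TCP target from the gripper origin toward the fingertips.

    translation: (3,) predicted gripper origin
    rotation   : (3, 3) predicted gripper orientation
    depth      : predicted grasp depth along the approach axis
    """
    approach = rotation[:, 0]  # assumed approach axis; check your convention
    return translation + depth * approach
```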
Best regards, Emil Holst