tnikolla / robot-grasp-detection

Detecting robot grasping positions with deep neural networks. The model is trained on the Cornell Grasping Dataset. This is an implementation based mainly on the paper 'Real-Time Grasp Detection Using Convolutional Neural Networks' by Redmon and Angelova.
Apache License 2.0

How to convert the coordinates #23

Open VisionaryKai opened 5 years ago

VisionaryKai commented 5 years ago

Hi, I've obtained the grasping rectangle in my Jupyter notebook. But I want to use a real robotic arm to grasp items, and I know that inverse kinematics can convert the arm's coordinates into rotation angles for its joints. However, I realize the coordinate systems are different: in your project the coordinates are based on the image, while in practice they are based on the robotic arm. How can I convert between the two systems to make it work? Thanks a lot.

edwardnguyen1705 commented 4 years ago

@VisionaryKai I assume that your robot arm has a vision system (a camera), either eye-in-hand or eye-to-hand. You should do the calibration first; then you can transform a 2D point in image coordinates into the robot arm's coordinates, and the rectangle's angle gives the gripper orientation.
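
As a rough illustration of that transform, here is a minimal NumPy sketch (not part of this repo). It assumes you already have the camera intrinsics `K` (e.g. from camera calibration) and a 4x4 hand-eye transform `T_base_cam` from the camera frame to the robot base frame, plus a metric depth for the grasp point; all values below are placeholders.

```python
import numpy as np

# Placeholder intrinsics; replace with your calibrated camera matrix.
K = np.array([[615.0,   0.0, 320.0],
              [  0.0, 615.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Placeholder hand-eye result (camera frame -> robot base frame).
# Replace the identity with your calibrated 4x4 homogeneous transform.
T_base_cam = np.eye(4)

def pixel_to_base(u, v, depth, K, T_base_cam):
    """Back-project a pixel (u, v) with metric depth into the robot base frame."""
    # Pixel -> 3D point in the camera frame
    xyz_cam = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Camera frame -> robot base frame (homogeneous coordinates)
    xyz_base = T_base_cam @ np.append(xyz_cam, 1.0)
    return xyz_base[:3]

# Example: center pixel of the predicted grasp rectangle, 0.5 m from the camera
grasp_xyz = pixel_to_base(320, 240, 0.5, K, T_base_cam)
print(grasp_xyz)  # feed this position, plus the grasp angle, to your IK solver
```

The depth can come from a depth camera or from a known table height; the grasp rectangle's angle is then applied as the gripper's rotation about the approach axis.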

1458763783 commented 4 years ago

@VisionaryKai Hi, friend. Have you achieved your goal? I'd like to ask you some questions. Thanks.