jhu-lcsr / handeye_calib_camodocal

Easy-to-use and accurate hand-eye calibration which has been working reliably for years (2016-present) with Kinect, Kinect v2, RGBD cameras, optical trackers, and several robots including the UR5 and KUKA iiwa.
BSD 2-Clause "Simplified" License

pose estimation between moving machine's end effector and the camera mounted on the end effector #28

Open sanjaysswami opened 4 years ago

sanjaysswami commented 4 years ago

@ahundt Hey, I have one small doubt, and it would be very helpful if you could help me with it. I have a moving machine (think of it as a robot arm, but I have to control it with either MATLAB or a joystick) with a camera attached to its end effector. I need to do hand-eye calibration to estimate the pose between the machine's end effector and the camera mounted on it. I have the following data:

  1. Pose A: translation (1x3) and rotation vector (1x3) between the machine's base and its end effector, for 32 different positions
  2. Pose B: translation (1x3) and rotation vector (1x3) between the camera mounted on the machine's end effector and a diamond ChArUco target (for example https://www.google.com/search?q=diamond+charuco&source=lnms&tbm=isch&sa=X&ved=2ahUKEwjfppa71YzmAhWOaFAKHfrrDK0Q_AUoAXoECAsQAw&biw=1489&bih=787#imgrc=dRBZ_O_LoXFjVM: ), found using OpenCV, for the same 32 positions. The diamond ChArUco is placed at a fixed location.

Note: I do not have these values as ROS messages, nor saved in rosbag format. My question is: can I use this package to find the pose between the end effector and the camera mounted on it with the information I have stored?
ahundt commented 4 years ago

Sorry, what is a "diamond ChArUco"? I think the answer is yes, but you may need to modify the code a bit to input the transforms manually and convert them to the format needed by the algorithm.
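Concretely, "input the transforms manually" could look like the sketch below: build 4x4 homogeneous transforms from each stored translation and rotation vector (treated here as an angle-axis/Rodrigues vector, which is what OpenCV outputs), form relative motions, and hand them to camodocal's solver, which solves AX = XB for the fixed end-effector-to-camera transform X. This is a minimal sketch, not a drop-in tool: the pose-loading step is hypothetical, taking motions relative to the first pose is just one common convention, and the estimateHandEyeScrew() call assumes the signature from camodocal's HandEyeCalibration.h that this repo builds on.

```cpp
// Minimal sketch: feed manually stored (translation, rotation vector)
// pose pairs into camodocal's hand-eye solver.
#include <iostream>
#include <vector>
#include <Eigen/Dense>
#include <Eigen/StdVector>
#include <camodocal/calib/HandEyeCalibration.h>

typedef std::vector<Eigen::Vector3d,
                    Eigen::aligned_allocator<Eigen::Vector3d> > Vec3Seq;

// Build a 4x4 homogeneous transform from a 1x3 translation and a 1x3
// angle-axis (Rodrigues) rotation vector, the format described above.
Eigen::Matrix4d toTransform(const Eigen::Vector3d& t, const Eigen::Vector3d& rvec)
{
    Eigen::Matrix4d T = Eigen::Matrix4d::Identity();
    const double angle = rvec.norm();
    if (angle > 1e-12)
        T.topLeftCorner<3, 3>() =
            Eigen::AngleAxisd(angle, rvec / angle).toRotationMatrix();
    T.topRightCorner<3, 1>() = t;
    return T;
}

int main()
{
    // Hypothetical loading step: fill these from your 32 stored pose pairs,
    // e.g. parsed from a CSV exported by MATLAB, via
    //   eePoses.push_back(toTransform(t_ee, rvec_ee));
    std::vector<Eigen::Matrix4d> eePoses;   // pose A: base -> end effector
    std::vector<Eigen::Matrix4d> camPoses;  // pose B: camera -> charuco target
    if (eePoses.size() < 3 || eePoses.size() != camPoses.size())
    {
        std::cerr << "need >= 3 matching pose pairs" << std::endl;
        return 1;
    }

    // The solver works on relative motions, so express every pose relative
    // to the first one, then split each motion back into a rotation vector
    // plus a translation.
    Vec3Seq rvecsEE, tvecsEE, rvecsCam, tvecsCam;
    const Eigen::Matrix4d eeFirstInv  = eePoses[0].inverse();
    const Eigen::Matrix4d camFirstInv = camPoses[0].inverse();
    for (size_t i = 1; i < eePoses.size(); ++i)
    {
        const Eigen::Matrix4d A = eeFirstInv * eePoses[i];
        const Eigen::Matrix4d B = camFirstInv * camPoses[i];
        const Eigen::AngleAxisd aaA(Eigen::Matrix3d(A.topLeftCorner<3, 3>()));
        const Eigen::AngleAxisd aaB(Eigen::Matrix3d(B.topLeftCorner<3, 3>()));
        rvecsEE.push_back(aaA.angle() * aaA.axis());
        tvecsEE.push_back(A.topRightCorner<3, 1>());
        rvecsCam.push_back(aaB.angle() * aaB.axis());
        tvecsCam.push_back(B.topRightCorner<3, 1>());
    }

    // X: the fixed transform between end effector and camera (AX = XB).
    Eigen::Matrix4d X;
    camodocal::HandEyeCalibration::estimateHandEyeScrew(
        rvecsEE, tvecsEE, rvecsCam, tvecsCam, X);
    std::cout << "end effector <-> camera transform:\n" << X << std::endl;
    return 0;
}
```

Whether X comes out as end-effector-to-camera or its inverse depends on argument order and on your pose conventions, so sanity-check the result by composing it with a measured pose pair before trusting it.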