Closed: @CarMineo closed this issue 3 years ago
Hi @CarMineo. It sounds as though you already know how to position and rotate the individual clouds to align them using an affine transform. Instead, you want to be able to calculate the transform between the camera device and the flange. Is this correct, please?
If you do need details of how to perform an affine transform, an example of one in librealsense is the function rs2_transform_point_to_point:
https://github.com/IntelRealSense/librealsense/issues/5583
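As a quick illustration, here is a minimal pyrealsense2 sketch (assuming an attached camera; the point coordinates are just placeholders) that maps a 3D point from the depth frame into the colour frame using that function:

```python
import pyrealsense2 as rs

# Start depth + colour streams on an attached camera
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth)
config.enable_stream(rs.stream.color)
profile = pipeline.start(config)
try:
    # Extrinsics describe the rigid (affine) transform between two sensor origins
    depth_to_color = profile.get_stream(rs.stream.depth).get_extrinsics_to(
        profile.get_stream(rs.stream.color))
    # Map a 3D point (metres) from the depth frame into the colour frame
    point_in_depth = [0.1, 0.0, 0.5]
    point_in_color = rs.rs2_transform_point_to_point(depth_to_color, point_in_depth)
    print(point_in_color)
finally:
    pipeline.stop()
```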
In regard to the flange: this reminds me of a tool that a RealSense team member wrote to work out the extrinsic calibration between a 400 Series camera and a T265 Tracking Camera on a custom mount. I wonder if it may be applicable to this situation if the flange represented the position on the mount where the T265 would have been.
https://github.com/IntelRealSense/librealsense/pull/4355
The use of gimballing with RealSense calibration for the 400 Series cameras was also recently discussed.
Hello MartyG,
thanks for your reply. Yes, I would like to use an automatic procedure to calculate the transform between the reference system of the camera, centred at the depth measurement origin, and the reference system of my robot flange. This transform is also known as the "tool parameters" for the robot. I could estimate the tool parameters using the CAD geometry of the D435i and the CAD model of the support I designed to mount the camera on the robot. However, that would not be accurate, because the support and the real dimensions of the camera are affected by manufacturing and assembly tolerances. What I am after is a procedure that enables me to calibrate the position of the camera measurement origin and the orientation of the camera when it is mounted onto the robot flange by means of the support I am using.
I am going to look at the links you shared. Is there anything else that springs to mind?
Many thanks, Carmelo
I carefully considered the information that you kindly provided. This is not one of my specialist areas of knowledge, but I wonder if what you are describing is hand-eye calibration, which can be used to get the transformation between the end-effector of a robot arm and the camera.
If that is the case, the links below have information about hand-eye calibration in regard to RealSense:
https://github.com/IntelRealSense/librealsense/issues/3569#issuecomment-475621533
https://support.intelrealsense.com/hc/en-us/community/posts/360051325334/comments/360013640454
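This is outside my specialist knowledge, but for what it is worth, OpenCV (4.1 and later) includes a hand-eye solver, cv2.calibrateHandEye. A rough sketch of how it might be called is below; the pose lists are assumed to have been collected beforehand (flange poses from the robot controller, target poses from the camera), and this is only an illustration rather than necessarily the method used in the threads above:

```python
import cv2
import numpy as np

def solve_tool_transform(R_gripper2base, t_gripper2base,
                         R_target2cam, t_target2cam):
    """Solve AX = XB for the fixed camera-to-flange ("tool") transform.

    R_gripper2base / t_gripper2base: flange pose in the robot base frame
    at each station, from the controller's positional feedback.
    R_target2cam / t_target2cam: calibration-target pose in the camera
    frame at the same stations, e.g. from cv2.solvePnP on each image.
    """
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI)
    # Pack the result into a 4x4 homogeneous matrix for later use
    T = np.eye(4)
    T[:3, :3] = R_cam2gripper
    T[:3, 3] = t_cam2gripper.ravel()
    return T
```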
Hi @CarMineo Do you require further assistance with this case, please? Thanks!
Hello Marty, thank you for the additional information. I am working through implementing the hand-eye calibration procedure, as you suggested. It should meet my requirements. Best regards, Carmelo
Okay, thanks very much for the update @CarMineo
Hi @CarMineo Do you require further assistance with this case, please? Thanks!
Hello @MartyG-RealSense,
thanks for checking how this is going. I have found a quite straightforward procedure to calibrate the centre of the RGB sensor of the D435i. It is based on the MATLAB Camera Calibrator app (https://it.mathworks.com/help/vision/ref/cameracalibrator-app.html). I manipulate the D435i with a robotic arm to take 10 pictures of a chessboard pattern (https://github.com/opencv/opencv/blob/master/doc/pattern.png) from different positions. I record the colour frame and the robot positional feedback (the position of the robot flange on which the D435i is mounted) at every pose. The MATLAB Camera Calibrator allows me to compute the camera intrinsics, extrinsics, and lens distortion parameters; thus it also provides the coordinates of the location where each colour frame was acquired. Then I can compute the transform between the locations of the RGB sensor and the robot flange, which gives the robot tool parameters I need. The only thing that is still not clear to me is whether the origin of the depth measured by the D435i coincides with the centre of the RGB sensor. If that is not the case, what is the link (the transform or the offset) between the centre of the RGB sensor and the depth origin?
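For anyone without MATLAB, the per-image pose step can be sketched with OpenCV instead. This is a rough sketch, assuming a 9x6 inner-corner chessboard with 25 mm squares (adjust to the printed pattern) and known intrinsics K and distortion coefficients dist; the returned poses can feed a hand-eye solver like the one sketched earlier in the thread:

```python
import cv2
import numpy as np

# Assumed pattern geometry: 9x6 inner corners, 25 mm squares
PATTERN = (9, 6)
SQUARE = 0.025  # metres
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

def board_pose(image_bgr, K, dist):
    """Return (R, t) of the chessboard in the camera frame, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        return None
    # Refine corner locations to sub-pixel accuracy before solving the pose
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
    ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```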
Best regards, Carmelo
Hi @CarMineo Great news that you were successful. Thanks so much for sharing the details of your solution for the benefit of RealSense community members. :)
The origin point of the 400 Series camera is always the center of the left IR sensor. The link below explains the coordinate system relative to this origin point.
https://github.com/IntelRealSense/librealsense/issues/7279#issuecomment-689031488
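For completeness, that fixed link between the depth origin and the RGB sensor can be read from the device's factory calibration; a minimal pyrealsense2 sketch (same stream setup as earlier in the thread):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth)
config.enable_stream(rs.stream.color)
profile = pipeline.start(config)
try:
    # The depth origin coincides with the left IR imager, so these extrinsics
    # are exactly the transform between the depth origin and the RGB sensor
    ext = profile.get_stream(rs.stream.depth).get_extrinsics_to(
        profile.get_stream(rs.stream.color))
    print("rotation (row-major 3x3):", ext.rotation)
    print("translation (metres):", ext.translation)
finally:
    pipeline.stop()
```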
Hi @CarMineo Do you require further assistance with this case, please? Thanks!
Case closed due to no further comments received.
Issue Description
Hello all, I apologise in advance if my problem is already covered in another issue. I have not been able to find a straightforward solution so far.
I need to manipulate a D435i camera through a robot to reconstruct the geometry of an object, by bringing the device to different locations around the object. I have been able to develop the software required to acquire a point cloud from the D435i at every pose. However, I need an accurate measure of the position of the device reference system (origin coordinates and Euler angles) in the robot absolute reference system, in order to translate and rotate each acquired point cloud. In other words, I need to find the fixed offset and rotation between the robot flange and the device reference system. I assume this should be done using a predefined object (e.g. a chessboard pattern in a fixed position) and a calibration routine to find the camera extrinsic parameters (is this the right approach?). Can anyone point me towards a well-documented algorithm applicable to the D435i?
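To make the goal concrete, here is a small numpy sketch of the transform chain I have in mind (the names T_base_flange and T_flange_cam are illustrative: the first is the flange pose reported by the robot at acquisition time, the second is the fixed camera-to-flange transform I am trying to find):

```python
import numpy as np

def cloud_to_robot_base(points_cam, T_base_flange, T_flange_cam):
    """Map an Nx3 point cloud from the camera frame to the robot base frame.

    T_base_flange: 4x4 flange pose reported by the robot at acquisition time.
    T_flange_cam:  fixed 4x4 camera-to-flange transform (the "tool parameters"
                   sought in this issue), found once by calibration.
    """
    T_base_cam = T_base_flange @ T_flange_cam  # compose the two rigid transforms
    homogeneous = np.c_[points_cam, np.ones(len(points_cam))]  # Nx4
    return (T_base_cam @ homogeneous.T).T[:, :3]
```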
Thanks, Carmelo