moveit / moveit_calibration

Hand-eye calibration tools for robot arms.
BSD 3-Clause "New" or "Revised" License

Eye-to-hand: tf frame tree correctness #115

Closed alexswerner closed 2 years ago

alexswerner commented 2 years ago

Hi,

This is definitely a very nice addition to MoveIt. The tight integration within RViz allows even a novice user to quickly learn (at least at a high level) how to perform a calibration. I experienced a couple of minor issues in my setup (Panda + RealSense) on ROS Noetic. This error was found at revision ff6918e.

When running the calibration procedure in eye-to-hand mode, the plugin settings are the following:

Sensor frame: camera_color_optical_frame
Object frame: handeye_target
End-effector frame: ee_link
Robot base frame: base_link

The plugin then publishes the transform, e.g. base_link -> camera_color_optical_frame, as a non-static transform. This, however, contradicts how TF is supposed to work, at least as I understand it: TF only supports tree-like structures, i.e. no frame may have two parents. http://wiki.ros.org/tf2/Design#The_direction_of_the_graph_can_be_confusing explains this to some extent.

Concretely, the issue is that the plugin exports base_link -> camera_color_optical_frame while the camera node (e.g. the RealSense driver) exports camera_link -> camera_color_optical_frame as a static transform. Hence, camera_color_optical_frame has two parents. During execution of the calibration procedure things work out, and e.g. the point cloud can be displayed correctly. However, when exporting the transform into a launch file to use the result after calibration, this fails and TF generates a bunch of warnings about it.
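The conflict above can be sketched without ROS. TF requires every frame to have at most one parent; a minimal check over the (parent, child) edges published in this setup (frame names taken from the thread) shows the violation:

```python
# Minimal sketch, no ROS needed: TF frames must form a tree, so each child
# frame may have at most one parent. Flag any child listed under two parents.

def find_conflicts(edges):
    """Return {child: sorted parents} for every child with more than one parent."""
    parents = {}
    for parent, child in edges:
        parents.setdefault(child, set()).add(parent)
    return {c: sorted(p) for c, p in parents.items() if len(p) > 1}

edges = [
    ("camera_link", "camera_color_optical_frame"),  # static TF from the camera driver
    ("base_link", "camera_color_optical_frame"),    # published by the calibration plugin
    ("base_link", "ee_link"),                       # robot URDF chain (collapsed)
]

print(find_conflicts(edges))
# -> {'camera_color_optical_frame': ['base_link', 'camera_link']}
```

This is exactly the situation TF warns about: camera_color_optical_frame is claimed by both base_link and camera_link.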

Is this expected, considered a bug or is the configuration incorrect?

nickswalker commented 2 years ago

After looking at this tool for a similar application, I think it may already do what you want it to. From this discussion:

Which sensor frame should be selected? I assume FOV will project the camera image into the scene, but we could not get this to work, no matter which sensor_frame was selected. We could see the images in rqt_image_view, however, and /handeye_calibration/image_detection republished the frames in black and white.

The sensor frame should be the root frame of the camera system, such as "camera_link" for most cameras.

This implies that you will get the transform you expect, <robot_base_frame> -> <sensor_frame>, for robot_base_frame = base_link and sensor_frame = camera_link. This is confusing; I think most people would expect that sensor_frame has to be the optical frame of the images.

Edit: I see that the MoveIt tutorial is also misleading:

The "Sensor frame" is the camera optical frame (using the right-down-forward standard, as specified in REP 103).

This should be clarified.

nickswalker commented 2 years ago

See also https://github.com/ros-planning/moveit_calibration/pull/58#issuecomment-762422033:

There is a good use case for selecting something other than the optical frame for the sensor frame, which is when the optical frame is already part of a larger structure--for instance, there's a cam_base that then has the optical frame as a child, and then you want to calibrate the transform from robot_eef to cam_base, to attach the whole model to the robot. Then you can select cam_base as the sensor frame. (This is the case with RealSense cameras, since the realsense2_camera package comes with a URDF with several frames in it, including the optical frames for both IR cameras and the RGB camera, and the bottom screw attachment point, etc.)
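The composition described in that comment can be sketched with homogeneous transforms. This is only an illustration with made-up numbers, not real calibration output; the names T_eef_optical and T_cambase_optical are assumptions following the frames named above:

```python
import numpy as np

def make_tf(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical values for illustration only:
# T_eef_optical: pose of the optical frame in the end-effector frame
# (what a calibration with the optical frame as sensor frame would recover).
T_eef_optical = make_tf(np.eye(3), [0.05, 0.0, 0.10])
# T_cambase_optical: optical frame expressed in cam_base, known from the URDF.
T_cambase_optical = make_tf(np.eye(3), [0.0, 0.015, 0.004])

# Selecting cam_base as the sensor frame amounts to composing:
#   T_eef_cambase = T_eef_optical * inv(T_cambase_optical)
# so the whole camera model can be attached to the robot at cam_base.
T_eef_cambase = T_eef_optical @ np.linalg.inv(T_cambase_optical)
print(np.round(T_eef_cambase[:3, 3], 4))  # translation eef -> cam_base
```

With the identity rotations used here, the resulting translation is [0.05, -0.015, 0.096]: the URDF offset from cam_base to the optical frame is simply subtracted out of the calibrated pose.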

alexswerner commented 2 years ago

Not sure if I am totally following what is going on in the other issue. #58 might only cover the eye-in-hand case, while this one is about eye-to-hand.

JStech commented 2 years ago

The sensor frame selection works the same for both eye-in-hand and eye-to-hand.