IFL-CAMP / easy_handeye

Automated, hardware-independent Hand-Eye Calibration

calibrate.launch, tracking_base and tracking_object frame #54

Closed mbajlo closed 4 years ago

mbajlo commented 4 years ago

Hello,

I am working with a variety of robots in simulation.

In the readme.md it is explained that to obtain the hand-eye transform I should:

  1. Frames

create a new handeye_calibrate.launch file, which includes the robot's and tracking system's launch files, as well as easy_handeye's calibrate.launch, as illustrated below in the section "Calibration"; then, in each of your launch files where you need the result of the calibration, include easy_handeye's publish.launch, as illustrated below in the section "Publishing".

  • Calibration: For both use cases, you can either launch the calibrate.launch launch file, or you can include it in another launch file as shown below.
    <!-- fill in the following parameters according to your robot's published tf frames -->
    <arg name="robot_base_frame" value="/base_link"/>
    <arg name="robot_effector_frame" value="/ee_link"/>

In robot_base_frame and robot_effector_frame I should put my robot's base frame and end effector frame; these are published by the robot_state_publisher under /tf. This is clear to me, but not the part with tracking_base_frame and tracking_marker_frame:

    <!-- fill in the following parameters according to your tracking system's published tf frames -->
    <arg name="tracking_base_frame" value="/optical_origin"/>
    <arg name="tracking_marker_frame" value="/optical_target"/>

I do not understand where these are coming from. Should I have some tracking node that is subscribed to the camera topic and publishes these frames under /tf, like the robot frames?

  2. The rqt_easy_handeye package is used to move the robot automatically or by hand, and in each position the calibration will make the calculation/estimation?

  3. Publishing: publish.launch publishes a transformation on /tf, but which transform? The transform of the end effector frame in the camera frame? If it is the end effector in the camera frame, what is this used for? To see the transform in each step? And how does it get more and more accurate with each new pose?

marcoesposito1988 commented 4 years ago

Hello @mbajlo,

  1. Yes, you should have tracking software running that is able to publish on /tf the transformation between the camera optical frame and an object of interest. A typical example is AR markers, which are designed so that it is possible to compute this transformation based on the image (and the intrinsic calibration matrix of the camera). You can use the ArUco library for this, which was integrated into OpenCV. To this end, you can use my package easy_aruco (see the launch sketch at the end of this comment for how the frame names fit together).

You should

  2. rqt_easy_handeye contains a UI for sampling the transformations from tf, and a separate UI to move the robot around the initial position. The latter is optional; you can also move the robot around by hand. You need to acquire multiple samples, rotating the end effector in all directions (x, y, z). After you have moved the robot in one direction, you need to wait until everything in rviz has stopped moving (because of any lag), and then you can acquire a sample; then continue until you have enough samples.

  3. The publisher will publish only your calibration transformation: the transformation between the robot base and the camera frame in the case of an "eye on base" calibration, or between the robot end effector and the camera for an "eye in hand" calibration. This is the transformation that you don't know exactly, and that you want to find out with the calibration procedure (see the publish.launch sketch at the end of this comment).

The more samples you acquire, the better your calibration (unless you introduce outliers into the samples). For more details, please refer to the Tsai-Lenz algorithm.
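
(For context: Tsai-Lenz solves the classic hand-eye equation AX = XB, where A is the relative motion of the end effector between two samples, B is the corresponding relative motion of the marker as seen by the camera, and X is the unknown fixed transform that the calibration estimates; well-distributed rotations between samples constrain X better.)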
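
To make the wiring from point 1 concrete, here is a minimal sketch of what a handeye_calibrate.launch could look like for the eye-on-base case. The my_robot_bringup and my_tracker includes are placeholders for whatever brings up your robot and your tracking software; the frame names must match what those nodes actually publish on /tf:

    <launch>
        <!-- placeholder: your robot's launch file; robot_state_publisher will publish
             base_link -> ... -> ee_link on /tf -->
        <include file="$(find my_robot_bringup)/launch/robot.launch"/>

        <!-- placeholder: your tracking software (e.g. an ArUco/ChArUco tracker such as easy_aruco);
             it must publish the optical_origin -> optical_target transform on /tf -->
        <include file="$(find my_tracker)/launch/tracker.launch"/>

        <include file="$(find easy_handeye)/launch/calibrate.launch">
            <!-- false = eye on base (camera fixed in the workspace) -->
            <arg name="eye_on_hand" value="false"/>
            <!-- robot frames, as published by robot_state_publisher -->
            <arg name="robot_base_frame" value="/base_link"/>
            <arg name="robot_effector_frame" value="/ee_link"/>
            <!-- tracking frames, as published by the tracking software -->
            <arg name="tracking_base_frame" value="/optical_origin"/>
            <arg name="tracking_marker_frame" value="/optical_target"/>
        </include>
    </launch>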
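
And for point 3, a sketch of how the saved result could be consumed later from another launch file. I'm writing the eye_on_hand and namespace_prefix argument names from memory, so double-check them against publish.launch; in any case they must match what you used during the calibration:

    <launch>
        <!-- include this wherever your application needs the calibration result;
             easy_handeye will then publish the calibrated transform on /tf -->
        <include file="$(find easy_handeye)/launch/publish.launch">
            <!-- assumed argument names: must match the values used when calibrating -->
            <arg name="eye_on_hand" value="false"/>
            <arg name="namespace_prefix" value="my_robot"/>
        </include>
    </launch>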

mbajlo commented 4 years ago

Hello @marcoesposito1988,

First of all, thanks for the reply. I will take a look into the easy_aruco package to publish the transform between the camera and the marker. The idea here was to follow the VISP camera calibration (intrinsic and extrinsic) and after that to do PBVS and IBVS. I am trying to understand this because I will try it with a lot of different robots in a simulation, but I need to follow the same procedure as with a real robot, because after the simulation I will do the same thing with the actual robot. Just for my understanding, I have a few questions:

"after saving the intrinsic calibration this will be automatically used by the rest of the ROS packages."

Are the intrinsic parameters saved in some file? How do the rest of the ROS packages know where this file is, and which other ROS packages should use the intrinsic calibration parameters?
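
If I understand it correctly, the calibration GUI stores the intrinsics in a YAML file through the driver's set_camera_info service (by default under ~/.ros/camera_info/), the driver loads it via camera_info_manager and publishes it on the camera_info topic, and downstream packages only subscribe to camera_info rather than reading the file. So something like the sketch below (usb_cam just as an example, path made up) would point a driver at a saved file explicitly; is that the idea?

    <launch>
        <!-- hypothetical example: a camera driver loading a previously saved intrinsic calibration.
             camera_info_manager-based drivers load the YAML and publish its content on the
             camera_info topic alongside the images. -->
        <node pkg="usb_cam" type="usb_cam_node" name="camera" output="screen">
            <param name="camera_info_url" value="file:///home/user/.ros/camera_info/camera.yaml"/>
        </node>
    </launch>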

EDIT 25.3.2020: I tried the charuco tracker. The question is, is the orientation of the XYZ axes of the charuco board relevant? This refers to the question from before: does the Z-axis have to look towards the camera? From what I have seen, when I move the charuco board in the Y direction, in rviz it moves in the X direction; this is not OK, or maybe I am doing something wrong. I also keep getting this warning:

[ WARN] [1585174483.910026617]: The input topic '/fanuc_1_eye_on_hand/camera_object' is not yet advertised
[ WARN] [1585174483.921283663]: The input topic '/fanuc_1_eye_on_hand/world_effector' is not yet advertised

Where should these topics be advertised from?

Thanks

mbajlo commented 4 years ago

Hello @marcoesposito1988,

I have figured it out, thanks ;)