IFL-CAMP / easy_handeye

Automated, hardware-independent Hand-Eye Calibration

Trouble while using the calibration matrices #134

Open Theopetitjean opened 5 months ago

Theopetitjean commented 5 months ago

Hello, thank you for making your code public. I'm having some problems using the calibration. Here are the details, I hope you can help me!

I'm trying to calibrate a Fanuc CR-7iAL robot with a Kinect camera. Here is the launch file used to start easy_handeye:

<?xml version="1.0" ?>
<launch>

    <arg name="namespace_prefix" default="fanuc_kinect_handeyecalibration" />
    <arg name="eye"             default="left"/>

   <!-- Start the Kinect (start the Kinect_v1/Xtion with openNI) -->

    <include file="$(find openni_launch)/launch/openni.launch" >

        <arg name="depth_registration" value="true" />

    </include>

   <!-- start ArUco -->
    <node name="aruco_single" pkg="aruco_ros" type="single">
        <remap from="/camera_info" to="/camera/rgb/camera_info" />
        <remap from="/image" to="/camera/rgb/image_rect_color" />
        <param name="image_is_rectified" value="true"/>
        <param name="marker_size"        value="0.14"/>
        <param name="marker_id"          value="26"/>
        <param name="reference_frame"    value="camera_link"/>
        <param name="camera_frame"       value="camera_rgb_optical_frame"/>
        <param name="marker_frame"       value="camera_marker" />
    </node>

   <!-- Start the CR7IAL FANUC robot -->

    <include file="$(find fanuc_cr7ial_moveit_config)/launch/moveit_planning_execution.launch">

        <arg name="sim" value="false"/>
        <arg name="robot_ip" value="192.168.0.9" doc="Adresse IP du robot Fanuc CR-7iAL" />

    </include>

   <!-- start easy_handeye -->
    <include file="$(find easy_handeye)/launch/calibrate.launch" >
        <arg name="namespace_prefix" value="$(arg namespace_prefix)" />
        <arg name="eye_on_hand" value="false" />

        <arg name="robot_base_frame" value="base_link" />
        <arg name="robot_effector_frame" value="tool0" />
        <arg name="tracking_base_frame" value="camera_link" />
        <arg name="tracking_marker_frame" value="camera_marker" />

        <arg name="freehand_robot_movement" value="false" />
        <arg name="robot_velocity_scaling" value="0.5" />
        <arg name="robot_acceleration_scaling" value="0.2" />

    </include>

</launch>

When launching this file, the robot seems to do the work as expected, and I get results such as:

image

Sometimes the detection of the ArUco marker is lost, but it appears that I should probably stick it to a solid support, because it sometimes gets deformed. We then obtain samples such as the following. (While writing this I am wondering: are you supposed to take a sample between each pose? Should I do this?)

image

It seems like I am doing something the wrong way, but I have not been able to find out what. What should I do to use this code properly? Am I forgetting something while using the package?

My main objective is to find the calibration matrices in order to transform the information detected by the camera into a reachable pose for the robot. To make things short, after using the calibration package I get both the quaternion and the translation, and I apply them directly as a transform to the XYZ point obtained from the camera. So what should be done to get those correct R|t transform matrices? Am I forgetting something in the easy_handeye process?
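To make it concrete, this is roughly the operation I have in mind, as a sketch with made-up numbers (assuming that, with eye_on_hand set to false as in the launch above, the calibration result is the pose of the camera in the robot base frame):

import numpy as np
from scipy.spatial.transform import Rotation as R

# Hypothetical calibration result (made-up numbers, just to illustrate the operation)
t = np.array([0.8, 0.1, 0.5])                # translation x, y, z in meters
rot = R.from_quat([0.0, 0.0, 0.0, 1.0])      # quaternion in (qx, qy, qz, qw) order

p_camera = np.array([0.2, 0.0, 1.1])         # a 3D point detected in the camera frame
p_base = rot.apply(p_camera) + t             # the same point expressed in the robot base frame
print(p_base)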

Theopetitjean commented 5 months ago

Okay, hello again,

Here is an update on my problem. While taking the samples and computing the results, I started wondering about the supposed position of my marker.

image

As we can see in this image, the marker is detected low down in the RViz visualizer. But in fact it should be near tool0 of the robot, since the marker is mounted on the tool of the robot. Is that a problem for the final calibration? Can it be fixed? It appears that the marker position is somehow shifted by a 90 degree angle...

Thanks a lot for your support!

Theopetitjean commented 5 months ago

Hello again. I decided to ignore the previous error and just kept going. I obtained the following results:

  eye_on_hand: true
  freehand_robot_movement: false
  move_group: manipulator
  move_group_namespace: /
  namespace: /fanuc_kinect_handeyecalibration_eye_on_hand/
  robot_base_frame: base
  robot_effector_frame: tool0
  tracking_base_frame: camera_depth_optical_frame
  tracking_marker_frame: camera_marker
transformation:
  qw: 0.6729530917701868
  qx: -0.03934630893158204
  qy: 0.016416276976428446
  qz: 0.7384554895866374
  x: 0.0344229908306081
  y: -0.078848650464033
  z: -0.056035976895112756

I assumed that the rotation result is expressed as a quaternion; is that correct?
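Assuming it is, here is the small sanity check I would run on it (a sketch; the only subtle point seems to be that scipy's Rotation.from_quat expects the scalar-last order (qx, qy, qz, qw), while the values above are printed with qw first):

from scipy.spatial.transform import Rotation as R

# Values copied from the result above, reordered to scipy's (x, y, z, w) convention
q = [-0.03934630893158204, 0.016416276976428446, 0.7384554895866374, 0.6729530917701868]
print(R.from_quat(q).as_matrix())                      # 3x3 rotation matrix
print(R.from_quat(q).as_euler('xyz', degrees=True))    # rough sanity check of the angles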

I am now wondering about the usage of this calibration matrix. I have a Kinect camera and a YOLO object detector that gives me the center of a bounding box in the RGB image, so I get X and Y coordinates from the RGB image. Then, thanks to the Kinect sensor, I get the corresponding Z from the raw data of the depth sensor. I then apply the calibration matrix to the 3D point obtained this way, such as:

import numpy as np
from scipy.spatial.transform import Rotation as Ro

def CoordonneesDeB(Ptx, Pty, Ptz, T_values, R_values):
    # R_values is the calibration quaternion; note that scipy's from_quat expects
    # the order (qx, qy, qz, qw), not (qw, qx, qy, qz) as printed above
    MatRotation = Ro.from_quat(R_values)
    # 3D point detected in the camera frame
    A = [Ptx, Pty, Ptz]
    # Apply the calibration: B = R * A + t
    B = MatRotation.apply(A) + np.asarray(T_values)
    print('The value of B is:')
    print(B)

    return B[0], B[1], B[2]

So here is my question: should I indeed input the Z coordinate obtained from the depth sensor when calculating the new coordinates of my point, or should I do something like [X, Y, 0]? Once the calibration has been obtained, what is the right way to use it? Should I provide only the X and Y from the RGB sensor, or should I give the entire 3D point?
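To make my question more concrete, here is how I would guess the pixel-to-metric conversion should look (just a sketch; I am assuming fx, fy, cx, cy are the intrinsics from /camera/rgb/camera_info and that the depth value is already in meters):

def pixel_to_camera_point(u, v, depth, fx, fy, cx, cy):
    # Pinhole back-projection: a pixel (u, v) plus its metric depth gives a 3D point
    # in the camera optical frame (X right, Y down, Z forward)
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return [X, Y, depth]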

I am having a lot of trouble using these results, so any help you can provide would be much appreciated!

Thanks for the hard work !

marcoesposito1988 commented 5 months ago

Hi @Theopetitjean,

I'm not sure I have the full picture of your current status, but here are a couple of points that caught my attention:

Theopetitjean commented 4 months ago

Hi @marcoesposito1988,

First of all, thank you for your very detailed answer.

In the meantime I've continued to experiment in order to obtain a usable calibration, but unfortunately I'm still stuck. I freely admit that I must be making a lot of mistakes, as I'm not particularly comfortable with ROS. So here's exactly where I'm at:

Here is my launch:

<?xml version="1.0" ?>
<launch>

    <arg name="namespace_prefix" default="fanuc_kinect_handeyecalibration" />
    <arg name="eye"             default="left"/>

   <!-- Start the Kinect (start the Kinect_v1/Xtion with openNI) -->

    <include file="$(find openni_launch)/launch/openni.launch" >

        <arg name="depth_registration" value="true" />

    </include>

   <!-- start ArUco -->
    <node name="aruco_single" pkg="aruco_ros" type="single">
        <remap from="/camera_info" to="/camera/rgb/camera_info" />
        <remap from="/image" to="/camera/rgb/image_rect_color" />
        <param name="image_is_rectified" value="true"/>
        <param name="marker_size"        value="0.192"/>
        <param name="marker_id"          value="26"/>
        <param name="reference_frame"    value="camera_link"/>
        <param name="camera_frame"       value="camera_optical_rgb_frame"/>
        <param name="marker_frame"       value="camera_marker" />
    </node>

   <!-- Start the CR7IAL FANUC robot -->

    <include file="$(find fanuc_cr7ial_moveit_config)/launch/moveit_planning_execution.launch">

        <arg name="sim" value="false"/>
        <arg name="robot_ip" value="192.168.0.9" doc="Adresse IP du robot Fanuc CR-7iAL" />

    </include>

   <!-- start easy_handeye -->
    <include file="$(find easy_handeye)/launch/calibrate.launch" >
        <arg name="namespace_prefix" value="$(arg namespace_prefix)" />
        <arg name="eye_on_hand" value="true" />

        <arg name="robot_base_frame" value="base" />
        <arg name="robot_effector_frame" value="tool0" />
        <arg name="tracking_base_frame" value="camera_link" />
        <arg name="tracking_marker_frame" value="camera_marker" />

        <arg name="freehand_robot_movement" value="false" />
        <arg name="robot_velocity_scaling" value="0.5" />
        <arg name="robot_acceleration_scaling" value="0.2" />

    </include>

</launch>

I have a few questions about this launch because some of the points are a bit unclear to me:

  1. When setting these parameters, as shown as an example in your code, I load:

    <param name="reference_frame" value="camera_link"/>
    
    <param name="camera_frame "value="camera_optical_rgb_frame"/>

Does this give the origin of the ArUco marker points? Should a point detected later by the RGB camera therefore be expressed the same way, using the same origin? Is this correct? From my understanding, the optical RGB frame should be used with a Kinect camera, but why? What is the main difference with the other frame that is also available, camera_rgb_frame?

  2. In a similar way, when loading the robot origin:
    <arg name="robot_base_frame" value="base" />
    <arg name="robot_effector_frame" value="tool0" />

    My question is as follows: the Fanuc CR-7iAL is a collaborative robot, so it has two origins, base_link and base. base_link is located on the force sensor, while base is located at the base of the robot's first axis. It would seem that, most of the time on the Fanuc side, the origin expressed by the robot is base. In this respect, should I use base or base_link? Have you ever had similar cases? (See the small tf check after this list.)
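For reference, here is the small check I would run to inspect these frames (a sketch; I am assuming the frame names below are the ones actually published on tf by openni_launch and by the Fanuc driver):

import rospy
import tf

rospy.init_node('frame_check')
listener = tf.TransformListener()

# How the optical frame is oriented with respect to camera_link (frame names assumed)
listener.waitForTransform('camera_link', 'camera_rgb_optical_frame',
                          rospy.Time(0), rospy.Duration(5.0))
print(listener.lookupTransform('camera_link', 'camera_rgb_optical_frame', rospy.Time(0)))

# Offset between the two robot origins, if both are published on tf (frame names assumed)
listener.waitForTransform('base', 'base_link', rospy.Time(0), rospy.Duration(5.0))
print(listener.lookupTransform('base', 'base_link', rospy.Time(0)))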

Then I do the calibration, taking between 5 and 10 samples depending on their quality. The first position is always right in front of the tag; for the following ones we use the movements proposed in the UI that you provided.

How can I find the orientation of my camera marker? In the RViz feedback it seems to me that the ArUco tag reverses X, although I'm not sure about this point.

Anyway, we compute the following transformation using Tsai-Lenz. Of course, this result remains uncertain in view of the various issues I raised earlier.

image
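In case it matters, this is how I then read that result back in Python (a sketch; I believe easy_handeye saves the calibration under ~/.ros/easy_handeye/, the exact file name below is only a guess, and I am assuming the file contains the same transformation block as printed above):

import os
import yaml
import numpy as np
from scipy.spatial.transform import Rotation as R

# Hypothetical path and file name, based on my namespace_prefix
path = os.path.expanduser('~/.ros/easy_handeye/fanuc_kinect_handeyecalibration_eye_on_hand.yaml')
with open(path) as f:
    calib = yaml.safe_load(f)

tr = calib['transformation']
t = np.array([tr['x'], tr['y'], tr['z']])
rot = R.from_quat([tr['qx'], tr['qy'], tr['qz'], tr['qw']])  # reordered to (x, y, z, w)

T = np.eye(4)                 # 4x4 homogeneous transform built from the result
T[:3, :3] = rot.as_matrix()
T[:3, 3] = t
print(T)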

Here are my questions about the use of such results. First of all, I'm not using tf, for a simple and perhaps slightly silly reason: I have no idea how to apply my vision algorithms properly in ROS. I'm a PhD student in computer vision, and robotics, as I said earlier, really isn't my forte. I know that I could publish the result on tf; I've read the many examples you give and gone through many issues of this repository, but even after doing so I have no idea how to use it with my own code. This is why I am trying to do it directly in Python.

As indicated in my Python code, I detect an object via an AI, then I get the coordinates of the centre of this object (expressed as X [width in pixels], Y [height in pixels], and Z [distance from the 3D sensor]). So I get an X, Y, Z point in the camera frame. I transform the quaternion into a rotation matrix so that I can apply it to my point, then I do an operation such as: B = (R * point) + t

I have some questions about this new point. Is it an offset to be applied to the current coordinates of my end effector, or is it the raw coordinate of my point in the robot frame?
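In case it helps to pin down my confusion, this is the chain I imagine is needed in the eye-in-hand case (a sketch; I am assuming the calibration result is the pose of the camera in the tool0 frame, and that T_base_tool, the current pose of tool0 in the base frame, can be read from tf or MoveIt):

import numpy as np
from scipy.spatial.transform import Rotation as R

def to_homogeneous(translation, quat_xyzw):
    # Build a 4x4 homogeneous transform from a translation and an (x, y, z, w) quaternion
    T = np.eye(4)
    T[:3, :3] = R.from_quat(quat_xyzw).as_matrix()
    T[:3, 3] = translation
    return T

def camera_point_to_base(p_camera, T_base_tool, T_tool_camera):
    # Chain the transforms: point in camera frame -> tool0 frame -> robot base frame
    p = np.append(np.asarray(p_camera, dtype=float), 1.0)  # homogeneous coordinates
    return (T_base_tool @ T_tool_camera @ p)[:3]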

I freely admit that I don't really understand why this doesn't work. I may have made a mistake but I can't find where ...

Sorry to write such a big message again, I'm trying to be as concise as possible :D Thank you again for your reply and your work.