ADVRHumanoids / concert_resources

Module description files and meshes of the CONCERT robot platform. Used by https://github.com/ADVRHumanoids/modular_hhcm

drill camera yaw rotation calibration #7

Open · lucarossini-iit opened this issue 6 months ago

lucarossini-iit commented 6 months ago

With the help of @MarcoRuzzon, we found out that the drill camera publishes a point cloud that is completely misaligned w.r.t. the front lidar one. The current transform is <origin rpy="0.05 0.22 0.05" xyz="0.12 0.0 0.26"/>

[image: point clouds with the current transform (misaligned)]

Removing 0.15 rad from the yaw rotation, we obtain a point cloud that is much better aligned with the front lidar one. The new transform is <origin rpy="0.05 0.22 -0.10" xyz="0.12 0.0 0.26"/>

[image: point clouds with the corrected yaw (better aligned)]

Of course, this calibration assumes that the front lidar pose is more accurate than the drill camera one. We should pay attention to this every time we mount and dismount the drill camera.
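For reference, a minimal sketch of how the corrected camera mount could look in the URDF. The joint and link names below are placeholders (the real element names are not shown in this issue); only the origin values come from the measurement above.

  <!-- placeholder names; only the origin values come from this issue -->
  <joint name="drill_camera_mount_joint" type="fixed">
    <!-- yaw reduced from 0.05 to -0.10 rad, i.e. a -0.15 rad correction -->
    <origin rpy="0.05 0.22 -0.10" xyz="0.12 0.0 0.26"/>
    <parent link="drill_link"/>
    <child link="drill_camera_link"/>
  </joint>
  <link name="drill_camera_link"/>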

lucarossini-iit commented 6 months ago

Update: the yaw rotation error on the drill camera was injecting error into the blob detection. We should investigate which of the two point clouds is the correct one and adjust the other. From a first investigation, it seems that the correct position is given by the drill camera, while the correct orientation comes from the lidar.

lucarossini-iit commented 6 months ago

UPDATE:

It seems to be the lidars that need calibration, not the drill camera. Each lidar is attached to the CONCERT frame by a single screw, with no pins at the moment; this may be the cause of some misalignment.

WHAT WE DID:

  1. Mount only a pitch module and the drill module.
  2. Manually rotate the front lidar to match its point cloud with the drill camera one as closely as possible. The frame of the drill camera was not modified.
  3. Modify the yaw of the front lidar frame to further improve the match between its point cloud and the drill camera one.
  4. Modify the yaw of the back lidar frame to match its point cloud with the front lidar one.

UPDATED LIDAR FRAMES:

  <link name="VLP16_lidar_front_base_link"/>
  <joint name="VLP16_lidar_back_base_mount_joint" type="fixed">
    <origin rpy="0.0 0.0 3.161593" xyz="-0.5305 -0.315 -0.1"/>
    <parent link="mobile_base"/>
    <child link="VLP16_lidar_back_base_link"/>
  </joint>
  <link name="VLP16_lidar_back_base_link"/>
  <joint name="ultrasound_fl_sag_base_mount_joint" type="fixed">
    <origin rpy="0.0 1.963496 0.0" xyz="0.5105 0.315 -0.129"/>
    <parent link="mobile_base"/>
    <child link="ultrasound_fl_sag_base_link"/>
  </joint>
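For readability, and assuming the nominal back lidar mounting yaw is pi, the yaw above can be read as the 180 deg mounting plus a small calibration term. The following is just an annotated copy of the joint above, with the values unchanged.

  <!-- yaw 3.161593 rad = pi (3.141593) + 0.02 rad calibration offset
       on top of the nominal 180 deg mounting of the back lidar -->
  <joint name="VLP16_lidar_back_base_mount_joint" type="fixed">
    <origin rpy="0.0 0.0 3.161593" xyz="-0.5305 -0.315 -0.1"/>
    <parent link="mobile_base"/>
    <child link="VLP16_lidar_back_base_link"/>
  </joint>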

POINT CLOUD COLORS:

Orange: camera
Green: front lidar
Red: back lidar

ALIGNMENT DRILL CAMERA - FRONT LIDAR

[image: drill camera and front lidar point clouds overlaid]

ALIGNMENT FRONT LIDAR - BACK LIDAR

[image: front lidar and back lidar point clouds overlaid]

SETUP PICTURE FROM DRILL CAMERA:

[image: setup picture from the drill camera]

FYI: @lucarossini-iit @alaurenzi @EdoardoRomiti @MarcoRuzzon

lucarossini-iit commented 6 months ago

With @EdoardoRomiti, we mounted the 6-DOF robot with two long passive links.

[image: 6-DOF robot mounted with two long passive links]

We confirmed that the drill camera point cloud is the most accurate of the three by comparing the real-world alignment between the drill bit and the wooden structure with the corresponding RViz representation.

We rotated the front lidar frame in the URDF even further to improve the alignment of its point cloud with the drill camera one. We also rotated the back lidar frame to align its point cloud with the front lidar one. The orientation is now correct, but there is still a significant translation offset.

 <joint name="VLP16_lidar_front_base_mount_joint" type="fixed">
    <origin rpy="0.0 0.0 0.05" xyz="0.5305 0.315 -0.1"/>
    <parent link="mobile_base"/>
    <child link="VLP16_lidar_front_base_link"/>
  </joint>
  <link name="VLP16_lidar_front_base_link"/>
  <joint name="VLP16_lidar_back_base_mount_joint" type="fixed">
    <origin rpy="0.0 0.0 3.201593" xyz="-0.5305 -0.315 -0.1"/>
    <parent link="mobile_base"/>
    <child link="VLP16_lidar_back_base_link"/>
  </joint>
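To make the size of the corrections explicit, here is an annotated copy of the two lidar origins above (values unchanged, assuming nominal yaws of 0 rad for the front lidar and pi for the back lidar). Note that the xyz translations are the same as before, so the translation offset mentioned above is not compensated yet.

  <!-- front lidar: yaw 0.05 rad, i.e. a 0.05 rad correction w.r.t. an assumed zero nominal yaw -->
  <origin rpy="0.0 0.0 0.05" xyz="0.5305 0.315 -0.1"/>
  <!-- back lidar: yaw 3.201593 rad = pi (3.141593) + 0.06 rad, i.e. 0.04 rad more than the
       previous 3.161593; xyz unchanged, so the translation offset is not compensated here -->
  <origin rpy="0.0 0.0 3.201593" xyz="-0.5305 -0.315 -0.1"/>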

[image]

[image]