minwoo0611 / MA-LIO

Asynchronous Multiple LiDAR-Inertial Odometry using Point-wise Inter-LiDAR Uncertainty Propagation
GNU General Public License v2.0

Reproducing results in UrbanNav dataset #12

Closed jaehyungjung closed 9 months ago

jaehyungjung commented 9 months ago

Hi, thanks for the great work!

I'm trying to reproduce the results in TABLE III of the paper using evo.

I'm using SPAN-CPT as the ground truth, where I converted the ground-truth Lat/Lon/Alt to a local coordinate frame. I'm also using the ground-truth roll/pitch/yaw as the rotation.
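In case it helps, this is roughly what my conversion looks like (a minimal sketch, assuming pymap3d for the geodetic math; the function name is just for illustration):

```python
import numpy as np
import pymap3d  # geodetic <-> local-frame conversions

def geodetic_to_local(lat, lon, alt):
    """Convert Lat/Lon/Alt arrays [deg, deg, m] to local ENU positions [m],
    using the first ground-truth pose as the origin."""
    e, n, u = pymap3d.geodetic2enu(lat, lon, alt, lat[0], lon[0], alt[0])
    return np.stack([e, n, u], axis=1)  # (N, 3) positions
```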

However, the rotation is not well aligned between the ground truth and the estimated poses.

I was wondering how you did the alignment when computing the numbers in TABLE III?

Thanks for any advice!

minwoo0611 commented 9 months ago

Hello @jaehyungjung,

Thank you for reaching out!

The alignment issue you're encountering is a known challenge, especially with tools like evo or rpg in SLAM evaluation. These tools often focus on translational alignment and may not provide optimal rotational alignment.

For our MA-LIO evaluation in TABLE III, we made the following adjustments (a rough code sketch of the whole pipeline follows the list):

  1. Adhering to the Right-Hand Rule for Rotations (Roll/Pitch/Yaw):

    • The output from SPAN-CPT7 uses the left-hand rule (its heading is measured from the ENU z-axis with the left-hand convention). To convert it to the right-hand rule, we invert the yaw (heading) by multiplying it by -1.
  2. Mapping Measurements to the LiDAR Axis:

    • SPAN-CPT7's output is expressed in the ENU coordinate system. Since the ground truth we want is the LiDAR trajectory in that system, we composed the corrected SPAN-CPT7 poses with the extrinsic calibration between the LiDAR (or IMU) and the SPAN-CPT7.
  3. Synchronizing the Initial Orientation:

    • While the corrected ground truth depicts the LiDAR trajectory in the ENU system, algorithm outputs often start with identity transformations, characterized by (roll, pitch, yaw) = (0,0,0). It's essential to synchronize these initial values.
      • If the initial difference between the ground truth and the trajectory output is represented as T_diff (e.g., (roll, pitch, yaw) = (15, 20, 25)), the corrected transformation for the next pose can be defined as:
        T_corrected[i+1] = T_diff.inverse() * (T[i].inverse() * T[i+1]) * T_diff * T_corrected[i]
      • Note that this approach captures the general alignment trend rather than precisely minimizing all rotational errors; it is useful for assessing how the evaluated algorithms perform overall.
    • For a deeper dive into this process, please check this repository.
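To make the three steps concrete, here is a rough Python sketch (not the exact script we used for TABLE III; SciPy handles the Euler conversions, and `T_extrinsic` is a placeholder for your SPAN-CPT7-to-LiDAR calibration):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def make_pose(roll, pitch, yaw, xyz):
    """Step 1: build a 4x4 ground-truth pose, negating the yaw so the
    SPAN-CPT7 left-handed heading follows the right-hand rule."""
    T = np.eye(4)
    T[:3, :3] = R.from_euler("xyz", [roll, pitch, -yaw], degrees=True).as_matrix()
    T[:3, 3] = xyz
    return T

def align(gt_poses, est_poses, T_extrinsic):
    """Steps 2-3: map the ground truth onto the LiDAR axis, then chain the
    estimated relative motions through the initial offset T_diff."""
    # Step 2: compose each ground-truth pose with the SPAN-CPT7 -> LiDAR
    # extrinsic (T_extrinsic is a placeholder for your own calibration).
    gt_lidar = [T @ T_extrinsic for T in gt_poses]

    # Step 3: initial offset between the ground truth and the estimate.
    T_diff = np.linalg.inv(gt_lidar[0]) @ est_poses[0]

    corrected = [gt_lidar[0]]  # start the corrected trajectory at the GT origin
    for i in range(len(est_poses) - 1):
        rel = np.linalg.inv(est_poses[i]) @ est_poses[i + 1]  # T[i]^-1 * T[i+1]
        corrected.append(np.linalg.inv(T_diff) @ rel @ T_diff @ corrected[i])
    return gt_lidar, corrected
```

As noted in the list, this chains the estimated relative motions through the initial offset T_diff, so it aligns the overall trend rather than minimizing every rotational error.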

While these steps might seem detailed, they make the evaluation more consistent and precise.

If there are any additional questions or areas of concern, please do reach out.

Best regards, Minwoo.

jaehyungjung commented 9 months ago

Hi Minwoo!

Thank you very much for your kind answer.

I managed to align the rotation thanks to your advice!

See you!

KkCabin commented 5 months ago

Hello @minwoo0611, thank you for your outstanding work! I have tried your solution in many scenarios and it really works, especially in scenarios with relatively fast motion. Recently I found some small problems while trying the UrbanNav dataset. As shown below, the LIO did not return to the origin (though it is already much more accurate than FAST-LIO2). However, the results in your paper look perfect on this sequence, and I'm using the default parameters you set for this dataset, so what adjustments need to be made?

[attached image: estimated trajectory on UrbanNav]