MichaelGrupp / evo

Python package for the evaluation of odometry and SLAM
https://michaelgrupp.github.io/evo/
GNU General Public License v3.0

APE and RPE metrics #684

Closed. Rudresh172 closed this issue 1 month ago.

Rudresh172 commented 1 month ago

I have multiple trajectories that I would like to compare to show the variability of slam_toolbox iterations. Since I drive along the same waypoints but the numbers of poses are unequal, I have filtered the data in the following way:

  1. I have ~85000-90000 poses at the start. I use a motion filter with 1 mm and 1 degree thresholds. This removes static poses, reducing the count to ~3300-3500 poses
  2. Then I downsample to 3200 poses
  3. Since the Umeyama alignment fails because all trajectories have localisation inaccuracies at the start and end points, I add 3 points with zero rotation and translation
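The motion filter in step 1 can be sketched roughly as follows. This is a simplified illustration, not the actual slam_toolbox/evo implementation: it assumes planar poses given as positions plus yaw angles, and uses the 1 mm / 1 degree thresholds from the description.

```python
import numpy as np

def motion_filter(positions, yaws, trans_thresh=0.001, rot_thresh=np.deg2rad(1.0)):
    """Keep a pose only if it moved more than the translation or rotation
    threshold relative to the last *kept* pose (drops static poses)."""
    kept = [0]
    for i in range(1, len(positions)):
        d_trans = np.linalg.norm(positions[i] - positions[kept[-1]])
        # wrap the yaw difference to [-pi, pi] before comparing
        d_rot = abs((yaws[i] - yaws[kept[-1]] + np.pi) % (2 * np.pi) - np.pi)
        if d_trans > trans_thresh or d_rot > rot_thresh:
            kept.append(i)
    return np.asarray(kept)

# toy example: poses 1 and 2 barely move, so they are filtered out
pos = np.array([[0, 0, 0], [0, 0, 0], [0.0005, 0, 0], [0.01, 0, 0], [0.02, 0, 0]], dtype=float)
yaw = np.zeros(5)
print(motion_filter(pos, yaw))  # -> [0 3 4]
```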

Then I use the ape and rpe commands as

evo_ape kitti ABC4_1mm.kitti ABC2_1mm.kitti -va --plot --plot_mode xy --save_results results/ABC2_1mm_ape.zip --n_to_align 3
evo_rpe kitti ABC4_1mm.kitti ABC2_1mm.kitti -va --plot --plot_mode xy --save_results results/ABC2_1mm_rpe.zip --n_to_align 3
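Step 2's downsampling to a fixed pose count can be done by picking evenly spaced indices. A minimal sketch; the exact index-picking scheme used here is an assumption:

```python
import numpy as np

def downsample_ids(n_poses, target):
    """Pick `target` evenly spaced pose indices, always keeping the first
    and last pose. Assumes target <= n_poses."""
    return np.unique(np.linspace(0, n_poses - 1, target).round().astype(int))

# e.g. reduce a 3350-pose trajectory to 3200 poses
ids = downsample_ids(3350, 3200)
print(len(ids), ids[0], ids[-1])  # -> 3200 0 3349
```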

This gives me the following plots:

[Screenshot from 2024-07-04 22-50-43]

--------------------------------------------------------------------------------
Loaded 3203 poses from: ABC4_1mm.kitti
Loaded 3203 poses from: ABC2_1mm.kitti
--------------------------------------------------------------------------------
Aligning using Umeyama's method...
Rotation of alignment:
[[ 1.00000000e+00 -1.21168839e-16  0.00000000e+00]
 [ 1.79380389e-16  1.00000000e+00  0.00000000e+00]
 [ 0.00000000e+00  0.00000000e+00  1.00000000e+00]]
Translation of alignment:
[ 5.42101086e-20 -5.42101086e-20  0.00000000e+00]
Scale correction: 1.0
--------------------------------------------------------------------------------
Compared 3203 absolute pose pairs.
Calculating APE for translation part pose relation...
--------------------------------------------------------------------------------
APE w.r.t. translation part (m)
(with SE(3) Umeyama alignment) (aligned poses: 3)

       max  1.215802
      mean  0.382501
    median  0.313359
       min  0.000000
      rmse  0.491725
       sse  774.464502
       std  0.309009

[Screenshot from 2024-07-04 22-51-38]

--------------------------------------------------------------------------------
Loaded 3203 poses from: ABC4_1mm.kitti
Loaded 3203 poses from: ABC2_1mm.kitti
--------------------------------------------------------------------------------
Aligning using Umeyama's method...
Rotation of alignment:
[[ 1.00000000e+00 -1.21168839e-16  0.00000000e+00]
 [ 1.79380389e-16  1.00000000e+00  0.00000000e+00]
 [ 0.00000000e+00  0.00000000e+00  1.00000000e+00]]
Translation of alignment:
[ 5.42101086e-20 -5.42101086e-20  0.00000000e+00]
Scale correction: 1.0
--------------------------------------------------------------------------------
Found 3202 pairs with delta 1 (frames) among 3203 poses using consecutive pairs.
Compared 3202 relative pose pairs, delta = 1 (frames) with consecutive pairs.
Calculating RPE for translation part pose relation...
--------------------------------------------------------------------------------
RPE w.r.t. translation part (m)
for delta = 1 (frames) using consecutive pairs
(with SE(3) Umeyama alignment) (aligned poses: 3)

       max  0.079604
      mean  0.002800
    median  0.001950
       min  0.000000
      rmse  0.004308
       sse  0.059435
       std  0.003274

The reference has 16.940m path length and the other trajectory has 18.192m path length.

How exactly are the trajectories compared in this case? Shouldn't the red colour be on the arcs where the poses deviate the most?

Rudresh172 commented 1 month ago

Hey @MichaelGrupp, since I'm on a deadline for my thesis, I would be really grateful if you could answer my question soon. Thanks!

MichaelGrupp commented 1 month ago

This is not really possible to answer without knowing the data. I can have a look if you upload it, but I cannot guarantee that this will help with your deadline; you also need to check your data yourself.

Rudresh172 commented 1 month ago

Hey @MichaelGrupp, thanks for getting back.

These are the original files taken out in the TUM format (85000-90000 poses) - ABC2.txt ABC4.txt

These are the files after motion filter and downsampling followed by adding 3 poses for Umeyama alignment saved in kitti format - ABC2_1mm_kitti.zip ABC4_1mm_kitti.zip

The problem is that after motion filtering, the numbers of poses vary drastically (810 vs. 440 at 0.1 m; 3250 vs. 2909 at 0.005 m; 4056 vs. 3288 at 0.001 m). Hence, when downsampled, very distant poses end up being compared.

Is there an alternative to downsampling that you might have encountered? Would changing the format or the filtering method help in this case?

MichaelGrupp commented 1 month ago

The original files have issues with their timestamps: there are a lot of duplicate timestamps, they are unsorted, and on top of that there is an absolute offset between the two files.
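These problems are easy to detect before running any evaluation. A minimal sanity check for the timestamp column of a TUM-format file (the dictionary shape here is just for illustration):

```python
import numpy as np

def check_timestamps(stamps):
    """Report whether timestamps are sorted and how many duplicates exist."""
    stamps = np.asarray(stamps, dtype=float)
    return {
        "sorted": bool(np.all(np.diff(stamps) >= 0)),
        "duplicates": int(len(stamps) - len(np.unique(stamps))),
    }

# toy example: one duplicate (1.0 twice) and out-of-order 0.5
print(check_timestamps([1.0, 1.0, 0.5, 2.0]))
# -> {'sorted': False, 'duplicates': 1}
```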

Converting to KITTI and downsampling to the same number of poses does not fix this association problem. I assume you wanted to do this as a hack to be able to compare them directly...? The KITTI format without timestamps only makes sense if you can strictly control that the poses are exactly the same (e.g. one pose for each camera image from the KITTI dataset). You can't do this afterwards on arbitrary trajectory data.

The unexpected APE/RPE results described in your initial issue are then not surprising, because the matching of poses is probably completely wrong.

Looks like you need to check how you record your data. The different runs should be using the same clock data (e.g. simulated time in ROS), otherwise you cannot compare them via timestamp matching.
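For reference, timestamp matching of the kind evo's `sync` module performs boils down to pairing each reference stamp with the nearest estimated stamp within a tolerance. A simplified sketch (greedy per-pose nearest match, not evo's actual code):

```python
import numpy as np

def associate(stamps_ref, stamps_est, max_diff=0.01):
    """For each reference timestamp, find the nearest estimated timestamp;
    keep the pair only if the difference is within max_diff seconds."""
    stamps_est = np.asarray(stamps_est, dtype=float)
    matches = []
    for i, t in enumerate(stamps_ref):
        j = int(np.argmin(np.abs(stamps_est - t)))
        if abs(stamps_est[j] - t) <= max_diff:
            matches.append((i, j))
    return matches

# the third reference stamp has no estimate within 10 ms, so it is dropped
print(associate([0.0, 0.1, 0.2], [0.001, 0.099, 0.35]))  # -> [(0, 0), (1, 1)]
```

With an absolute clock offset between the files, as in the uploaded data, almost no stamps fall within the tolerance and the association degenerates, which is why runs need a common clock.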

The problem is that after motion filtering, the number of poses vary drastically

Motion filtering won't produce a consistent number of poses for, well, different motions (different trajectories). This is not unexpected.

Rudresh172 commented 1 month ago

I had used a custom script to extract the transforms and multiply them to get the map->base_footprint. Unfortunately, this has caused the duplicate timestamps.

Since I am comparing trajectories on an actual robot, it uses real (wall-clock) time. Hence the start time and duration differ for each run, and so does the number of poses.

So in order to use the evo package for comparing trajectories, is it a prerequisite to have either equal time duration or an equal number of poses?

My intention is to use the package to compare 25 trajectories to find the standard deviation. ABC2.zip ABC2.z01.zip

ABC4.zip ABC4.z01.zip

MichaelGrupp commented 1 month ago

So in order to use the evo package for comparing trajectories, is it a pre-requisite to either have equal time duration or equal number of poses?

The metrics here compare trajectories pose by pose. In order to do that precisely, it has to be known which poses correspond to each other. This can be achieved either by associating timestamps or by storing exactly the same number of corresponding poses (KITTI). If the input data (and time) used for SLAM is not the same in two trajectories, this doesn't make much sense.
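To make "pose by pose" concrete: once correspondences are established and the trajectories are aligned, the APE translation error is simply the per-pose distance between corresponding positions, with statistics like RMSE computed over all pairs. A minimal numpy illustration, not evo's implementation:

```python
import numpy as np

# corresponding, already-aligned positions (reference vs. estimate)
ref = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0]])
est = np.array([[0.0, 0, 0], [1, 0.3, 0], [2, -0.4, 0]])

# per-pose translation error, then RMSE over all pose pairs
errors = np.linalg.norm(est - ref, axis=1)
rmse = float(np.sqrt(np.mean(errors ** 2)))
print(errors, rmse)  # -> [0.  0.3 0.4] 0.2886751...
```

If the correspondences are wrong (e.g. pose i of one run matched to a pose taken somewhere else entirely), these numbers measure the mismatch, not the SLAM error.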

You could theoretically still evaluate some other metrics based on a spatial matching method (nearest neighbours etc), but this is not implemented in evo.
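Such a spatial matching could look roughly like this (a brute-force nearest-neighbour sketch under the stated caveat that evo does not implement this; the `max_dist` gating is an assumption):

```python
import numpy as np

def spatial_match(ref_xyz, est_xyz, max_dist=0.5):
    """For each estimated position, find the spatially nearest reference
    position. Ignores time entirely, so it breaks down on loops or
    self-intersecting paths where distinct times share a location."""
    d = np.linalg.norm(est_xyz[:, None, :] - ref_xyz[None, :, :], axis=2)
    nn = d.argmin(axis=1)                          # nearest ref index per est pose
    ok = d[np.arange(len(est_xyz)), nn] <= max_dist  # reject far-away matches
    return nn, ok

ref = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype=float)
est = np.array([[0.1, 0, 0], [1.9, 0, 0], [5, 0, 0]], dtype=float)
nn, ok = spatial_match(ref, est)
print(nn, ok)  # -> [0 2 2] [ True  True False]
```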

Rudresh172 commented 1 month ago

Okay, I will figure out some other solution. Thanks a lot for your prompt responses!