Closed Rudresh172 closed 3 months ago
Hey @MichaelGrupp, since I'm on a deadline for my thesis, I would be really grateful if you could answer my question soon. Thanks!
This is not really possible to answer without knowing the data. I can have a look if you upload it, but I cannot guarantee that it will help you with your deadline; you also need to check your data yourself.
Hey @MichaelGrupp, thanks for getting back.
These are the original files exported in TUM format (85000-90000 poses) - ABC2.txt ABC4.txt
These are the files after motion filtering and downsampling, followed by adding 3 poses for Umeyama alignment, saved in KITTI format - ABC2_1mm_kitti.zip ABC4_1mm_kitti.zip
The problem is that after motion filtering, the number of poses varies drastically (810 vs. 440 at 0.1 m; 3250 vs. 2909 at 0.005 m; 4056 vs. 3288 at 0.001 m). Hence, when downsampled, very distant poses are compared.
Is there any alternative to downsampling that you might have encountered? Will changing the format or filtering method help in this case?
The original files have issues with their timestamps: there are a lot of duplicate timestamps, they are unsorted, and on top of that they have an absolute offset between each other.
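As a quick sanity check, a short standalone script like the following (not part of evo; `ABC2.txt` is the attachment name from above) can count duplicate and out-of-order timestamps in a TUM-format file:

```python
# Sanity check for a TUM-format trajectory file: duplicate or unsorted
# timestamps will break timestamp-based pose association.

def check_timestamps(path):
    stamps = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # first column of a TUM line is the timestamp
            stamps.append(float(line.split()[0]))
    duplicates = len(stamps) - len(set(stamps))
    out_of_order = sum(1 for a, b in zip(stamps, stamps[1:]) if b < a)
    print(f"{path}: {len(stamps)} poses, "
          f"{duplicates} duplicate stamps, {out_of_order} out-of-order steps")
    return duplicates, out_of_order

# e.g. check_timestamps("ABC2.txt")
```

Both counts should be zero for data that evo can associate by timestamp.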
Converting to KITTI and downsampling to the same number of poses does not fix this association problem. I assume you wanted to do this as a hack to be able to compare them directly...? The KITTI format without timestamps only makes sense if you can strictly control that the poses are exactly the same (e.g. one pose for each camera image from the KITTI dataset). You can't do this afterwards on arbitrary trajectory data.
Your initial issue describing unexpected APE/RPE results is not surprising then, because the matching of poses is probably completely wrong.
Looks like you need to check how you record your data. The different runs should use the same clock (e.g. simulated time in ROS), otherwise you cannot compare them via timestamp matching.
> The problem is that after motion filtering, the number of poses vary drastically
Motion filtering won't produce a consistent number of poses for, well, different motions (different trajectories). This is not unexpected.
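To illustrate why (a minimal sketch of the idea, not evo's actual implementation): a distance-based motion filter keeps a pose only after the robot has moved at least some minimum distance since the last kept pose, so the number of surviving poses depends entirely on the motion profile of that particular run:

```python
import numpy as np

def motion_filter(positions, min_distance):
    """Keep a pose only if it moved at least min_distance (meters)
    since the last kept pose. Returns the indices of kept poses."""
    kept = [0]
    last = np.asarray(positions[0], dtype=float)
    for i, p in enumerate(positions[1:], start=1):
        p = np.asarray(p, dtype=float)
        if np.linalg.norm(p - last) >= min_distance:
            kept.append(i)
            last = p
    return kept
```

Two runs along the same waypoints but with different speeds, pauses, or jitter will therefore keep different numbers of poses at the same threshold.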
I had used a custom script to extract the transforms and multiply them to get the map->base_footprint. Unfortunately, this has caused the duplicate timestamps.
Since I am comparing the trajectory on an actual robot, it uses real time. Hence the time and duration of the runs are different for each run. Also, the number of poses is different for each run.
So in order to use the evo package for comparing trajectories, is it a prerequisite to either have equal time duration or an equal number of poses?
My intention is to use the package to compare 25 trajectories to find the standard deviation ABC2.zip ABC2.z01.zip
> So in order to use the evo package for comparing trajectories, is it a pre-requisite to either have equal time duration or equal number of poses?
The metrics here compare trajectories pose by pose. In order to do that precisely, it has to be known which poses correspond to each other. This can be achieved either by associating timestamps or by storing exactly the same number of corresponding poses (KITTI). If the input data (& time) used for SLAM is not the same in two trajectories, this doesn't make much sense.
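The timestamp-association idea can be sketched roughly like this (a simplified greedy version, similar in spirit to what `evo.core.sync.associate_trajectories` does; it assumes both stamp lists are sorted and does not prevent reusing a match):

```python
def associate(stamps_a, stamps_b, max_diff=0.01, offset=0.0):
    """Match each timestamp in stamps_a to the nearest one in stamps_b,
    accepting the pair only if they differ by at most max_diff seconds.
    Returns a list of (index_a, index_b) pairs."""
    matches = []
    j = 0
    for i, t in enumerate(stamps_a):
        t = t + offset  # optional constant time offset between the files
        # advance j while the next stamp in b is at least as close to t
        while j + 1 < len(stamps_b) and \
                abs(stamps_b[j + 1] - t) <= abs(stamps_b[j] - t):
            j += 1
        if abs(stamps_b[j] - t) <= max_diff:
            matches.append((i, j))
    return matches
```

With duplicate or unsorted timestamps, or a clock offset larger than `max_diff`, this kind of matching silently pairs up the wrong poses, which is exactly the failure mode described above.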
You could theoretically still evaluate some other metrics based on a spatial matching method (nearest neighbours etc), but this is not implemented in evo.
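If you do roll your own spatial matching outside of evo, a brute-force nearest-neighbour error could look like this (an illustrative sketch only; note that nearest-neighbour matching is ambiguous wherever a trajectory crosses itself or revisits a waypoint):

```python
import numpy as np

def spatial_errors(ref_xyz, est_xyz):
    """For each estimated position, the distance to its nearest
    reference position. Brute force O(n*m); a KD-tree
    (e.g. scipy.spatial.cKDTree) would scale better."""
    ref = np.asarray(ref_xyz, dtype=float)
    est = np.asarray(est_xyz, dtype=float)
    # pairwise distance matrix of shape (len(est), len(ref))
    d = np.linalg.norm(est[:, None, :] - ref[None, :, :], axis=2)
    return d.min(axis=1)
```

The trajectories would still need to be aligned (e.g. Umeyama) beforehand for these distances to be meaningful.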
Okay, I will figure out some other solution. Thanks a lot for your prompt responses!
I have multiple trajectories that I would like to compare to show the variability of slam_toolbox iterations. Since I am driving along the same waypoints, but the number of poses is unequal, I have filtered the data in the following way -
Then I use the ape and rpe commands as
This gives me the following plots -
The reference has 16.940m path length and the other trajectory has 18.192m path length.
How exactly are the trajectories compared in this case? Shouldn't the red colour appear on the arcs where the poses deviate the most?