Closed DanielRoeder1 closed 2 years ago
Hm, I would say some small variation in the results is to be expected, but a mean error >0.1 m is definitely out of range. We tested the estimates when moving to EVO before pushing it, and the error was in agreement with the paper. One possible cause is insufficient computational resources, but in that case you would most likely not see the agent-side error aligning with the results of the paper either.
If you observe the logs on the server side while running your experiment, do you see a loop closure happening towards the end, when the agent returns to its starting position? If for any reason this loop is not found on your deployment, that would significantly increase the trajectory error and probably explain the difference.
Also, I can see from the trajectory visualization that you included the initialization pattern, which we usually skip (`rosbag play MH_01_easy.bag --start 45`). Would you mind trying a run without this pattern?
After rebuilding CCMSLAM from scratch I was no longer able to reproduce this issue. I will therefore close it for the time being.
Thanks for the response!
Thanks for the update on this!
I am trying to understand why my trajectory evaluation results differ significantly from the ones reported in the publication.
I am using the evaluation procedure outlined in the README (i.e. running evo_ape with KF_GBA_0.csv and gt.csv).
evo_ape output from EuRoC MH01:

```
max     0.352780
mean    0.154116
median  0.133981
min     0.026507
rmse    0.180358
sse     12.621255
std     0.093687
```
How come these results differ so much from the trajectory error reported in the publication (0.061 m RMSE)?
Edit: When adding code to save the frame trajectory (from the tracking thread), the trajectory error aligns with the results of the publication. This still leaves the question of why the keyframe trajectories result in larger errors.
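For context on where the numbers above come from: when alignment is enabled, evo_ape essentially performs a rigid (Umeyama/Kabsch) alignment of the estimated positions onto the ground truth and then reports statistics of the per-pose translational error. A minimal numpy sketch of that idea, not evo's actual implementation (function names are illustrative):

```python
import numpy as np

def umeyama_align(est, ref):
    """Rigid (rotation + translation) alignment of est onto ref.

    est, ref: (N, 3) arrays of corresponding positions.
    Returns (R, t) such that est @ R.T + t best matches ref.
    """
    mu_e, mu_r = est.mean(axis=0), ref.mean(axis=0)
    # Cross-covariance of the centered point sets (Kabsch/Umeyama).
    cov = (ref - mu_r).T @ (est - mu_e)
    U, _, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0  # correct for a reflection
    R = U @ S @ Vt
    t = mu_r - R @ mu_e
    return R, t

def ape_stats(est, ref):
    """Translational APE statistics after alignment."""
    R, t = umeyama_align(est, ref)
    err = np.linalg.norm(est @ R.T + t - ref, axis=1)
    return {"max": err.max(), "mean": err.mean(),
            "median": np.median(err), "min": err.min(),
            "rmse": np.sqrt(np.mean(err ** 2)),
            "sse": np.sum(err ** 2), "std": err.std()}
```

One implication for the keyframe-vs-frame discrepancy: the keyframe file contains far fewer, unevenly spaced poses than the full frame trajectory, so the same map can yield different APE statistics simply because the error is sampled at different poses.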