Toumi0812 opened this issue 7 years ago
Hi @Toumi0812 ,
ORB-SLAM2, like every monocular SLAM, cannot recover the absolute scale, and it suffers from scale drift.
If you want to compare against true positions, you have to obtain them by some other method. For example, in outdoor environments many people use a Google Maps view; in indoor environments many use the building drawings. It is manual work.
Many people are working on a solution to this problem, using landmarks and known reference points.
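For evaluation purposes, a common workaround is to align the estimated trajectory to the ground truth with a similarity transform that also estimates a scale factor, which is essentially what tools like evaluate_ate_scale do. Below is a minimal sketch (not code from ORB_SLAM2 or from that script) of such a scaled alignment via Umeyama's closed-form solution; `align_sim3` is a hypothetical helper name and the inputs are assumed to be already timestamp-matched position pairs.

```python
import numpy as np

def align_sim3(est, gt):
    """est, gt: (N, 3) arrays of matched 3D positions.
    Returns scale s, rotation R, translation t such that
    s * R @ est[i] + t approximates gt[i] (Umeyama alignment)."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    e, g = est - mu_e, gt - mu_g
    cov = g.T @ e / est.shape[0]                  # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:  # avoid a reflection
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_e = (e ** 2).sum() / est.shape[0]         # variance of the estimate
    s = np.trace(np.diag(D) @ S) / var_e          # optimal scale factor
    t = mu_g - s * (R @ mu_e)
    return s, R, t

# ATE RMSE after the scaled alignment:
# aligned = (s * (R @ est.T)).T + t
# rmse = np.sqrt(((aligned - gt) ** 2).sum(axis=1).mean())
```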
Thanks @AlejandroSilvestri, I would like to share what I did (it may be useful for ORB_SLAM users). For the EuRoC dataset I used https://github.com/raulmur/evaluate_ate_scale (written for the TUM RGB-D dataset). For example, for the MH01 sequence I modified MH01/mav0/state_groundtruth_estimate0/data.csv (deleting columns) so that it has just 8 columns like the ORB_SLAM2 results (timestamp, 3D position, quaternion), deleted the commas between columns (making it space-separated), and deleted the dot in the timestamps of the ORB_SLAM2 results (so both files use exactly the same format). Finally I executed `./evaluate_ate_scale.py MH01/mav0/state_groundtruth_estimate0/data.csv KeyFrameTrajectory.txt`. It works and I get results, but I would like to be sure they are correct.
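In case it helps anyone else, here is a small sketch of that conversion step (not an official tool; the script name and output filename are made up, and it assumes the standard EuRoC data.csv layout: timestamp in ns, then position x/y/z and quaternion w/x/y/z in the first 8 columns). It keeps the raw nanosecond timestamps from data.csv, to pair with the dot-stripped ORB_SLAM2 timestamps described above, and leaves the quaternion order untouched, since as far as I can tell the ATE evaluation only uses the position columns.

```python
#!/usr/bin/env python
# convert_euroc_gt.py (hypothetical name): turn EuRoC ground truth
# (comma-separated data.csv) into an 8-column, space-separated file
# for evaluate_ate_scale.py.
import csv
import sys

def convert(euroc_csv, out_txt):
    with open(euroc_csv) as f_in, open(out_txt, "w") as f_out:
        for row in csv.reader(f_in):
            if not row or row[0].startswith("#"):
                continue                      # skip the EuRoC header line
            # keep timestamp + 3D position + quaternion (8 columns)
            f_out.write(" ".join(row[:8]) + "\n")

if __name__ == "__main__":
    convert(sys.argv[1], sys.argv[2])
```

Usage (paths from the example above): `python convert_euroc_gt.py MH01/mav0/state_groundtruth_estimate0/data.csv groundtruth.txt`, then `./evaluate_ate_scale.py groundtruth.txt KeyFrameTrajectory.txt`.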
Thanks.
@AlejandroSilvestri, I think my solution works, I have got reasonable results. You can test it; thank you anyway.
Hi everyone, do you know how to compare the keyframe poses with the ground-truth ones? How can I identify the ground-truth poses corresponding to the keyframes? And how do you handle the scale problem? @AlejandroSilvestri