raulmur / ORB_SLAM2

EuRoC's ATE is worse than the paper's #270

Open cheaster opened 7 years ago

cheaster commented 7 years ago

Has anyone tested on EuRoC? The ATE I get is above 6 cm, which is worse than the paper's 3 cm.

IQ17 commented 7 years ago

Hi, I have the same experience: the EuRoC errors are larger than in the paper, for all sequences. I don't know why at the moment. But I noticed that the author used a modified Python script to calculate the ATE, given here: https://github.com/raulmur/evaluate_ate_scale. However, even with this modified script, I cannot get errors as low as in the paper. Any suggestions are highly appreciated.

nskyzone commented 7 years ago

Hi @cheaster @IQ17, how do you use the EuRoC ground truth and CameraTrajectory.txt to get the ATE error? The TUM benchmark scripts cannot be used on them directly. I'm new to SLAM and have nobody to ask. Thanks!

IQ17 commented 7 years ago

Hi @pigbreeder

Actually, the author already provides code to output the trajectory in the TUM benchmark format; see the SLAM.SaveTrajectoryTUM("CameraTrajectory.txt"); call in the examples.

The TUM benchmark provides the Python 2 evaluation scripts (associate.py and evaluate_ate.py), or you can use the modified version (evaluate_ate_scale.py) linked above.

Then all you need to do is run the Python script. :)
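
If the raw EuRoC data.csv gives the scripts trouble, one option is to convert the ground truth to the TUM format first. Below is a minimal sketch, assuming the standard EuRoC CSV layout (nanosecond timestamps, quaternion stored w-first); the helper script and its file names are my own illustration, not part of ORB_SLAM2:

```python
# euroc_gt_to_tum.py -- hypothetical helper, not part of ORB_SLAM2.
# Converts EuRoC state_groundtruth_estimate0/data.csv into the TUM
# trajectory format expected by associate.py / evaluate_ate.py.
# EuRoC row: timestamp[ns], p_x, p_y, p_z, q_w, q_x, q_y, q_z, ...
# TUM row:   timestamp[s] tx ty tz qx qy qz qw
import csv
import sys

def convert(euroc_csv, tum_txt):
    with open(euroc_csv) as fin, open(tum_txt, "w") as fout:
        for row in csv.reader(fin):
            if not row or row[0].startswith("#"):
                continue                      # skip the header line
            t = float(row[0]) * 1e-9          # nanoseconds -> seconds
            px, py, pz = row[1:4]             # position
            qw, qx, qy, qz = row[4:8]         # EuRoC stores w first
            fout.write("%.9f %s %s %s %s %s %s %s\n"
                       % (t, px, py, pz, qx, qy, qz, qw))

if __name__ == "__main__":
    convert(sys.argv[1], sys.argv[2])
```

After that, evaluate_ate.py groundtruth_tum.txt CameraTrajectory.txt should work without touching associate.py.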

Toumi0812 commented 7 years ago

Hi everyone. @pigbreeder see #364. @IQ17 @cheaster, how did you compute the ATE for EuRoC? I used:

./evaluate_ate_scale.py MH01/mav0/state_groundtruth_estimate0/data.csv KeyFrameTrajectory.txt

(the first file is the ground truth, the second is the estimated trajectory). Thanks

mattmyne commented 7 years ago

I've been testing the EuRoC V1_02_medium dataset, and am also seeing a worse translation RMSE: I get an error of 0.065 m compared with the ORB_SLAM2 paper's 0.020 m.

I'm using the current, unmodified (except for turning the GUI off) ORB_SLAM2 code. I get similar results using evaluate_ate_scale.py or evaluate_ate.py (as I'd expect for a stereo system!). For me the ORB_SLAM2 output file is CameraTrajectory.txt (not the KeyFrameTrajectory.txt mentioned by @Toumi0812). I modified associate.py slightly to support scaling timestamps, and scale the data.csv timestamps by 1e-9 to match the CameraTrajectory.txt timestamps; a sketch of the idea follows.
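
A sketch of one way to do that timestamp scaling in associate.py's read_file_list (an illustration of the idea, not my exact diff; the scale parameter is an addition, the stock TUM script has no such option):

```python
# Sketch of a timestamp-scaling tweak to the TUM benchmark's
# associate.py (the `scale` parameter is an addition; the stock
# script keeps timestamps as-is). Passing scale=1e-9 for the EuRoC
# data.csv converts its nanosecond timestamps to the seconds used
# in CameraTrajectory.txt.
def read_file_list(filename, scale=1.0):
    """Read a trajectory file into {timestamp: [data, ...]},
    multiplying every timestamp by `scale`."""
    with open(filename) as f:
        lines = f.read().replace(",", " ").replace("\t", " ").split("\n")
    entries = [[v.strip() for v in line.split(" ") if v.strip() != ""]
               for line in lines if len(line) > 0 and line[0] != "#"]
    return dict((float(l[0]) * scale, l[1:]) for l in entries if len(l) > 1)
```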

Results:

evaluate_ate.py --verbose ~/Downloads/mav0/state_groundtruth_estimate0/data.csv CameraTrajectory.txt
compared_pose_pairs 1593 pairs
absolute_translational_error.rmse 0.064770 m
absolute_translational_error.mean 0.062399 m
absolute_translational_error.median 0.062048 m
absolute_translational_error.std 0.017366 m
absolute_translational_error.min 0.014989 m
absolute_translational_error.max 0.112062 m

(With evaluate_ate_scale.py I get a scale of 1.011233 and an absolute_translational_error.rmse of 0.061614 m.)
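
For anyone wondering where that scale factor comes from: evaluate_ate.py aligns the estimate to the ground truth with Horn's method (rotation and translation only), while evaluate_ate_scale.py additionally estimates a scale during the alignment. A rough numpy sketch of the idea (illustration only, not the scripts' exact code):

```python
import numpy as np

def align(model, data, with_scale=True):
    """Align 3xN `model` (estimate) to 3xN `data` (ground truth) with a
    rotation, translation and optional scale; return the aligned model,
    the scale and the translational RMSE."""
    mu_m = model.mean(1, keepdims=True)
    mu_d = data.mean(1, keepdims=True)
    m0, d0 = model - mu_m, data - mu_d

    # Horn's method: rotation from the SVD of the 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(d0 @ m0.T)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                      # avoid a reflection
    R = U @ S @ Vt

    # least-squares scale between the rotated model and the data
    s = (d0 * (R @ m0)).sum() / (m0 ** 2).sum() if with_scale else 1.0

    t = mu_d - s * (R @ mu_m)
    aligned = s * (R @ model) + t
    rmse = np.sqrt(((aligned - data) ** 2).sum(0).mean())
    return aligned, s, rmse
```

For a stereo system the estimated scale should stay close to 1 (as the 1.011233 above does), whereas for monocular runs the scale is unobservable and the scaled alignment is the fair comparison.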

Any suggestions to obtain the paper's results would be welcome - I would like to make sure I'm not missing something!

Toumi0812 commented 7 years ago

@mattmyne, how did you get CameraTrajectory.txt? I only get KeyFrameTrajectory.txt as output (just 100-200 keyframes). How many frame poses are in CameraTrajectory.txt?

Thanks

mattmyne commented 7 years ago

@Toumi0812 stereo_euroc.cc in the Examples/Stereo directory calls SLAM.SaveTrajectoryTUM("CameraTrajectory.txt") as its last function before returning; SLAM is an instance of the System class. For the V1_02_medium dataset, 1612 poses are exported. How are you generating KeyFrameTrajectory.txt, and are the RMSE values closer to the paper's for those?

ghost commented 2 years ago

I have the same problem with the V1_02_medium dataset. Has this been solved? If so, please let me know the solution.

Thanks