ZhenghaoFei / visual_odom

This repository is a C++/OpenCV implementation of stereo visual odometry
MIT License

Why does the trajectory drift a lot in three dimensions? #13

Open mitul93 opened 3 years ago

mitul93 commented 3 years ago

Hi

I apologize for opening an issue here; I have a question rather than a bug report.

If you plot the 3D pose, the trajectory starts to drift in the Z direction very quickly! Could you help me understand why this happens and how to fix it?

I am attaching a file that shows the ground truth vs. the estimated trajectory. The orange line is the ground truth and the blue line is the estimated trajectory.

trajectory

ZhenghaoFei commented 3 years ago

Hi mitul93,

You are welcome to post it here. I also noticed that the z-direction estimate has more drift; it might be because the FOV in the z-direction is narrower. I will look into some other reported results to see whether this is a general problem or specific to my code.

Thanks!

weiningwei commented 3 years ago

Hi, I ran this code on sequence 00 of the KITTI dataset and found that the Y-direction estimate also has a lot of drift. I am attaching a picture that shows the ground truth vs. the estimated trajectory. The green line is the ground truth and the blue line is the estimated trajectory.

image

ZhenghaoFei commented 3 years ago


Hi weiningwei,

The y-direction "error" seems to be mostly an artifact of your 3D view. Try projecting the trajectory onto the x-y plane and plotting it to see if it looks better.

Best, Zhenghao

weiningwei commented 3 years ago


Hi Zhenghao, thank you for the reply. I projected the trajectory onto the x-y plane and plotted it, and it still drifts a lot. To save the trajectory, I added some code to your original code:

```cpp
results << fixed << setprecision(6)
        << frame_pose.at<double>(0,0) << " " << frame_pose.at<double>(0,1) << " "
        << frame_pose.at<double>(0,2) << " " << frame_pose.at<double>(0,3) << " "
        << frame_pose.at<double>(1,0) << " " << frame_pose.at<double>(1,1) << " "
        << frame_pose.at<double>(1,2) << " " << frame_pose.at<double>(1,3) << " "
        << frame_pose.at<double>(2,0) << " " << frame_pose.at<double>(2,1) << " "
        << frame_pose.at<double>(2,2) << " " << frame_pose.at<double>(2,3) << endl;
```

I wonder if the trajectory is being saved incorrectly. image-xy
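For what it's worth, this is the standard KITTI pose format (12 row-major values of the 3x4 [R|t] matrix per line), and the long chained statement can be written as a small loop. A minimal sketch, not from this repository, with a plain array standing in for the `cv::Mat frame_pose`:

```cpp
#include <iomanip>
#include <sstream>
#include <string>

// Write one 3x4 pose as a single KITTI-format line: 12 values, row-major,
// space-separated, six decimal places, no trailing space.
std::string poseToKittiLine(const double pose[3][4]) {
    std::ostringstream line;
    line << std::fixed << std::setprecision(6);
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 4; ++c)
            line << pose[r][c] << (r == 2 && c == 3 ? "" : " ");
    return line.str();
}
```

If each line round-trips through this format, the drift is in the estimate itself rather than in how the trajectory is saved.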

Best, weiningwei

mitul93 commented 3 years ago

Update :

I'm writing a master's thesis related to this topic. Hence, I opened up the issue in the first place. I implemented a VO algorithm using descriptor-based feature tracking instead of optical flow.

I have observed high drift in Z in both cases. I solved the drift problem by using the ORB extractor from the ORB-SLAM2 GitHub repository (https://github.com/raulmur/ORB_SLAM2).

The ORB extractor from ORB-SLAM2 uses a 30x30 grid to extract features evenly, so the tracked features are distributed across the whole image. cv::ORB does not use a grid, and I've observed that its feature points tend to cluster in regions far from the camera. Stereo depth estimation for far-away features has much higher uncertainty. This is the explanation that makes sense to me.

cv::ORB() vs ORB_SLAM2::ORBextractor()

The following image shows feature points from cv::ORB(). 000749Right

The following image shows feature points by ORB_SLAM2::ORBextractor(). 000749Right

The pose estimation result comparison

These results are from my thesis, not from this repository; I'm attaching them just to give you an idea. The blue and green lines use cv::ORB() and the purple and pink lines use ORB_SLAM2::ORBextractor(). The red line shows the ground truth. Note how the latter has a really good estimate in the Z direction. VO2_VO4_ORB_ORB2_3000_XYZ

I think the issue I opened in this repository can be solved by using the ORB_SLAM2::ORBextractor() class. I have not verified it and I don't think I will have time to, but I'm pointing it out as a suggestion.
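The grid idea above can be sketched without pulling in ORB-SLAM2: split the image into cells and keep only the strongest detections per cell, so features can't all pile up in one distant region. This is an illustrative sketch, not ORB-SLAM2's actual implementation; the `Keypoint` struct here is a stand-in for `cv::KeyPoint` with its `response` score:

```cpp
#include <algorithm>
#include <vector>

struct Keypoint { float x, y, response; };

// Bucketing sketch: split the image into cols x rows cells and keep at most
// `perCell` of the strongest keypoints in each cell, so that the surviving
// features are spread across the image instead of clustering in one region.
std::vector<Keypoint> bucketKeypoints(std::vector<Keypoint> kps,
                                      int imgW, int imgH,
                                      int cols, int rows, int perCell) {
    std::vector<std::vector<Keypoint>> cells(cols * rows);
    const float cw = float(imgW) / cols, ch = float(imgH) / rows;
    for (const Keypoint& kp : kps) {
        int cx = std::min(int(kp.x / cw), cols - 1);
        int cy = std::min(int(kp.y / ch), rows - 1);
        cells[cy * cols + cx].push_back(kp);
    }
    std::vector<Keypoint> out;
    for (auto& cell : cells) {
        // Strongest responses first, then truncate to the per-cell budget.
        std::sort(cell.begin(), cell.end(),
                  [](const Keypoint& a, const Keypoint& b) {
                      return a.response > b.response;
                  });
        for (int i = 0; i < perCell && i < (int)cell.size(); ++i)
            out.push_back(cell[i]);
    }
    return out;
}
```

ORB-SLAM2's extractor additionally detects per cell with an adaptive FAST threshold rather than filtering afterwards, but the spatial effect is the same.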

ZhenghaoFei commented 3 years ago


Hi weiningwei,

It looks like you are actually projecting onto the x-z plane (let's say z is the up-down direction in the real world). You can also check mitul93's approach of using more evenly distributed features to get better results.
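For context on the axis confusion: KITTI's ground-truth poses are expressed in the left camera frame, where x points right, y points down, and z points forward, so the top-down (bird's-eye) trajectory is the x-z projection, not x-y. A small sketch of pulling the translation out of one pose line, assuming the usual 12-value row-major format (this helper is illustrative, not from the repository):

```cpp
#include <array>
#include <sstream>
#include <string>

// Parse one KITTI pose line (12 row-major values of a 3x4 [R|t] matrix) and
// return the translation (t_x, t_y, t_z), i.e. elements 3, 7, and 11.
std::array<double, 3> translationFromKittiLine(const std::string& line) {
    std::istringstream in(line);
    std::array<double, 12> m{};
    for (double& v : m) in >> v;
    return {m[3], m[7], m[11]};  // plot (t_x, t_z) for the top-down view
}
```

Plotting (t_x, t_z) against the ground truth removes any ambiguity introduced by the 3D viewer's camera angle.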

Best, Zhenghao


ZhenghaoFei commented 3 years ago


Thanks a lot, Mitul Vekariya! The evenly distributed ORB features do improve the results a lot. The idea is similar to feature bucketing, which enforces a more even spatial feature distribution.