ethz-asl / ethzasl_msf

MSF - Modular framework for multi sensor fusion based on an Extended Kalman Filter (EKF)

Expected Performance #126

Open MamoonNomaan opened 9 years ago

MamoonNomaan commented 9 years ago

Hi! I am using this algorithm to implement an EKF on an unmanned ground vehicle with a 6-DOF IMU and visual odometry from a front-facing monocular camera. Determining the scale and compensating for drift in the vision data are two important parts of the task. The algorithm is working reasonably well: it gives a correct orientation estimate and the gyro bias converges. The problem I am facing now is determining the correct scale for a power-on-and-go system. If my vehicle drives a circle in a room, starting from point A and returning to point A, I expect the position estimate from the fused IMU and visual odometry to end at point A, but I am getting the same drift as in the visual odometry alone.

Any input on how to increase accuracy and deal with the scale would be highly appreciated.

thanks in advance

omaris commented 9 years ago

It really depends on how you formulate the VO pose measurement. If you handle it as a global pose measurement, the estimated pose will follow it fairly closely. If you introduce it as a relative pose measurement (assuming a keyframe-based VO approach), the IME can help. Please refer to S. Lynen's paper on MSF for details.
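To illustrate the distinction above, here is a minimal sketch (hypothetical function names, positions only, ignoring rotations and covariance bookkeeping, not the actual msf code): with a relative, keyframe-to-keyframe formulation, drift that has already accumulated in the VO trajectory cancels out of the residual, whereas a global formulation pulls the filter toward the drifted absolute VO pose.

```python
import numpy as np

def global_residual(p_est, p_vo):
    """Global pose update: the residual pulls the filter toward the
    (possibly drifted) absolute VO position."""
    return p_vo - p_est

def relative_residual(p_est_k, p_est_km1, p_vo_k, p_vo_km1):
    """Relative pose update between keyframes k-1 and k: only the VO
    *increment* enters the residual, so a constant accumulated VO
    drift cancels out."""
    return (p_vo_k - p_vo_km1) - (p_est_k - p_est_km1)
```

For example, if both VO keyframe positions carry the same accumulated drift offset, the relative residual is zero while the global residual equals the drift.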

Concerning scale estimation: adding additional measurements can help, e.g. using wheel odometry during the init phase, or using a depth sensor to get the scale directly into the VO (see e.g. Scaramuzza's paper on throw-and-go UAV systems).
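A minimal sketch of the init-phase idea (assumed helper name, not part of msf): collect pairs of metric distances (from wheel odometry or a depth sensor) and the corresponding unscaled VO distances over an init window, then fit the scale in least squares.

```python
import numpy as np

def estimate_initial_scale(metric_dists, vo_dists):
    """Least-squares scale: the lambda minimizing
    sum_i (metric_i - lambda * vo_i)^2 over the init window."""
    m = np.asarray(metric_dists, dtype=float)
    v = np.asarray(vo_dists, dtype=float)
    return float(np.dot(m, v) / np.dot(v, v))
```

Averaging over a window rather than a single pair makes the fit less sensitive to noise in any one depth or odometry reading.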


MamoonNomaan commented 9 years ago

@omaris thank you for your valuable input.

  1. Can you please elaborate on the term IME? I am using keyframe-based VO.
  2. On a side note, can you comment on "fuzzy tracking"? @simonlynen stated in another thread: "Given that the vision-measurement might not always be expressed in frame of reference which is gravity aligned, we estimate the rotation between the frame of reference of the vision measurements and the world frame of reference. This rotation estimate might change slowly over time, while large changes in this orientation estimate are usually a sign of a failure in the visual SLAM module. We therefore watch the rate of change on this estimate and trigger a warning message when the rate of change exceeds 0.1 rad/update. We then drop the update of the EKF and do pure IMU dead-reckoning (forward integration)."
  3. I believe this helps with "bad tracking", but is there a way to deal with temporarily losing tracking?
  4. Regarding the quoted statement "We then drop the update of the EKF and do pure IMU dead-reckoning (forward integration)": isn't that going to introduce a large drift? At least it does in my case.
MamoonNomaan commented 8 years ago

Hi! A late follow-up on this topic; I have been busy with a few other aspects of my project. I have managed to get this algorithm working very efficiently on my unmanned ground vehicle in forward-looking mode. Using a depth sensor to calculate the initial visual scale helped a lot, but I believe there is still room for improvement in the scale estimation. @omaris, thank you again for your valued input. Can you point me in a direction for improving the scale estimation? @simonlynen, thanks for sharing this algorithm. Any input from your side will be highly appreciated. Thanks in advance.
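One possible direction beyond a one-shot init (purely a sketch under assumed names, not the msf API): keep refining the scale recursively whenever a metric reference such as a depth reading is available, instead of freezing it after the init phase.

```python
class RecursiveScaleEstimator:
    """Running least-squares estimate of lambda in metric = lambda * vo:
    each new (metric, vo) distance pair tightens the estimate rather
    than leaving it fixed after initialization."""

    def __init__(self):
        self._num = 0.0  # running sum of metric * vo
        self._den = 0.0  # running sum of vo^2

    def update(self, metric_dist, vo_dist):
        self._num += metric_dist * vo_dist
        self._den += vo_dist * vo_dist
        return self._num / self._den
```

A forgetting factor on the running sums would additionally let the estimate track a slowly drifting VO scale, at the cost of more noise sensitivity.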