MamoonNomaan opened this issue 9 years ago
It really depends on how you formulate the VO pose measurement. If you handle it as a global pose measurement, the estimated pose will follow it fairly closely. If you introduce it as a relative pose measurement (assuming a keyframe-based VO approach), the IMU can help. Please refer to S. Lynen's paper on MSF for details.
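To illustrate the difference omaris describes: a relative (keyframe-based) pose measurement feeds the filter the transform between a keyframe and the current frame, so accumulated world-frame VO drift does not enter the update directly. A minimal sketch, assuming poses as 4x4 homogeneous transforms (function names are illustrative, not the MSF API):

```python
import numpy as np

def relative_pose(T_w_keyframe, T_w_current):
    """Relative transform from the keyframe to the current frame.

    Both inputs are 4x4 homogeneous world-frame poses. Feeding this delta
    to the filter (instead of T_w_current) means the measurement only
    carries local motion, not the drifted absolute VO pose.
    """
    return np.linalg.inv(T_w_keyframe) @ T_w_current

# Toy example: keyframe at the origin, camera moved 1 m along x.
T_w_k = np.eye(4)
T_w_c = np.eye(4)
T_w_c[0, 3] = 1.0
delta = relative_pose(T_w_k, T_w_c)  # translation part is [1, 0, 0]
```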
Concerning scale estimation: adding additional measurements can help, e.g. by using wheel odometry during the init phase, or by using a depth sensor to get the scale directly into the VO (e.g. see Scaramuzza's paper on throw-and-go UAV systems). On Oct 27, 2015 12:14 PM, "MamoonNomaan" notifications@github.com wrote:
Hi! I am using this algorithm to implement an EKF on an unmanned ground vehicle with a 6DOF IMU and visual odometry from a front-facing monocular camera. Determining scale and compensating for drift in the vision data are two important parts of the task. The algorithm is mostly working, giving a correct orientation estimate and converging gyro biases. The problem I am facing right now is determining the correct scale for a power-on-and-go system: if my vehicle drives a circle in a room, starting from point A and returning to point A, I expect the position estimate from the fused IMU and visual odometry to end at point A, but I am getting the same drift as in the visual odometry alone.
Any input on how to increase accuracy and deal with the scale would be highly appreciated.
thanks in advance
— Reply to this email directly or view it on GitHub https://github.com/ethz-asl/ethzasl_msf/issues/126.
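The wheel-odometry initialisation suggested above can be sketched as a least-squares fit of the scale over an init window, comparing metric wheel-odometry displacements against the unscaled VO displacements at the same timestamps. This is a minimal sketch under those assumptions; the function name and formulation are my own, not part of ethzasl_msf:

```python
import numpy as np

def estimate_scale(vo_positions, wheel_positions):
    """Least-squares scale lambda such that metric ~= lambda * vo.

    vo_positions, wheel_positions: (N, 3) position arrays sampled at the
    same timestamps. Working on displacements (frame-to-frame diffs)
    rather than absolute positions makes the fit insensitive to a
    constant offset between the two trajectories.
    """
    d_vo = np.diff(np.asarray(vo_positions, dtype=float), axis=0)
    d_wh = np.diff(np.asarray(wheel_positions, dtype=float), axis=0)
    vo_norms = np.linalg.norm(d_vo, axis=1)
    wh_norms = np.linalg.norm(d_wh, axis=1)
    # Minimise sum((wh - lam * vo)^2) over displacement magnitudes.
    return np.sum(wh_norms * vo_norms) / np.sum(vo_norms ** 2)

# Toy example: wheel odometry drives 4 m along x; VO reports it at half scale.
wheel = np.array([[0, 0, 0], [2, 0, 0], [4, 0, 0]])
vo = wheel / 2.0
lam = estimate_scale(vo, wheel)  # recovers 2.0
```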
@omaris thank you for your valuable input.
Hi! Late follow-up on this topic; I have been busy with a few other aspects of my project. I have managed to get this algorithm working very efficiently on my unmanned ground vehicle in forward-looking mode. Using a depth sensor to calculate the initial visual scale helped a lot, but I believe there is still room for improvement in scale estimation. @omaris, you helped me with your valued input, thanks for that. Can you point me in a direction as to how I can improve this scale estimation? @simonlynen thanks for sharing this algorithm. Any input from your side will be highly appreciated. Thanks in advance.
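One direction for improving on a one-shot depth-based initialisation is to keep refining the scale online, treating it as a slowly drifting filter state and updating it whenever a metric displacement (from the depth sensor or wheel odometry) can be paired with a VO displacement. Below is a toy 1-state EKF sketch of that idea; it is illustrative only (MSF itself estimates the scale jointly with the full state, not in a separate filter), and all names and noise values here are assumptions:

```python
import numpy as np

class ScaleEKF:
    """Toy 1-state EKF refining the visual scale lambda online.

    State: lam. Process model: random walk (scale drifts slowly).
    Measurement model: z = lam * d_vo + noise, where d_vo is an unscaled
    VO displacement and z the matching metric displacement.
    q, r are assumed process/measurement noise variances.
    """
    def __init__(self, lam0=1.0, P0=1.0, q=1e-6, r=1e-2):
        self.lam, self.P, self.q, self.r = lam0, P0, q, r

    def update(self, d_vo, d_metric):
        self.P += self.q                      # predict: random-walk scale
        H = d_vo                              # z = lam * d_vo -> dz/dlam = d_vo
        S = H * self.P * H + self.r           # innovation covariance
        K = self.P * H / S                    # Kalman gain
        self.lam += K * (d_metric - self.lam * d_vo)
        self.P *= (1.0 - K * H)
        return self.lam

# Toy run: true scale is 2.0 (VO reports 0.5 m when the vehicle moved 1.0 m).
ekf = ScaleEKF()
for _ in range(50):
    lam = ekf.update(d_vo=0.5, d_metric=1.0)
# lam converges toward 2.0
```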