PX4 / PX4-Autopilot

PX4 Autopilot Software
https://px4.io
BSD 3-Clause "New" or "Revised" License

EKF2: GPS - vision fusion does not follow vision pose estimation #7408

Closed zalessio closed 6 years ago

zalessio commented 7 years ago

For a couple of months I have been using a drone with a Pixhawk and an onboard computer. Until now I have had very few problems. I was using LPE to fuse GPS and vision pose (this works really well), but I wanted to try EKF2.

I flashed the latest stable version, 1.6.2, onto my Pixhawk from QGroundControl.

I tried this configuration in an environment with a poor GPS signal, so even when the drone is not moving, the local state estimate drifts by several meters. What I want is that, whenever I have a pose from vision, the state estimate follows it instead of the GPS.

I did several experiments with the drone fixed in place, and the outcome was always a state estimate that follows the vision estimate in Z but the GPS in X and Y. If I disable the GPS in EKF2_AID_MASK then the state estimate follows the vision pose (as you can see in the first plot), but as soon as I re-enable the GPS the vision pose is not trusted anymore (second plot).

outside_gps_vision2

outside_gps_vision3

I am sure that the vision pose arrives and is consumed by the filter, so it sounds to me like a covariance problem, but I have tried a lot of setups with the same result.

Am I missing something? Are there parameters to set in order to make the vision data trusted more (as with EKF2_HGT_MODE for the Z coordinate)?

mhkabir commented 7 years ago

@priseborough's comments on this would be appreciated. I've noticed the exact same behaviour, which is why I stick with LPE for GPS/vision fusion use cases. It would be great to get EKF2 ready to handle GPS-denied and mixed scenarios properly.

priseborough commented 7 years ago

OK, I will have a look at it. I do not have the equipment here to do testing, and Gazebo time slips on my computer relative to the autopilot clock, making the EKF results invalid, so simulation is not an option.

The lack of lockstep scheduling has been raised as an issue before and closed: https://github.com/PX4/Firmware/issues/3279

priseborough commented 7 years ago

https://github.com/PX4/Firmware/issues/7408

priseborough commented 7 years ago

I wanted to clarify what you were trying to achieve before digging further. When you say you want to use the vision pose, do you mean the quaternion (the angular states) from the vision system, or the position?

Are you aware that the EKF2:

a) Will only use the yaw component of the vision pose (quaternion) data, and only when bit position 4 in the bitmask is true.

b) If bit position 3 is true, it will attempt to use vision data if GPS data is not present. It cannot use vision and GPS position data at the same time because they fight each other. If this use case is to be supported, then I will need to add a two-state Bayes filter or equivalent to estimate the offset of the vision system origin relative to the GPS origin, or change the selection logic to use vision system data by preference rather than GPS.

GPS and optical flow can be used together because the flow gives a velocity measurement, not an absolute position.
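A minimal sketch (not the actual ecl implementation; the bit values and constant names below are assumptions for illustration only) of how the aiding bitmask described in (a) and (b) above could be interpreted:

```cpp
#include <cstdint>

constexpr uint32_t MASK_USE_GPS    = 1u << 0;  // assumed: GPS aiding enabled
constexpr uint32_t MASK_USE_EV_POS = 1u << 2;  // assumed: "bit position 3", external vision position
constexpr uint32_t MASK_USE_EV_YAW = 1u << 3;  // assumed: "bit position 4", external vision yaw

// (b) vision position is only attempted when GPS data is not present
bool useVisionPosition(uint32_t aid_mask, bool gps_data_present)
{
    return (aid_mask & MASK_USE_EV_POS) && !gps_data_present;
}

// (a) only the yaw component of the vision quaternion is used, and only when the yaw bit is set
bool useVisionYaw(uint32_t aid_mask)
{
    return (aid_mask & MASK_USE_EV_YAW) != 0;
}
```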

LorenzMeier commented 7 years ago

@priseborough The right way to formulate it for most vision systems is to fuse the vision pose as a low-drift position (so it is not absolute, but more a kinematically correctly integrated velocity). Focusing on that would be a good first step.

zalessio commented 7 years ago

When I am talking about pose I am referring to a 6-DOF position + quaternion. But in my case I am using just the position; I do not use the quaternion component of the pose. What I am doing right now (both with LPE and EKF2) is: when I initialise my vision system I save the offset between it and the GPS coordinate system. Then the measurement that I send to the EKF is my vision pose transformed into the GPS coordinate frame.
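A minimal sketch (hypothetical types and names, assuming Eigen) of that offset-and-transform approach: the offset between the vision origin and the local/GPS frame is captured once at initialisation, and every subsequent vision position is re-expressed in the local frame before being sent to the estimator.

```cpp
#include <Eigen/Dense>

struct VisionToLocal {
    Eigen::Vector3d    t_offset;   // translation of the vision origin in the local frame
    Eigen::Quaterniond q_offset;   // rotation from the vision frame to the local frame

    // Called once, when the vision system is initialised and a local/GPS
    // estimate is available at the same instant.
    void initialise(const Eigen::Vector3d &p_local, const Eigen::Quaterniond &q_local,
                    const Eigen::Vector3d &p_vision, const Eigen::Quaterniond &q_vision)
    {
        q_offset = q_local * q_vision.inverse();
        t_offset = p_local - q_offset * p_vision;
    }

    // Transform every subsequent vision position into the local frame
    // before sending it to the estimator.
    Eigen::Vector3d toLocal(const Eigen::Vector3d &p_vision) const
    {
        return q_offset * p_vision + t_offset;
    }
};
```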

priseborough commented 7 years ago

@zalessio Is this an on-board position measurement system that accumulates error or an off-board system with an absolute earth frame reference like a VICON?

If it is on-board, are there loop closure jumps (eg when a SLAM system observes an old feature) to deal with?

zalessio commented 7 years ago

It is an on-board system ... At the moment I'm trying to use two different vision pose estimators.

mhkabir commented 7 years ago

@priseborough To give a general high-level overview of the vision support: we want to support a large percentage of use cases. Please let me know if anything is unclear or if you need more information:

General implementation ideas

Is this an on-board position measurement system that accumulates error or an off-board system with an absolute earth frame reference like a VICON?

  1. The estimates come from onboard visual odometry systems, which drift and are not absolute.
  2. VICON is a different use case which we would like to support as well, but it will need slightly different handling as it is drift-free and an absolute measurement, e.g. there is no need to co-estimate the transform between the PX4 local frame origin and the VICON origin. More on this below.

If it is on-board, are there loop closure jumps (eg when a SLAM system observes an old feature) to deal with?

For odometry-only systems, which we will want to support for 90% of the cases, no. Given where the state of the art in visual state estimation is heading, very low-drift odometry-only systems are much more popular than full SLAM systems. The computational demand is much lower and they can run on moderate ARM computers. Their long-term accuracy is also comparable to good graph-SLAM systems from a year or two back, thanks to techniques like using landmarks as "hard" anchor poses.

What I am doing right now (both with LPE and EKF2) is: when I initialise my vision system I save the offset between it and the GPS coordinate system. Then the measurement that I send to the EKF is my vision pose transformed into the GPS coordinate frame.

From my experience, this is fine for short-term use, but longer term you need to estimate the offset between the vision frame and PX4's local frame, because these drift relative to each other slowly over time. This is especially important when flying in mixed environments, where using a single GPS measurement (e.g. when powering on indoors with visual odometry alone and flying outdoors to a GPS-available environment) to compute the offset is not accurate. The offset needs to be estimated iteratively. The estimated offset should also be made available to offboard systems via MAVLink.

If this use case is to be supported, then I will need to add a two-state Bayes filter or equivalent to estimate the offset of the vision system origin relative to the GPS origin, or change the selection logic to use vision system data by preference rather than GPS.

This is exactly what we ended up doing after working on our VIO system for 1 year+, but it was implemented offboard.
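A minimal sketch (an illustration of the idea, not the filter proposed above or the offboard implementation mentioned here) of what an online co-estimate of the vision-to-local origin offset could look like, using a simple constant-gain filter per axis driven by the GPS-vs-vision residual:

```cpp
#include <Eigen/Dense>

class VisionOriginOffsetEstimator {
public:
    // alpha: small gain (e.g. 0.01) so the offset tracks only slow drift
    explicit VisionOriginOffsetEstimator(double alpha = 0.01) : _alpha(alpha) {}

    // p_local: position from the GPS-aided estimator, p_vision: raw vision position
    void update(const Eigen::Vector3d &p_local, const Eigen::Vector3d &p_vision)
    {
        const Eigen::Vector3d residual = p_local - (p_vision + _offset);
        _offset += _alpha * residual;     // slowly pull the offset towards the residual
    }

    // Vision position expressed in the local frame, using the current offset estimate
    Eigen::Vector3d correctedVision(const Eigen::Vector3d &p_vision) const
    {
        return p_vision + _offset;
    }

private:
    double _alpha;
    Eigen::Vector3d _offset{Eigen::Vector3d::Zero()};
};
```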

What I'd love to see is robust logic which can:

  1. Handle the case where the vision system loses track (say, due to lack of features) and re-initialises itself (to 0,0,0) when it can track again. This will cause the vision-PX4 origin offset to change completely, but when this offset is co-estimated online, we can still get very accurate state estimation performance in mixed scenarios as long as at least one sensing source still functions (flow, GPS or vision).
  2. Detect vision failure (high innovations) and signal it so that the vision system can re-initialise (see the sketch after this list). No state-of-the-art vision system is perfect, and this reset logic needs to exist for real-world implementations.
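A minimal sketch (hypothetical names and thresholds, not PX4 code) of the failure-detection and re-initialisation handshake described in points 1 and 2 above:

```cpp
#include <cmath>

struct VisionMonitor {
    double innov_gate = 5.0;     // normalised innovation gate (assumed value)
    int    max_bad    = 10;      // consecutive bad samples before declaring failure
    int    bad_count  = 0;

    // Returns true when the vision stream should be declared failed.
    bool checkInnovation(double innov, double innov_var)
    {
        const double ratio = (innov * innov) / innov_var;   // normalised innovation squared
        bad_count = (ratio > innov_gate * innov_gate) ? bad_count + 1 : 0;
        return bad_count >= max_bad;
    }
};

// Pseudocode for the handshake with the vision system:
//   if (monitor.checkInnovation(innov, innov_var)) {
//       signalVisionReinit();        // e.g. via MAVLink, so the vision system resets to (0,0,0)
//       resetVisionOriginOffset();   // the co-estimated origin offset must be re-learned
//   }
```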

Will only use the yaw component of the vision Pose (quaternion) data and only when bit position 4 in the bitmask is true.

This is sufficient for basic flying, but I think we should eventually think about fusing the whole quaternion to aid sensor bias estimation for the IMU. Topology-wise, the vision system is best placed to observe the sensor biases, vibration effects, etc.

The right way to formulate it for most vision systems is to fuse the vision pose as a low-drift position (so it is not absolute, but more a kinematically correctly integrated velocity).

+1

Available API

I implemented a nice API last year which gives the onboard estimator (EKF2 or LPE) access to a full state estimate from an offboard Visual-Inertial Odometry system like Rovio or DuoVIO : https://github.com/PX4/Firmware/pull/6074. See the pull request for more information.

  1. Position/Velocity/Acceleration are decoupled from Attitude and Attitude Rate estimates. (they use two different messages, but are perfectly timestamped and synchronised with the offboard clock, so the synced timestamps can be directly used by the EKF to apply the correction to the state at the correct time.)
  2. The API provides full access to vision covariances, so we can use those in conjunction with an EKF2 parameter defining the measurement noise floor.

Given this information, I think it should be easy to implement all of the bits we need for robust vision support.

priseborough commented 7 years ago

@mhkabir, thank you for your detailed response. I will have a look at https://github.com/PX4/Firmware/pull/6074 and see what ideas I come up with for integration. On the EKF3 used by ArduPilot, which shares the same derivation as our ecl EKF, I fused visual odometry from a ZED camera by using the delta position estimates from the camera system rotated into the body frame. I think I can improve on that by incorporating delta position data into the derivation rather than converting the frame-to-frame position changes to a velocity observation; however, it will require some more storage because another copy of the covariance matrix will be required.

priseborough commented 7 years ago

I have derived and tested, in the Matlab model, an update to the EKF2 derivation that is able to directly fuse visual odometry body-frame delta position data simultaneously with GPS. This involved adding 3 additional states for delta position. After each delta position measurement is fused, the delta position states and corresponding covariances are reset to zero.
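A minimal sketch (illustrative state index only, not the ecl code) of the reset step just described: after a body-frame delta position measurement is fused, the three delta position states and their covariance rows/columns are zeroed, so they accumulate only the motion since the last fusion.

```cpp
#include <Eigen/Dense>

constexpr int DP_FIRST = 16;  // assumed index of the first delta position state

void resetDeltaPositionStates(Eigen::VectorXd &state, Eigen::MatrixXd &P)
{
    for (int i = DP_FIRST; i < DP_FIRST + 3; ++i) {
        state(i) = 0.0;
        P.row(i).setZero();   // drop correlations with all other states
        P.col(i).setZero();   // the diagonal restarts from zero; process noise grows it again
    }
}
```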

https://github.com/PX4/ecl/tree/ekf_odometry-wip

This was tested using a reference data set collected on ArduPilot using a ZED camera + TX1 running OpenKai + PX4Flow + LidarLite + GPS. The new fusion method gives lower drift than the previous method, which converted the delta position measurements to equivalent velocities, but it does require an additional 3 states. However, by placing these between the delta velocity bias and magnetic field states, the effective number of states is reduced to 18 for most applications using odometry, because magnetic field and wind estimation are not required for indoor operation.

Here are the delta position innovations:

zed_camera_delta_position_fusion

Optical flow innovations: optical_flow_fusion

GPS position innovations: position_fusion

The data was gathered starting indoors, moving outdoors, and coming back indoors again.

I will now apply a variation of the technique to add fusion of delta position data in the earth frame, which will enable simultaneous use of absolute position data from external vision systems together with GPS. The delta position will be obtained from the difference between consecutive position measurements.
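A minimal sketch (an assumption about the approach just described, with hypothetical names) of forming an earth-frame delta position observation from consecutive external vision position samples:

```cpp
#include <Eigen/Dense>
#include <optional>

class EvDeltaPosition {
public:
    // Returns the earth-frame delta position between this sample and the
    // previous one, or nothing for the very first sample.
    std::optional<Eigen::Vector3d> update(const Eigen::Vector3d &p_ev)
    {
        std::optional<Eigen::Vector3d> delta;
        if (_prev) {
            delta = p_ev - *_prev;   // observes position change, insensitive to a constant origin offset
        }
        _prev = p_ev;
        return delta;
    }

private:
    std::optional<Eigen::Vector3d> _prev;
};
```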

mhkabir commented 7 years ago

@priseborough Looks amazing! In general, we should come up with a visual odometry simulation in Gazebo for PX4. It would accelerate testing greatly. The only thing which makes me hesitant is the Linux/ROS dependency that it pulls in.

priseborough commented 7 years ago

A bit more work: after adding compensation for the body-frame position of the camera and fixing a scaling bug in the calculation of the observation variance from the sensor quality metric, this is the estimated trajectory and the GPS, which was of poor quality for the indoor part of the test.

trajectory

Delta position innovations have improved: zed_camera_delta_position_fusion

Turning off all aiding except the ZED camera gives an average position drift rate across the whole test of about 0.5 m/minute, more when moving, less when stationary.

I'm happy enough with the results now to do a prototype in the ecl C++ code.

priseborough commented 7 years ago

I've created a branch here that allows for simultaneous use of GPS and external vision NED position data; however, to test it I need a log gathered with SDLOG_MODE = 2 and EKF2_REC_RPL = 1 so I can test it using replay.

https://github.com/PX4/ecl/tree/pr-ekfExtVis

priseborough commented 7 years ago

I've started on the C++ code changes here: https://github.com/PX4/ecl/tree/ekf_odometry-wip

The additional states required to fuse body frame delta position data have been added so it is now at the point where I need to write the interface to get the visual odometry data into the EKF.

@mhkabir I've looked at the existing uORB topics and can't find an existing message for body frame delta position (or attitude) data. Before I create one, let's coordinate on requirements.

mhkabir commented 7 years ago

@priseborough There isn't one, correct - but why would we want one? Every single odometry/SLAM system out there gives a pose estimate in its own global frame. Let's quickly sync on Slack.

Changliu52 commented 7 years ago

Great job, Paul! Maybe using the vision_speed_estimate topic (section 4.8 in http://wiki.ros.org/mavros_extras) is a straightforward option, and doing the integration internally in PX4?

Chang


mhkabir commented 7 years ago

No, using the vision velocity isn't a solution.

priseborough commented 7 years ago

The ZED camera solution I used on ArduPilot had the image processing for odometry operating frame-to-frame in the sensor frame and left the conversion to the navigation frame to the EKF. This avoids the issues associated with misalignment of the two frames.

This is a solution for odometry, not SLAM.

priseborough commented 7 years ago

The use of delta positions in body/sensor frame is demonstrated in the matlab replay results above.

priseborough commented 7 years ago

I've created a new branch https://github.com/PX4/ecl/tree/pr-ekfExtVisQuat for the changes required to accommodate angular misalignment of the external vision reference frame.

PX4BuildBot commented 6 years ago

Hey, this issue has been closed because the label status/STALE is set and there were no updates for 30 days. Feel free to reopen this issue if you deem it appropriate.

(This is an automated comment from GitMate.io.)

asimonov commented 6 years ago

What is the status of this? Was GPS + vision fusion fixed in any release since June 2017?

mhkabir commented 6 years ago

Yes it should be in v1.7

asimonov commented 6 years ago

Yes, thank you. GPS + vision (external) fusion works in 1.8, but transitioning between states where both or only one of GPS/vision is working does not seem to happen seamlessly, especially when one of the positioning systems is [re]gained.

Is this supposed to work seamlessly in EKF2 yet?

mhkabir commented 6 years ago

Yes, that is something we're working on right now, and it isn't perfect yet. Testing and suggestions would be helpful.

See this ecl branch for a small fix for vision reinitialization : https://github.com/PX4/ecl/tree/pr-extVisInit?files=1

asimonov commented 6 years ago

Oh, great! I suppose I can help with testing.

szebedy commented 6 years ago

Great job on this issue everyone.

@zalessio Quick question: how did you change EKF2_AID_MASK while flying? The documentation says a reboot is required. Did you use the mavros/param/set command, and did it work without a reboot?
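For reference, a minimal sketch (ROS C++, assuming the standard mavros parameter service; whether EKF2 honours the change without a reboot is exactly the open question here) of setting EKF2_AID_MASK at runtime:

```cpp
#include <ros/ros.h>
#include <mavros_msgs/ParamSet.h>

int main(int argc, char **argv)
{
    ros::init(argc, argv, "set_aid_mask");
    ros::NodeHandle nh;

    ros::ServiceClient client =
        nh.serviceClient<mavros_msgs::ParamSet>("/mavros/param/set");

    mavros_msgs::ParamSet srv;
    srv.request.param_id = "EKF2_AID_MASK";
    srv.request.value.integer = 8;   // example value only; see the EKF2_AID_MASK bit definitions
    srv.request.value.real = 0.0;

    if (client.call(srv) && srv.response.success) {
        ROS_INFO("EKF2_AID_MASK set");
    } else {
        ROS_WARN("Failed to set EKF2_AID_MASK");
    }
    return 0;
}
```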

asimonov commented 6 years ago

We did not change the mask. We were transitioning through a hybrid environment where we knew for sure that either vision or GPS would fail at certain points. The aim was to see if the EKF would recover that particular sensor stream after it came back.


Kind Regards, Alexey Simonov


szebedy commented 6 years ago

Thanks @asimonov for the quick reply. I think @zalessio did change the mask on the fly, see the first comment in this thread:

If I disable the GPS on the EKF2_AID_MASK then the state estimate follows the vision pose (as you can see in the first plot), but as soon as I re-enable the GPS then the vision pose is not trusted anymore (second plot). ... second -22 disable GPS in EKF2_AID_MASK ... second -90 enable GPS in EKF2_AID_MASK

NicoNelli commented 5 years ago

Hi everyone, I am using a drone equipped with a Pixhawk and an onboard computer. I am using LPE and I'd like to fuse GPS and vision data coming from an onboard vision system. My goal is to use GPS data and, during complex maneuvers, to add vision data to improve the state estimation. May I change the LPE_FUSION parameter in flight to implement such behaviour?

Regarding the addition of the vision data, I find out two mavros topics:

1) /mavros/vision_pose/pose

2) /mavros/odometry/odom

I guess the first one is for absolute measurements and the second one is from the vehicle's perspective; is that not true?

I wrote here because this topic seemed very similar to my questions. Thank you in advance.

TSC21 commented 5 years ago

@NicoNelli please open a new issue, as the above is not related to your case. Also, we are not actively supporting LPE, so I advise you to change to EKF2.

NicoNelli commented 5 years ago

@TSC21 thank you very much. I will switch to EKF2.