NorthStarUAS / insgnss_tools

INS/GNSS, EKF, sensor fusion toolbox with Python wrappers.
MIT License

Sampling frequency and recommended sensors #1

Open alrevuelta opened 2 months ago

alrevuelta commented 2 months ago

I'm using the nav_ekf15 filter for general aviation and was wondering:

Thanks for sharing!

clolsonus commented 2 months ago

Hi Alvaro, I saw your question, but I'm not in a place where I can spend time responding today. Hopefully in the next day or two I can send a better reply.

clolsonus commented 2 months ago

Hi Alvaro, I'm happy to discuss more details and help in any way (time permitting) to get you up to speed with this code. I usually run the code with 5 (or 10) Hz GPS and 50-100 Hz IMU. Previously I have run it from a simulator with pretty jittery sample rates by the time the sensor data got to the FMU hardware ... and that behaved pretty well. I have never run it with 10 Hz IMU data and 1 Hz GPS though. The math should all work the same, but it would affect the errors and confidence of your solution.
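To make the rates concrete, here's a rough sketch (in python, since the repo has python wrappers) of the dual-rate loop I'm describing. The `predict`/`correct` names are illustrative, not this repo's actual API:

```python
# Rough sketch (not this repo's API): the EKF predicts at the IMU rate and
# corrects whenever a fresh GPS solution arrives. 100 Hz IMU / 5 Hz GPS here.

IMU_HZ = 100
GPS_HZ = 5

def run(duration_s, predict, correct):
    """Call predict(dt) every IMU sample and correct() on each GPS fix."""
    n_predict = n_correct = 0
    imu_per_gps = IMU_HZ // GPS_HZ        # 20 IMU samples per GPS fix
    for i in range(int(duration_s * IMU_HZ)):
        predict(1.0 / IMU_HZ)             # time-propagate state/covariance
        n_predict += 1
        if i % imu_per_gps == 0:          # a new GPS fix is available
            correct()                     # measurement update
            n_correct += 1
    return n_predict, n_correct
```

With 1 Hz GPS and 10 Hz IMU the same loop structure applies, only the ratio changes (10 predicts per correction), which is why the math is unchanged but corrections happen less often and the covariance grows more between them.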

Most of my previous work with this code has been in the context of UAS flight controllers and sensors. So the ublox8 (or 9) has been wonderful and easily spits out 10 Hz solutions. For the IMU I have run with the MPU6000, MPU9250, and ICM20498 (#?) with good success. I have also run with simulated data, and in the early days ran with much more primitive sensors. It's more important to get units and axes/orientation correct than the specific sensor. I have a version of the code that was set up to build inside the ardupilot development environment ... but since then I have focused more on a stock arduino build environment targeting the impressive teensy4.0 (4.1) microcontroller board. I have also in the past wrapped the C++ code for python to make the algorithm available to python scripts. At one point we had a pure python/numpy port of the algorithm floating around, but it ran pretty slowly, so I focused on the C++ compiled version myself.

And yes, because this code is gps corrected, as long as there is sufficient change in the velocity vector (aka acceleration) then it will converge nicely to the true orientation angles. Flying endless circles can actually help the convergence. You have to be a little careful though ... there needs to be enough acceleration (change in the gps velocity vector) for the solution to converge and stay converged. I have had issues running the system in a car, for instance, on a long straight stretch of road. The yaw estimate can start to drift off truth.
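To make "enough change in the gps velocity vector" a little more concrete, here's a toy check; the threshold and names are made up purely for illustration:

```python
import math

def yaw_observable(gps_vel, thresh_mps=1.0):
    """Heuristic: yaw only becomes observable when the GPS velocity vector
    changes. Returns True if any sample in the window differs from the first
    by more than thresh_mps (illustrative threshold, not from the code)."""
    v0 = gps_vel[0]
    for v in gps_vel[1:]:
        dv = math.sqrt(sum((a - b) ** 2 for a, b in zip(v, v0)))
        if dv > thresh_mps:
            return True
    return False

# Constant velocity (long straight road): no yaw information.
straight = [(30.0, 0.0, 0.0)] * 10
# Circling: the velocity vector rotates, so yaw is observable.
turning = [(30.0 * math.cos(0.1 * k), 30.0 * math.sin(0.1 * k), 0.0)
           for k in range(10)]
```

In practice the filter doesn't branch on a flag like this, the information just flows in (or doesn't) through the correction step, but the condition is the same.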

I have experimented with a version that includes magnetometer data in the correction step, but this is a double-edged sword. If your mags aren't perfectly calibrated (and they never are) then they will always be pulling your solution away from truth ... but if they are pretty well calibrated they can keep you from drifting too much.
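To see why an uncalibrated mag pulls the solution away from truth, here is a toy level-platform heading computation with a made-up hard-iron offset (numbers and axis convention are illustrative only):

```python
import math

def level_heading_deg(mx, my):
    """Heading from horizontal magnetometer components on a level platform.
    Axis/sign convention here is illustrative; check your sensor frame."""
    return math.degrees(math.atan2(my, mx)) % 360.0

TRUE_FIELD = (0.2, 0.0)    # gauss, pointing magnetic north (made-up values)
HARD_IRON = (0.0, 0.05)    # uncorrected hard-iron offset (made-up values)

true_hdg = level_heading_deg(*TRUE_FIELD)
biased_hdg = level_heading_deg(TRUE_FIELD[0] + HARD_IRON[0],
                               TRUE_FIELD[1] + HARD_IRON[1])
# A modest offset relative to the field strength already skews the heading
# by double-digit degrees, and the filter will keep trusting that bad heading.
```

That constant pull is the "always pulling your solution away from truth" part; a well-calibrated mag shrinks HARD_IRON toward zero and becomes a net positive.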

There are other ekf codes out there (like those used by ardupilot and px4) that are fairly sophisticated, but (and this is just a personal opinion) I feel like they have gotten quite complicated and use-case specific and include a number of conditionals and extra stuff that, to me, can just get in the way and cause trouble. So for my own work I've stuck with this simpler system.

I am not the original author/designer of this algorithm. That is Prof Demoz Gebre from the U of MN (Twin Cities). Periodically I've gone through and done code level cleanups. We (as in not just me) ported the algorithm from matlab to C to run in real time. Then we ported to C++ to take advantage of the Eigen matrix library. And along the way we've also done additional peephole-level optimizations and cleanups and renaming ... but nothing that changed the fundamental design of the algorithm.

Hopefully that wasn't too much of a brain dump for a single message. My U of MN funding went kaput a few years ago, so now I am working full time for Cirrus Aircraft in Duluth, MN. That leaves me a lot less time to work on these side interests, but I've been trying to freshen up the project code in places this summer with [moderate] hopes to get an airplane back up and flying by the end of 2024. I'm sure I could talk your ear off on this stuff for hours, so feel free to ask follow up questions and I'm happy to type more stuff. :-)

alrevuelta commented 2 months ago

Thanks for such a detailed reply @clolsonus! That really clarified many things.

> You have to be a little careful though ... there needs to be enough acceleration (change in the gps velocity vector) for the solution to converge and stay converged. I have had issues running the system in a car, for instance, on a long straight stretch of road. The yaw estimate can start to drift off truth.

> I am not the original author/designer of this algorithm. That is Prof Demoz Gebre from the U of MN (Twin Cities).

Out of curiosity, how different is this one from uNavINS? I can see that other one uses a magnetometer, but the states it's estimating seem to be the same, and the approach looks similar.

> There are other ekf codes out there (like those used by ardupilot and px4)

I had a look into these codebases, but couldn't find an easy way to hoist the sensor fusion code out. All the logic seemed too convoluted to extract this isolated part. Do you by any chance have any experience with this?

cc'ing Andrew @AndrewForber and David @datenen, who are also interested in this.

clolsonus commented 2 months ago

> Thanks for such a detailed reply @clolsonus! That really clarified many things.

> * If it can happen in a car, I guess it will also happen in an aircraft flying straight and level?

First, I am not a world expert in this stuff! I have used this code a lot over the years, so I have some familiarity with it and how it behaves in real-world scenarios; I am just sharing what I know (or think I know). :-)

So yes, potentially on a long straight segment in an airplane you could experience drift. This was never an issue with my UAS work because we were always turning back to stay in the area and within line of sight. A long straight flight in smooth air could probably start to see issues, especially in yaw.

> * What does "long" mean? Are we talking about minutes? Hours?

This can depend a lot on sensor quality, but even with the best mems sensors drift is a pretty big issue (less so with gyros, but accels tend to be really noisy in comparison.) It also depends on how well the filter has converged before you enter a long straight stretch. A poorly converged filter could lead to rapid drift. Either way, you might start to see some yaw error show up on the order of minutes and it could take a similar amount of maneuvering to get the filter to reconverge.
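As a back-of-the-envelope illustration of why "minutes" is the right order of magnitude (the bias number is made up, not a spec for any particular sensor):

```python
# A constant uncorrected gyro bias integrates directly into heading error.
# 0.01 deg/s is an illustrative residual bias for a calibrated MEMS gyro.

def yaw_drift_deg(bias_dps, seconds):
    """Open-loop yaw error accumulated from a constant rate bias."""
    return bias_dps * seconds

drift_5min = yaw_drift_deg(0.01, 5 * 60)   # a few degrees after 5 minutes
```

With GPS corrections active and maneuvering present, the filter estimates that bias out; on a long straight stretch the correction carries no yaw information, so the error grows open-loop like this.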

The core issue is that in a steady state we can observe roll and pitch (think tilt sensor) but there is no way to observe yaw. We can only observe yaw over multiple samples and by virtually picking the best yaw value that makes all the sensor data line up over time. But even so, in a steady state (i.e. moving in a straight line at constant velocity) yaw is still not observable over time. It only becomes observable when the velocity vector changes direction or magnitude. And then we need sufficient change in the gps velocity vector to distinguish from the sensing noise.
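The tilt-sensor intuition in code (illustrative math, not this repo's implementation):

```python
import math

def tilt_from_accel(ax, ay, az):
    """Roll and pitch (radians) from a static accelerometer reading.
    Gravity pins down two angles, but rotating about the gravity vector
    (i.e. yaw) leaves ax, ay, az unchanged, so yaw is unobservable from
    this measurement alone. Axis convention here is illustrative."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

# Level and at rest, the accelerometer reads ~(0, 0, +g) in this frame,
# and any yaw angle produces exactly the same reading.
level = tilt_from_accel(0.0, 0.0, 9.81)
```

Note there is no third equation to solve for yaw; that missing equation only shows up once the gps velocity vector starts changing.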

I have not studied kalman filters formally so I could be wildly wrong on this, but I think of a kalman filter (EKF) as a real time optimizer ... it produces much the same result (and is subject to the same sensor errors) as if you collected a segment of data and integrated the gyros/accels forward ... and then used an optimizer to pick the best starting roll, pitch, yaw values to make the integrated/estimated solution line up the best with the gps vel/pos.
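Here's a toy brute-force version of that optimizer picture (entirely illustrative; the real EKF reaches a similar answer recursively rather than by grid search):

```python
# Integrate gyro yaw rates forward from each candidate initial yaw and keep
# the candidate whose heading history best matches the GPS-derived track.

DT = 0.1
yaw_rates = [0.2] * 50                 # rad/s: a steady turn (made-up data)
true_yaw0 = 0.7                        # the value we hope to recover

def integrate_yaw(yaw0):
    """Heading at each step implied by an initial yaw plus gyro integration."""
    yaw, track = yaw0, []
    for r in yaw_rates:
        yaw += r * DT
        track.append(yaw)
    return track

gps_headings = integrate_yaw(true_yaw0)    # stands in for the GPS track

def cost(yaw0):
    """Mismatch between the dead-reckoned track and the GPS track."""
    return sum((a - b) ** 2 for a, b in zip(integrate_yaw(yaw0), gps_headings))

# "Optimizer": scan candidate initial yaws and pick the best fit.
best = min((c / 100.0 for c in range(315)), key=cost)
```

The same picture also explains the observability problem: with `yaw_rates` all zero and only velocity magnitudes to match, many initial yaws fit equally well, so the "optimizer" has nothing to pick between.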

> * You mention the yaw drifted, but can this also happen for pitch and roll if straight and level?

Experimentally: in steady state motion, roll and pitch are observable, but yaw is not. However, if you fly straight and level for a long time and build up a big yaw drift, then when you turn, the kalman filter tries its best to fit what it is estimating to the gps, and the resulting errors could be distributed across all the estimated variables (like an optimizer might put the errors into different buckets based on subtle input differences). So you could then see some pitch/roll errors when you do maneuver after error builds up in a long straight stretch.

> Out of curiosity, how different is this one from uNavINS? I can see that other one uses a magnetometer, but the states it's estimating seem to be the same, and the approach looks similar.

Yes, I'm not familiar with FlyTheThings on github, but it appears to be a variation derived from the same root code. I used to work with Brian Taylor at the University of Minnesota UAV lab before he spun off his own company (Bolder Flight Systems).

And if you are looking at doing any of this commercially, it might be very productive for you to go talk to him directly.

> I had a look into these codebases, but couldn't find an easy way to hoist the sensor fusion code out. All the logic seemed too convoluted to extract this isolated part. Do you by any chance have any experience with this?

That is my sense too ... those codes have gotten pretty complicated with all kinds of built-in modes and special-case handling, and personally I wouldn't even try to understand them unless I was backed into a corner and had no other option. That said, when I've compared our EKF algorithm against theirs, they end up in the same ballpark in terms of accuracy and errors. They integrate the magnetometer, so you'll see the horizon slowly drift off by a few degrees after you land.