Closed mikehamer closed 3 years ago
@mgreiff, perhaps you also have some ideas?
It is difficult to assess just how much the above-mentioned issues affect performance, especially since my access to high-performance mocap systems is rather limited...! I think the problem is inherent to, and almost unavoidable in, the scalar update algorithm, as it is known to potentially diverge when including many scalar updates, and I have never seen a good remedy for it. Perhaps one idea could be to scale the covariance matrix in relation to its trace in between the measurement updates, similar to the constant-trace RLS [1]?
Another alternative would be to consider an EKF matrix update instead, to avoid changes in the state/covariance in between the scalar updates. However, I'm not sure if it would be computationally feasible to do so - but it need not be too difficult to test. I could try to hack together the matrix update and test it this week.
@mikehamer @ataffanel I've implemented a matrix update in the Kalman filter as opposed to the scalar update, and to me it seems like it might just be feasible (at least from a computational standpoint) and it avoids the issues inherent to the scalar update discussed above.
When running with just the IMU and ranging measurements in the z-direction, I get reasonable results when using the matrix update, but I have not yet tried it with TOA/TDOA. On line 236, the rate and maximum number of measurement equation entries can be set. So far, I've done a series of tests using the materials at hand - two notable ones include:
- Laser ranging: MEASUREMENT_DIM = 1 at 50 Hz gives good estimates in the Z-state while the XY-states naturally drift, suggesting that the implementation is likely correct.
- Computational feasibility: MEASUREMENT_DIM = 6 at 20 Hz runs for a little while but diverges quickly, failing on the matrix inversion assertion on line 298 and not due to exceeding the designated time slot.
The implementation is a bit hacky (see line 478 and line 785), and I think and hope that everything is entered correctly. However, I am limited in testing the implementation as I don't have a good mocap setup in which the scalar and matrix updates can be compared. An interesting test would be to run them both at their respective maximum rates and check the RMS error compared to ground truth.
What do you think? Feel free to clone and edit the code as you please.
It would be nice to implement the matrix update in the master branch (maybe as optional for now?) and test it with the LPS and the VL53 laser ranging. @mgreiff did you make any measurements of how long the matrix update really takes? Maybe this is the next step to understand how we can merge it?
I agree that a matrix update might be nice. We should profile the whole control/estimation loop (including newer, more complex controllers) and determine if there is enough time for this addition.
The need for profiling comes up more and more lately - should we think about having a standard, easy-to-use profiling framework? I guess what is mostly needed is runtime. We could have a generic way to calculate the mean and stddev of the runtime of a piece of code.
I have added a button in the console tab in the client to dump information about the running tasks (bitcraze/crazyflie-clients-python#278). That could maybe be useful?
A profiling framework would be useful :+1:
I created a separate issue for profiling, #214
In our old Crazyswarm fork, we implemented a full matrix update for a (position, velocity, attitude) measurement from Vicon. It worked, but there's not much CPU headroom left. (We used a JTAG debugger and instrumented the code using usec_time to measure CPU utilization.) However, the code is missing some easy optimizations, and we didn't use the ARM matrix library.
In our implementation, computing the dynamics matrix and updating the process covariance matrix takes roughly the same amount of time as the correction step.
what is the status here @ataffanel @knmcguire ?
@estromb picked up on a subtle issue in the Kalman filter which affects the processing of queued measurements. I quote his email to me:
His point is a good one, but it applies not only in the case of outliers, and it affects the covariance as well, not only the state.
One issue is that the measurement function (the matrix `h` that is filled out in the stateEstimatorUpdateWith... functions) is not necessarily constant and could depend on the state, so if the state is updated between scalar updates, it will couple measurements and make the final estimate order-dependent. Furthermore, every time a measurement is applied, the covariance decreases (and therefore the Kalman gain of subsequent measurements), meaning that older measurements have more effect on the state update than newer measurements (since measurements are processed FIFO from the queue after the covariance was increased in the prediction step).
The question is how much influence this actually has (probably something we should profile). This most likely depends largely on how many measurements are enqueued. A quick review of some books on state estimation notes that the sequential Kalman filter formulation (using scalar updates to reduce computation) is not so great in the case of an Extended Kalman Filter (due exactly to the aforementioned reasons), but that it is usually good enough.
What are your thoughts, @ataffanel, @estromb, @stephanbro?