bamfpga opened 4 years ago
Cholesky decomposition is faster than inverting a matrix with Gauss-Jordan elimination in most cases. There are several ways to decompose a matrix, as well as several ways to compute the inverse. In general, computing an explicit inverse is slow; solving the linear system directly is preferred.
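As a sketch of that point, here is one way to compute a Kalman gain without ever forming S^-1, using SciPy's Cholesky routines. The sizes and matrix names here are illustrative assumptions, not taken from the repo:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Illustrative sizes for a measurement update (assumed, not from the repo).
rng = np.random.default_rng(0)
m = 5
A = rng.standard_normal((m, m))
S = A @ A.T + m * np.eye(m)        # symmetric positive definite, like H P H^T + R
PHt = rng.standard_normal((m, m))  # stand-in for P H^T

# Gain via an explicit inverse (what Gauss-Jordan inversion would give):
K_inv = PHt @ np.linalg.inv(S)

# Gain via Cholesky: since S is symmetric, K = P H^T S^-1 means
# S K^T = (P H^T)^T, so factor S once and solve for K^T.
c, low = cho_factor(S)
K_chol = cho_solve((c, low), PHt.T).T

assert np.allclose(K_inv, K_chol)
```

The Cholesky route does one factorization (about n^3/3 flops) plus triangular solves, versus roughly n^3 flops for a full Gauss-Jordan inversion, and it is numerically better behaved for positive definite S.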
From what I can see, the covariance update treats the identity terms in the Joseph form as negligible. That reduces the equation to: P = KHPH^TK^T + KRK^T = K(HPH^T + R)K^T = KSK^T
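The factoring step in that line can be checked numerically. This is a minimal sketch with randomly generated matrices; the dimensions and variable names are my own assumptions, not taken from the repo:

```python
import numpy as np

# Illustrative dimensions: n states, m measurements (assumed, not from the repo).
rng = np.random.default_rng(1)
n, m = 4, 2
A = rng.standard_normal((n, n))
P = A @ A.T + n * np.eye(n)      # symmetric positive definite state covariance
H = rng.standard_normal((m, n))  # measurement matrix
B = rng.standard_normal((m, m))
R = B @ B.T + m * np.eye(m)      # symmetric positive definite noise covariance

S = H @ P @ H.T + R              # innovation covariance
K = P @ H.T @ np.linalg.inv(S)   # Kalman gain

# All three expressions from the comment above are the same matrix:
expanded = K @ H @ P @ H.T @ K.T + K @ R @ K.T
factored = K @ (H @ P @ H.T + R) @ K.T
compact = K @ S @ K.T
assert np.allclose(expanded, factored) and np.allclose(factored, compact)
```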
This actually has a name. It took a while to find in the literature, but it is commonly referred to as the Symmetric form.
Thanks for the response!
I have found this one helpful for resolving this query.
Good find. This is a better explanation of what is going on. I'll add that this proof is only valid once we assume the covariance matrix P is symmetric; otherwise (KHP^T)^T is not equal to KHP. Hence the name Symmetric form.
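For reference, a short sketch of where the symmetry assumption enters, using the standard gain K = PH^TS^-1 with S = HPH^T + R (both S and P assumed symmetric):

```latex
\begin{aligned}
K S K^{\top} &= K S \,(P H^{\top} S^{-1})^{\top} \\
             &= K S S^{-\top} H P^{\top} \\
             &= K H P^{\top} && \text{(since } S = S^{\top}\text{)} \\
             &= K H P        && \text{(only if } P = P^{\top}\text{)}
\end{aligned}
```

The last step is exactly the one that fails when P is not symmetric, which is the point made above.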
Hi, very nice implementation. I've been analysing the code for the Kalman filter linear algebra equations, and I understood all of them except the last one, the covariance update, which does not seem to match any of the forms presented at https://en.wikipedia.org/wiki/Kalman_filter, neither the classic nor the Joseph form. How was this equation derived?
Another question: is solving a linear system using Cholesky decomposition cheaper than performing a traditional matrix inversion using, for example, the Gauss-Jordan method? Thanks in advance!