Closed mnissov closed 1 year ago
Hello @mnissov Morten
Regarding our Jacobian: it is correct according to the definition of the right Jacobian. The proof follows from the chain rule and the formulas in the paper, and is supported by extensive unit testing in manif (which tests ALL Jacobians for exactness against small-perturbation approximations similar to the ones you use above):
D(X.inv * v) / DX =
= D(X.inv * v) / D(X.inv) * D(X.inv)/DX
= -X.tr * v_x * (-Ad_X)
= X.tr * v_x * X
= [X.tr * v]_x
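The chain-rule result above can be checked numerically: for a random X in SO(3) and v in R^3, a finite-difference right Jacobian of f(X) = X.inv * v should match [X.tr * v]_x. A minimal sketch using numpy and scipy (not manif's own test code):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def skew(u):
    """Hat operator: maps u in R^3 to a 3x3 skew-symmetric matrix."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

rng = np.random.default_rng(0)
X = Rotation.from_rotvec(rng.standard_normal(3)).as_matrix()
v = rng.standard_normal(3)

J_analytic = skew(X.T @ v)  # [X.tr * v]_x from the chain rule above

# Finite-difference right Jacobian: perturb X locally, X * Exp(tau)
eps = 1e-6
J_numeric = np.zeros((3, 3))
for i in range(3):
    tau = np.zeros(3)
    tau[i] = eps
    X_pert = X @ Rotation.from_rotvec(tau).as_matrix()
    J_numeric[:, i] = (X_pert.T @ v - X.T @ v) / eps

print(np.max(np.abs(J_analytic - J_numeric)))  # small, O(eps)
```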
This Jacobian needs to be interpreted as follows: when X is perturbed locally with tau, the action X.tr * v also gets perturbed. The Jacobian is the limit of the quotient of the perturbations as tau goes to zero. The important point here is that tau is defined in the tangent space local to X.
The second Jacobian that you present is probably different, although you do not provide details on how \psi participates in C. If you happen to use psi = log(C1 * C2.inv), then \psi is a vector tangent to SO(3) at the identity, and not local to C. In that case, you get what we call the left Jacobian, and I reckon this is the reason you observe the difference. If so, the two Jacobians are genuinely different.
Regarding your test, you should test the first one using right-plus and regular minus
e = ((X (+) tau).inv * v) - (X.inv * v + J_1*tau) (1)
and the second one using left-plus and regular minus
e = ((tau (+) X).inv * v) - (X.inv * v + J_2*tau) (2)
Since you are only evaluating with (1), you should find that our Jacobian performs well and the other one does not. However, with random X it may occasionally happen that X is close to the identity, in which case both Jacobians are practically the same; the second Jacobian may then appear to perform better than the first by pure chance. The first Jacobian should, however, perform well in all cases under test (1).
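Tests (1) and (2) can be sketched in a few lines of numpy (a hedged sketch, not the thread's actual script): the right Jacobian J1 = [X.tr * v]_x should cancel the first-order error under a right-plus perturbation, and the left Jacobian J2 = X.tr * [v]_x under a left-plus one.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def skew(u):
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def Exp(tau):
    """SO(3) exponential map, via scipy's rotation-vector constructor."""
    return Rotation.from_rotvec(tau).as_matrix()

rng = np.random.default_rng(42)
X = Exp(rng.standard_normal(3))
v = rng.standard_normal(3)
tau = 1e-4 * rng.standard_normal(3)  # small perturbation

J1 = skew(X.T @ v)   # right Jacobian: [X.inv * v]_x
J2 = X.T @ skew(v)   # left Jacobian:  X.inv * [v]_x

# Test (1): right-plus, e = ((X (+) tau).inv * v) - (X.inv * v + J1*tau)
e1 = np.linalg.norm((X @ Exp(tau)).T @ v - (X.T @ v + J1 @ tau))
# Test (2): left-plus, e = ((tau (+) X).inv * v) - (X.inv * v + J2*tau)
e2 = np.linalg.norm((Exp(tau) @ X).T @ v - (X.T @ v + J2 @ tau))
print(e1, e2)  # both second order in |tau|, i.e. tiny
```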
Does this make sense?
Regarding your test, you should test the first one using right-plus and regular minus and the second one using left-plus and regular minus
I realize now I wasn't consistent between text and code, in that I introduce the Lie-theory-derived Jacobian first but assign it to the function `jacobian2`. As a result, just to be sure: when you mention "first one" and "second one" here, you're referring to

* "first one": the Jacobian derived via Lie theory, i.e. the first equation in the text above: $(\mathcal{X}^{-1}\cdot v)^{\wedge}$
* "second one": the Jacobian inspired by Groves' textbook, i.e. the second equation in the text above: $-\mathcal{X}^{-1}\cdot v^{\wedge}$
The second Jacobian that you present is probably different, although you do not provide details on how \psi participates in C.
I went back to the book to find this and I think you may be right. Groves defines the attitude error as
$$ \begin{aligned} \delta C_\beta^\alpha &= \hat{C}_\beta^\alpha C_\alpha^\beta \\ &= I_3 + \left[ \delta \psi_{\alpha\beta}^{\alpha} \times \right] \end{aligned} $$
for the error $\delta C_{\beta}^{\alpha}$, estimate $\hat{C}_{\beta}^{\alpha}$, and true value $C_{\beta}^{\alpha}$. Rearranging this for the perturbed estimate results in something more familiar:
$$ \hat{C}_{\beta}^{\alpha} = \left( I_3 + \left[ \delta\psi_{\alpha\beta}^{\alpha} \times \right] \right) C_{\beta}^{\alpha} $$
so you're right; this corresponds to a global perturbation rather than a local one, I suppose. The Jacobian should then be
$$ \begin{aligned} J &= \lim_{\tau\rightarrow 0} \frac{(\tau \oplus R)^{-1} \cdot v - R^{\top}\cdot v}{\tau} \\ &= \lim_{\tau\rightarrow 0} \frac{\left( R^\top e^{-\tau^\wedge} \right) \cdot v - R^{\top}\cdot v}{\tau} \\ &= \lim_{\tau\rightarrow 0} \frac{R^\top \left( I_3 - \tau^\wedge \right) \cdot v - R^{\top}\cdot v}{\tau} \\ &= R^\top v^\wedge \end{aligned} $$
In hindsight, I think I made a typo in transcribing the Jacobian: his error function is written $e = \text{meas} - h(x)$, so I think that's where the minus comes from. This would be convenient, because then the plots make a lot of sense.
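That sign flip can be checked numerically: linearizing the residual e = meas − h(x) under a global (left) perturbation should give exactly minus the left Jacobian derived above, matching Groves' $-C_w^b (v^w)^\wedge$. A small sketch (variable names illustrative, assuming these conventions):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def skew(u):
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

rng = np.random.default_rng(3)
R = Rotation.from_rotvec(rng.standard_normal(3)).as_matrix()
v = rng.standard_normal(3)
z = R.T @ v                 # pretend the measurement equals h(R) = R^T v

J_left = R.T @ skew(v)      # result of the derivation above: R^T v^

# Finite-difference Jacobian of e(dpsi) = z - h(Exp(dpsi) * R)
eps = 1e-6
J_e = np.zeros((3, 3))
for i in range(3):
    d = np.zeros(3)
    d[i] = eps
    R_pert = Rotation.from_rotvec(d).as_matrix() @ R  # global perturbation
    J_e[:, i] = ((z - R_pert.T @ v) - (z - R.T @ v)) / eps

# J_e should be -J_left, i.e. the minus sign in Groves' Eq. (16.69)
print(np.max(np.abs(J_e + J_left)))  # small, O(eps)
```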
Note I tweaked the plot a bit to run N simulations of length L. Otherwise it's the same:
I realize now I wasn't consistent between text and code, in that I introduce the Lie-theory-derived Jacobian first but assign it to the function `jacobian2`. As a result, just to be sure: when you mention "first one" and "second one" here, you're referring to

* "first one": the Jacobian derived via Lie theory, i.e. the first equation in the text above: $(\mathcal{X}^{-1}\cdot v)^{\wedge}$
* "second one": the Jacobian inspired by Groves' textbook, i.e. the second equation in the text above: $-\mathcal{X}^{-1}\cdot v^{\wedge}$
Correct, first one is Lie, second one is Groves
So manif uses right-Jacobians, therefore local perturbations, and Groves uses left-Jacobians, therefore global perturbations.
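The two conventions are related through the adjoint: on SO(3), Exp(tau) * X = X * Exp(X^T tau), so the left Jacobian equals the right Jacobian composed with Ad_{X^{-1}} = X^T. A quick numpy check of that identity for the inverse action (a sketch, not manif code):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def skew(u):
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

rng = np.random.default_rng(7)
X = Rotation.from_rotvec(rng.standard_normal(3)).as_matrix()
v = rng.standard_normal(3)

J_right = skew(X.T @ v)  # manif's (local/right) Jacobian
J_left = X.T @ skew(v)   # global/left Jacobian (Groves', up to the sign convention)

# J_left = J_right @ Ad_{X^{-1}} = J_right @ X^T, exactly
print(np.max(np.abs(J_left - J_right @ X.T)))
```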
It seems then it all fits perfectly!
yes! thanks so much for the help
Maybe this is a little out of scope for this platform, if yes I understand.
The basic problem is understanding which of several derivations of what amounts to the inverse action of SO(3) is more correct.
Looking at the paper and cheatsheet one would conclude that
$$ J_{\mathcal{X}}^{\mathcal{X}^{-1} \cdot v} = (\mathcal{X}^{-1}\cdot v)^{\wedge} $$
for $\mathcal{X}\in SO(3)$ and $v\in \mathbb{R}^{3}$.
However, this is not always the result used/found by other sources with a similar equation. Looking at an alternative source, e.g. the GNSS/INS textbook by Paul Groves: in chapter 16 he discusses Doppler-aided INS systems, which inevitably involve a similar inverse action. He derives the Jacobian of the measurement function, in equation 16.69, to be
$$ \frac{\partial \left( C_{w}^{b} v^{w} \right)}{\partial \delta \psi_{b}^{w}} = -C_{w}^{b} (v^{w})^{\wedge} $$
Note I've used his notation here and simplified the equation a bit. Here $C_{w}^{b}$ is the rotation from {w} to {b}, $v^{w}$ is the {w}-frame velocity, and $\delta \psi_{b}^{w}$ is the error in the orientation of {b} in {w}, since this is an error-state formulation. This is the inverse action because the rotation that directly corresponds to $\delta \psi_{b}^{w}$ should be $C_{b}^{w}$, and we're using its transpose here.
I also tried to quantify the difference between these two numerically, using a python script to perturb a rotation and calculate the error by
$$ e = \lVert \underbrace{\left( \mathcal{X} \oplus \tau \right)^{-1}\cdot v}_{\text{true}} - \underbrace{\left( \mathcal{X}^{-1}\cdot v + J \tau \right)}_{\text{approximate}} \rVert_2 $$
where $\mathcal{X}\in SO(3)$ and $v, \tau \in \mathbb{R}^3$ are random and $J$ is each of the two aforementioned Jacobians. Note, I scale the perturbation by a factor $k\in [0, 1)$ to watch the error of this first-order approximation grow.
What is strange is that quite often the Lie-theory-derived Jacobian performs much better, and sometimes not, depending on the specific simulation. This behavior I don't quite understand.
Code for the python analysis
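The script itself isn't reproduced above; a minimal sketch of the experiment it describes (random $\mathcal{X}$, $v$, $\tau$, perturbation scaled by $k$, right-plus used for the "true" value; function names are illustrative) might look like:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def skew(u):
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def jacobian_lie(X, v):
    """First equation above: (X^{-1} v)^."""
    return skew(X.T @ v)

def jacobian_groves(X, v):
    """Second equation above: -X^{-1} v^."""
    return -X.T @ skew(v)

rng = np.random.default_rng(1)
X = Rotation.from_rotvec(rng.standard_normal(3)).as_matrix()
v = rng.standard_normal(3)
tau = rng.standard_normal(3)

errors = {"lie": [], "groves": []}
for k in np.linspace(0.01, 0.99, 5):
    t = k * tau
    # "true" value: right-plus perturbation, (X (+) t)^{-1} * v
    truth = (X @ Rotation.from_rotvec(t).as_matrix()).T @ v
    for name, J in (("lie", jacobian_lie(X, v)), ("groves", jacobian_groves(X, v))):
        errors[name].append(np.linalg.norm(truth - (X.T @ v + J @ t)))

print(errors)
```

Under this right-plus ground truth, the Lie-derived Jacobian's error grows quadratically in $k$ while the other grows linearly, which is what the thread's discussion of tests (1) and (2) predicts.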