Hi @narutojxl,
Thanks for your help, @koide3!
Could you please give some advice on how you calculate the Jacobian? Thanks very much for your help!
Calculating the Jacobian of $(B + RAR^T)^{-1}$ is very complicated. I did it (you can find the code at the links below), but it was very slow and impractical.
In practice, we approximate $RAR^T$ as a constant matrix during each optimization iteration. Then, $dr/dR$ can simply be given by $(B + RAR^T)^{-1} \cdot dRa/dR$. This approximation doesn't affect the accuracy while keeping the derivatives simple and fast.
https://github.com/SMRT-AIST/fast_gicp/blob/87cd6288d14bd155e8b7a2144f68bb5246aecc52/include/fast_gicp/gicp/gicp_loss.hpp
https://github.com/SMRT-AIST/fast_gicp/blob/87cd6288d14bd155e8b7a2144f68bb5246aecc52/include/fast_gicp/gicp/gicp_derivatives.hpp
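To spell that approximation out (my own paraphrase of the comment above, not a quote from the paper; $a$ and $b$ are a matched source/target point pair, $A$ and $B$ their covariances, and $d = b - (Ra + t)$ the point-to-point error):

$$
r(R, t) = \underbrace{\bigl(B + R A R^\top\bigr)^{-1}}_{M,\ \text{frozen within one iteration}} \bigl(b - (Ra + t)\bigr),
\qquad
\frac{\partial r}{\partial R} \approx M \, \frac{\partial (Ra)}{\partial R}\ \text{(up to sign convention)},
$$

i.e. the derivative of $M$ itself with respect to $R$ is simply dropped.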
Thanks very much, @koide3 :)
BTW, shouldn't $dRa/dR$ be $\mathrm{skew}(Ra)$ according to the left perturbation formula? I see in the code it is $\mathrm{skew}(Ra + t)$:
`Js[count].block<3, 3>(0, 0) = RCR_inv.block<3, 3>(0, 0) * skew(transed_mean_A.head<3>());`
It's a trick to calculate the Jacobian of the expmap. While the Jacobian of the expmap around $r = 0$ is simply given by the skew-symmetric function, the Jacobian at an arbitrary point is not easy to obtain. To avoid the complicated calculation, we evaluate the Jacobian at $r = 0$ with the transformed point ($p = Ra + t$) instead of evaluating it at $r = R$ with the original point $a$.
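In equation form (my own paraphrase of the trick above, reusing $a$, $R$, $t$ from this thread): apply a small left perturbation $\exp(\hat{\delta})$ to the already-transformed point $p = Ra + t$ and linearize at $\delta = 0$:

$$
\exp(\hat{\delta})\,p \approx (I + \hat{\delta})\,p = p + \delta \times p = p - [p]_\times\,\delta
\quad\Rightarrow\quad
\frac{\partial\,\bigl(b - \exp(\hat{\delta})\,p\bigr)}{\partial \delta}\bigg|_{\delta=0} = [p]_\times = \mathrm{skew}(Ra + t),
$$

which is why the quoted line multiplies `RCR_inv` by `skew(transed_mean_A)` rather than by `skew(Ra)`.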
Hi @narutojxl,
- In this work, we used 3D (XYZ) residuals that result in the same objective function as the scalar one.
- In the paper, $C^*$ are 3x3 covariance matrices, and thus there should be an inverse. In the code, we used expanded 4x4 matrices to take advantage of SSE optimization, and we filled the bottom-right corner with 1 before taking the inverse so that the result stays reasonable.
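A minimal sketch of that second point, assuming Eigen (the helper name `expanded_inverse` is my own illustration, not a fast_gicp function):

```cpp
#include <Eigen/Core>
#include <Eigen/Dense>

// Expand a 3x3 covariance into a 4x4 matrix so SSE-friendly 4x4 math can be used.
// Setting the bottom-right element to 1 keeps the 4x4 matrix invertible, and the
// top-left 3x3 block of the inverse equals the inverse of the original 3x3 covariance.
Eigen::Matrix4d expanded_inverse(const Eigen::Matrix3d& cov3) {
  Eigen::Matrix4d cov4 = Eigen::Matrix4d::Zero();
  cov4.topLeftCorner<3, 3>() = cov3;
  cov4(3, 3) = 1.0;
  return cov4.inverse();
}
```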
A question about the covariance: in the linearization process, why does directly using $M^{-1} \cdot d_i$ as the residual work? From my point of view, I think we should first apply an LDLT (Cholesky-type) decomposition to the $M^{-1}$ matrix and then build the update function.
Hi Dr. @koide3, I have a question about the objective function: why can the log term be ignored (as shown in the red box), given that it also includes the optimized variable $\mathbf{T}$? Could you please give me some advice if you have time? Thanks very much!
As explained at https://github.com/SMRT-AIST/fast_gicp/issues/20#issuecomment-664198521, we fix the fused covariance matrix at the linearization point. This approximation makes the log term constant and negligible during optimization.
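The way I read that answer (my own notation, not taken from the paper; $d$ is the point-to-point error and $C_A$, $C_B$ the point covariances): the per-correspondence negative log-likelihood is

$$
-\log p(d) = \text{const} + \tfrac{1}{2}\log\bigl|C_B + R C_A R^\top\bigr| + \tfrac{1}{2}\, d^\top \bigl(C_B + R C_A R^\top\bigr)^{-1} d,
$$

and once $C_B + R C_A R^\top$ is frozen at the linearization point, the log-determinant term no longer depends on $\mathbf{T}$, so only the quadratic term remains to be minimized.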
Got it (https://github.com/SMRT-AIST/fast_gicp/issues/20#issuecomment-1891509441), thanks for your reply!
Hi Dr. @koide3, I want to figure out the Jacobians of the residual in the code. At line 209 of fast_gicp_st_impl.hpp the residual is `RCR_inv * d`, which is 3-dimensional, not `d.transpose() * RCR_inv * d`, which is the scalar matching the paper's cost function. Why is the residual defined this way? Thanks for your help! Jiao