koide3 / fast_gicp

A collection of GICP-based fast point cloud registration algorithms

residual format #20

Closed · narutojxl closed this issue 2 years ago

narutojxl commented 4 years ago

Hi Dr. @koide3, I want to figure out the Jacobians of the residual in the code.

Thanks for your help! Jiao

koide3 commented 4 years ago

Hi @narutojxl ,

https://github.com/SMRT-AIST/fast_gicp/blob/c5fa2a103d6c345d4fbd24f31f7ec4a2f20115af/include/fast_gicp/gicp/impl/fast_gicp_st_impl.hpp#L206

narutojxl commented 4 years ago

Thanks for your help @koide3! [screenshot attached]

Could you please give some advice on how you calculate the Jacobian? Thanks very much for your help!

koide3 commented 4 years ago

Calculating the Jacobian of (B + RAR^T)^-1 is very complicated. I did implement it (you can find the code at the following links), but it was very slow and impractical.

In practice, we approximate RAR^T as a constant matrix during each optimization iteration. Then, dr/dR is simply given by (B + RAR^T)^-1 * dRa/dR. This approximation doesn't affect the accuracy while making the derivatives simple and fast to compute.

https://github.com/SMRT-AIST/fast_gicp/blob/87cd6288d14bd155e8b7a2144f68bb5246aecc52/include/fast_gicp/gicp/gicp_loss.hpp
https://github.com/SMRT-AIST/fast_gicp/blob/87cd6288d14bd155e8b7a2144f68bb5246aecc52/include/fast_gicp/gicp/gicp_derivatives.hpp
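
(For illustration only, not from the original thread: a minimal Eigen sketch of the per-correspondence linearization under the fixed-covariance approximation described above. The function and variable names here are made up and do not come from the repository.)

```cpp
#include <Eigen/Dense>

// Residual and Jacobian for one correspondence a <-> b with covariances cov_A, cov_B.
// The fused covariance B + R A R^T is evaluated once and treated as constant for this
// iteration, as described in the comment above.
void linearize_pair(const Eigen::Isometry3d& T,
                    const Eigen::Vector3d& a, const Eigen::Matrix3d& cov_A,
                    const Eigen::Vector3d& b, const Eigen::Matrix3d& cov_B,
                    Eigen::Vector3d& residual, Eigen::Matrix<double, 3, 6>& J) {
  const Eigen::Matrix3d R = T.linear();
  const Eigen::Vector3d Ta = T * a;  // transformed point p = R a + t

  // fused covariance, held fixed during this linearization
  const Eigen::Matrix3d RCR = cov_B + R * cov_A * R.transpose();
  const Eigen::Matrix3d RCR_inv = RCR.inverse();

  // residual premultiplied by the fixed inverse fused covariance, as discussed in this thread
  residual = RCR_inv * (b - Ta);

  // d(Ra + t)/d(rotation) is approximated by -skew(Ra + t) at the linearization point
  // (left perturbation, see the discussion below); d(Ra + t)/d(translation) = I.
  Eigen::Matrix3d skew;
  skew <<      0.0, -Ta.z(),  Ta.y(),
            Ta.z(),     0.0, -Ta.x(),
           -Ta.y(),  Ta.x(),     0.0;
  J.block<3, 3>(0, 0) = RCR_inv * skew;   // rotation block (sign flips because of b - Ta)
  J.block<3, 3>(0, 3) = -RCR_inv;         // translation block
}
```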

narutojxl commented 4 years ago

Thanks very much @koide3 :) BTW, shouldn't $dRa/dR$ be skew(Ra) according to the left-perturbation formula?
I see that in the code it is skew(Ra + t):
`Js[count].block<3, 3>(0, 0) = RCR_inv.block<3, 3>(0, 0) * skew(transed_mean_A.head<3>());`

koide3 commented 4 years ago

It's a trick to calculate the Jacobian of the expmap. While the Jacobian of the expmap around r=0 is simply given by the skew-symmetric function, the Jacobian at an arbitrary point is not easy to obtain. To avoid the complicated calculation, we evaluate the Jacobian at r=0 with the transformed point (p = Ra + t) instead of evaluating it at r=R with the original point a.
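
For reference, a short sketch of that trick under the left-perturbation convention (added for clarity, not part of the original answer). Writing $p = Ra + t$ for the transformed point,

$$
\exp(\hat{\delta\theta})\, p \approx (I + \hat{\delta\theta})\, p = p + \delta\theta \times p = p - \hat{p}\, \delta\theta,
\qquad
\left. \frac{\partial \exp(\hat{\delta\theta})\, p}{\partial \delta\theta} \right|_{\delta\theta = 0} = -\hat{p} = -\mathrm{skew}(Ra + t).
$$

If the residual is defined as $b - (Ra + t)$, the sign flips and the rotation block becomes $+\mathrm{skew}(Ra + t)$, which, after premultiplying by the fixed $(B + RAR^T)^{-1}$, matches the code line quoted above.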

narutojxl commented 4 years ago

I was referring to Section 4.3.4 (Perturbation Model) of this book: [screenshot attached]

plusk01 commented 2 years ago

@narutojxl, see Section 3.3.5 (the subsubsection right after the one you referenced) of the same book, or Eq. (94) of Eade.

[screenshot attached]

Gatsby23 commented 1 year ago

> Hi @narutojxl ,
>
> - In this work, we used 3D (XYZ) residuals that result in the same objective function as the scalar one.
> - In the paper, $C^*$ are 3x3 covariance matrices, and thus there should be an inverse. In the code, we used expanded 4x4 matrices to take advantage of SSE optimization, and we filled the bottom-right corner with 1 before taking the inverse to obtain a reasonable result.
>
> https://github.com/SMRT-AIST/fast_gicp/blob/c5fa2a103d6c345d4fbd24f31f7ec4a2f20115af/include/fast_gicp/gicp/impl/fast_gicp_st_impl.hpp#L206
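
(A minimal Eigen sketch of the 4x4 expansion described above, added for illustration; the function name is made up and not taken from the repository.)

```cpp
#include <Eigen/Dense>

// Store the 3x3 covariance in a 4x4 matrix (convenient for 4-wide SSE operations) and set
// the bottom-right element to 1 so the 4x4 inverse is well defined; because the matrix is
// block diagonal, the top-left 3x3 block of the result equals the inverse of the original
// 3x3 covariance.
Eigen::Matrix4d expanded_inverse(const Eigen::Matrix3d& cov) {
  Eigen::Matrix4d C = Eigen::Matrix4d::Zero();
  C.block<3, 3>(0, 0) = cov;
  C(3, 3) = 1.0;
  return C.inverse();
}
```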

A question about the covariance: in the linearization, why does directly using $M^{-1} d_i$ as the residual work? From my point of view, I think maybe we should take the LDLT decomposition of $M^{-1}$ and then build the update function from that?
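
(Note added for clarity, not part of the original comment: the LDLT/Cholesky alternative referred to here would be, with $M^{-1} = L L^\top$,)

$$
r_i = L^\top d_i, \qquad r_i^\top r_i = d_i^\top M^{-1} d_i .
$$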

YZH-bot commented 10 months ago

Hi Dr. @koide3, I have a question about the objective function: why can the log term shown in the red box be ignored, even though it also contains the optimized variable $\mathbf{T}$? Could you please give me some advice if you have time? Thanks very much! [screenshot attached]

koide3 commented 10 months ago

As explained at https://github.com/SMRT-AIST/fast_gicp/issues/20#issuecomment-664198521, we fix the fused covariance matrix at the linearization point. This approximation makes the log term constant and negligible during optimization.
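
For reference (added for clarity, not part of the original reply): up to an additive constant, the per-correspondence negative log-likelihood is

$$
\frac{1}{2}\, d_i^\top \left(C_i^B + \mathbf{T} C_i^A \mathbf{T}^\top\right)^{-1} d_i
\;+\; \frac{1}{2}\, \log\det\!\left(C_i^B + \mathbf{T} C_i^A \mathbf{T}^\top\right),
$$

and once the fused covariance $C_i^B + \mathbf{T} C_i^A \mathbf{T}^\top$ is evaluated at the linearization point and held fixed, the $\log\det$ term no longer depends on the update, so only the quadratic term needs to be minimized.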

YZH-bot commented 10 months ago

Got it (https://github.com/SMRT-AIST/fast_gicp/issues/20#issuecomment-1891509441), thanks for your reply!