NeRF-SLAM: Real-Time Dense Monocular SLAM with Neural Radiance Fields. https://arxiv.org/abs/2210.13641 + Sigma-Fusion: Probabilistic Volumetric Fusion for Dense Monocular SLAM https://arxiv.org/abs/2210.01276
Thank you for open-sourcing this great work.
I am reading the code and am puzzled by the GTSAM retraction here.
My (possibly wrong) derivation indicates that the delta chi should be negated:
```python
self.last_state = x0.retract(-gtsam_delta)
```
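For context, here is how I read the GTSAM retraction, as a right-multiplication by the exponential map. A minimal numeric check of that reading (my own snippet, assuming the gtsam Python wrapper; whether `Pose3.retract` is the full exponential map or a first-order approximation depends on the GTSAM build flags, so the agreement may only be approximate):

```python
import numpy as np
import gtsam

# An arbitrary starting pose and a small tangent-space delta ([omega; v], GTSAM ordering).
x0 = gtsam.Pose3(gtsam.Rot3.RzRyRx(0.1, -0.2, 0.3), gtsam.Point3(1.0, 2.0, 3.0))
delta = np.array([0.01, -0.02, 0.03, 0.04, -0.05, 0.06])

lhs = x0.retract(delta)                      # what the code in question calls
rhs = x0.compose(gtsam.Pose3.Expmap(delta))  # right-multiplication by exp(delta)

# Should be ~0 if retract(delta) == x0 * exp(delta).
print(np.abs(lhs.matrix() - rhs.matrix()).max())
```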
It follows from the observations below.
Here we are optimizing w_T_b, and its update is defined as w_T_b \leftarrow w_T_b * exp(epsilon).
In DROID-SLAM, the bundle adjustment optimizes c_T_w, and its update is defined as c_T_w \leftarrow exp(nu) * c_T_w.
Comparing NeRF-SLAM's droid_kernels.cu with DROID-SLAM's droid_kernels.cu, I see that they use the same observations (GRU flow) and the same Hessians and residuals, so the tangent-space delta being solved for should carry the same meaning in both.
So, following DROID-SLAM and denoting the delta chi by nu, and noting that w_T_b = (c_T_w)^{-1} (taking the body frame to be the camera frame), inverting the DROID-SLAM update gives w_T_b \leftarrow w_T_b * exp(-nu).
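Writing that inversion out explicitly (my own notation, with nu the DROID-SLAM tangent delta):

```math
{}^{w}T_{b}' = \left({}^{c}T_{w}'\right)^{-1} = \left(\exp(\nu)\,{}^{c}T_{w}\right)^{-1} = \left({}^{c}T_{w}\right)^{-1}\exp(-\nu) = {}^{w}T_{b}\,\exp(-\nu)
```

which is why I expected `x0.retract(-gtsam_delta)` rather than `x0.retract(gtsam_delta)`.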
Can you please clarify this?