Closed: HalfSummer11 closed this issue 4 years ago
@HalfSummer11 Hi, thank you for your interest! Indeed we are using the "pseudo-exponential map" form. The intuition is that with the standard exponential map, the final translation partly depends on the predicted rotation twist, which would make disentangling the translation and rotation output branches less meaningful (though we have not tried the standard form). Seen through the lens of the standard exponential map, this work can also be understood as encoding the translation part of the exponential mapping into the neural network itself: we ask the network to output the translation directly rather than its twist.
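If it helps to see the coupling concretely, here is a minimal numpy sketch (my own helper names, not the code from the paper) of the V(w) matrix that appears in the standard exponential map: with the same translation twist t, the final translation V(w) @ t changes as w changes, whereas the pseudo-exponential form returns t unchanged.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix [w]_x of a 3-vector, so hat(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def V(w):
    """V(w) from the standard se(3) exponential (assumes ||w|| > 0);
    the final translation there is V(w) @ t."""
    th = np.linalg.norm(w)
    K = hat(w)
    return (np.eye(3)
            + (1.0 - np.cos(th)) / th**2 * K
            + (th - np.sin(th)) / th**3 * (K @ K))

t = np.array([1.0, 0.0, 0.0])
print(V(np.array([0.0, 0.0, 0.5])) @ t)  # translation shifts as w changes ...
print(V(np.array([0.0, 0.0, 1.0])) @ t)  # ... so the two branches are coupled
# Under the pseudo-exponential form the final translation is just t,
# independent of the rotation branch.
```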
Thanks for your answer and the explanation of the intuition! It's very helpful :)
Hi, it's me again :) I'm really interested in your work and have recently been studying the se(3) representation used in the paper. I'm a bit confused because it seems to differ from the standard se(3) in its translation part.
In section III.A the exponential map from se(3) to SE(3) is given by

$$\exp\left(\begin{bmatrix} t \\ w \end{bmatrix}\right) = \begin{bmatrix} e^{[w]_\times} & t \\ 0 & 1 \end{bmatrix}$$

Here the translation part of the rigid transformation is taken directly from the 6D vector $[t, w]$ in se(3).
However, the exponential map I see elsewhere looks like this:

$$\exp\left(\begin{bmatrix} t \\ w \end{bmatrix}\right) = \begin{bmatrix} e^{[w]_\times} & Vt \\ 0 & 1 \end{bmatrix}, \qquad V = I + \frac{1-\cos\theta}{\theta^2}[w]_\times + \frac{\theta-\sin\theta}{\theta^3}[w]_\times^2, \quad \theta = \lVert w \rVert$$

Here the final translation is $Vt$. This is in accordance with Fig. 3 in the paper.
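To make sure I'm comparing the two forms correctly, here is a small numpy/scipy sketch of both maps as I understand them (my own helper names, not from the paper). `standard_exp` matches scipy's matrix exponential of the 4x4 twist, while `pseudo_exp` differs exactly in the translation column:

```python
import numpy as np
from scipy.linalg import expm

def hat(w):
    """Skew-symmetric matrix [w]_x of a 3-vector w."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def rodrigues(w):
    """R = exp([w]_x) via the Rodrigues formula."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def pseudo_exp(t, w):
    """The III.A form: the translation t is used as-is."""
    T = np.eye(4)
    T[:3, :3] = rodrigues(w)
    T[:3, 3] = t
    return T

def standard_exp(t, w):
    """Standard se(3) exponential: the translation is V(w) @ t."""
    th = np.linalg.norm(w)
    K = hat(w)
    if th < 1e-12:
        Vw = np.eye(3)
    else:
        Vw = (np.eye(3)
              + (1.0 - np.cos(th)) / th**2 * K
              + (th - np.sin(th)) / th**3 * (K @ K))
    T = np.eye(4)
    T[:3, :3] = rodrigues(w)
    T[:3, 3] = Vw @ t
    return T

t, w = np.array([0.1, -0.2, 0.3]), np.array([0.4, 0.5, -0.6])
xi = np.zeros((4, 4))
xi[:3, :3], xi[:3, 3] = hat(w), t
assert np.allclose(standard_exp(t, w), expm(xi))   # matches the true matrix exp
print(pseudo_exp(t, w)[:3, 3], standard_exp(t, w)[:3, 3])  # t vs. V(w) @ t
```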
I wonder if there is a typo in III.A, or whether the 6D transformation is actually parameterized by the so(3) representation w for rotation plus a plain translation t. That would also make sense, especially since the predictions of t and w are disentangled into two branches in the network. Thanks a lot!
Upd: After a bit more research I found that the exponential map in III.A is defined as the "pseudo-exponential map" in Blanco10, where it is said to yield Jacobians that are more efficient to evaluate. Is this what the paper intended to use?
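If my reading of Blanco10 is right (please correct me if not), the simplification comes from the translation block of the resulting pose. Writing $\mathrm{trans}(\cdot)$ for that block,

$$\frac{\partial\,\mathrm{trans}\bigl(\mathrm{pexp}(t, w)\bigr)}{\partial t} = I_3, \qquad \frac{\partial\,\mathrm{trans}\bigl(\mathrm{pexp}(t, w)\bigr)}{\partial w} = 0,$$

$$\frac{\partial\,\mathrm{trans}\bigl(\exp(t, w)\bigr)}{\partial t} = V(w), \qquad \frac{\partial\,\mathrm{trans}\bigl(\exp(t, w)\bigr)}{\partial w} = \frac{\partial\,\bigl(V(w)\,t\bigr)}{\partial w},$$

so the pseudo-exponential avoids differentiating the trigonometric coefficients of $V(w)$ altogether.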