xinzhe11 opened this issue 11 months ago
Hi, we follow the same logic used in Dynamic 3D Gaussians: https://github.com/JonathonLuiten/Dynamic3DGaussians/blob/f3bb1959fbd8880773c816eedfd9963cac43b06e/train.py#L135
I'll get back to you with more details.
Hi, I had further correspondence with the author of Dynamic 3D Gaussians. We will update our forward propagation logic.
As you mentioned, the correct way would be to multiply quaternions instead of interpolating.
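For concreteness, here is a minimal sketch of what multiplication-based propagation could look like, assuming unit quaternions in (w, x, y, z) order. The `quat_mult`, `quat_conjugate`, and `propagate_rotation` names are illustrative, not identifiers from the SplaTAM codebase:

```python
import torch

def quat_mult(q1, q2):
    """Hamilton product of quaternions stored as (..., 4) in (w, x, y, z) order."""
    w1, x1, y1, z1 = q1.unbind(-1)
    w2, x2, y2, z2 = q2.unbind(-1)
    w = w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2
    x = w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2
    y = w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2
    z = w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2
    return torch.stack([w, x, y, z], dim=-1)

def quat_conjugate(q):
    """Conjugate of a quaternion; equals the inverse for unit quaternions."""
    w, x, y, z = q.unbind(-1)
    return torch.stack([w, -x, -y, -z], dim=-1)

def propagate_rotation(prev_rot1, prev_rot2):
    """Constant-velocity prediction: rel = q_{t-1} * q_{t-2}^{-1}, then q_t = rel * q_{t-1}."""
    rel_rot = quat_mult(prev_rot1, quat_conjugate(prev_rot2))
    return quat_mult(rel_rot, prev_rot1)
```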
But my experiments show that interpolating gives the right result. I don't understand why.
Hey @Buffyqsf, thanks for your experiments! I guess our current linear interpolation works most of the time. However, it can be unstable, resulting in non-unit quaternions. We haven't encountered that in our code because we normalize the quaternions back to unit length (hence, rotation propagation works well for Dynamic 3D Gaussians and SplaTAM).
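To illustrate the non-unit issue with a toy example (not from the codebase): linearly averaging two unit quaternions shrinks the norm, and normalizing recovers a valid rotation:

```python
import torch
import torch.nn.functional as F

# Toy example: average the identity with a 90-degree rotation about z
# (quaternions in (w, x, y, z) order).
q1 = torch.tensor([1.0, 0.0, 0.0, 0.0])
q2 = torch.tensor([0.7071, 0.0, 0.0, 0.7071])
avg = 0.5 * (q1 + q2)
print(avg.norm())                # ~0.924: no longer a unit quaternion
print(F.normalize(avg, dim=0))   # renormalized: a valid rotation again
```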
Ideally, we should use Slerp for interpolation. I presume that something like this should work:
```python
import torch

def slerp(q1, q2, t):
    """Spherical linear interpolation between two quaternions.

    Args:
        q1 (torch.Tensor): First quaternion (normalized).
        q2 (torch.Tensor): Second quaternion (normalized).
        t (float): Interpolation factor, between 0 and 1.

    Returns:
        torch.Tensor: Interpolated quaternion (normalized).
    """
    assert q1.shape == q2.shape
    assert 0 <= t <= 1
    dot = torch.dot(q1, q2)
    # If the dot product is negative, slerp won't take the shorter path,
    # so flip one quaternion (q and -q encode the same rotation).
    # Note that q1 and q2 are already normalized.
    if dot < 0.0:
        q1 = -q1
        dot = -dot
    # Clamp guards against acos of values slightly above 1 from floating-point error.
    omega = torch.acos(torch.clamp(dot, -1.0, 1.0))
    sin_omega = torch.sin(omega)
    # Nearly parallel quaternions: fall back to avoid division by zero.
    if sin_omega < 1e-7:
        return q1
    # Interpolation weights
    weight1 = torch.sin((1.0 - t) * omega) / sin_omega
    weight2 = torch.sin(t * omega) / sin_omega
    # Interpolated quaternion
    return q1 * weight1 + q2 * weight2
```
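A quick usage check with illustrative values, interpolating halfway between the identity and a 90-degree rotation about x:

```python
q_a = torch.tensor([1.0, 0.0, 0.0, 0.0])        # identity
q_b = torch.tensor([0.7071, 0.7071, 0.0, 0.0])  # 90 deg about x
q_mid = slerp(q_a, q_b, 0.5)                    # 45 deg about x
print(q_mid)  # ~[0.9239, 0.3827, 0.0, 0.0]
```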
The other option is to use rotation matrices, similar to NICE-SLAM & Point-SLAM: https://github.com/eriksandstroem/Point-SLAM/blob/6943bef84ce65883673ab801a4613aee9907ecc6/src/Tracker.py#L285
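For completeness, a sketch of the matrix-based constant-velocity propagation; this mirrors the idea behind the Point-SLAM code linked above, but the function and argument names are illustrative, not copied from it:

```python
import torch

def propagate_pose(prev_c2w1, prev_c2w2):
    """Constant-velocity pose propagation with 4x4 camera-to-world matrices.

    prev_c2w1 is the pose at t-1, prev_c2w2 the pose at t-2
    (illustrative names, not Point-SLAM identifiers).
    """
    # Relative motion from t-2 to t-1 ...
    delta = prev_c2w1 @ torch.linalg.inv(prev_c2w2)
    # ... applied once more to predict the pose at t.
    return delta @ prev_c2w1
```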
Hi, great work! A question about the camera pose initialization: adding or subtracting quaternions does not give the relative rotation between poses. Wouldn't multiplying quaternions, or using rotation matrices, be the proper way?

In the initialize_camera_pose function, line 420 of splatam.py:

```python
prev_rot1 = F.normalize(params['cam_unnorm_rots'][..., curr_time_idx-1].detach())
prev_rot2 = F.normalize(params['cam_unnorm_rots'][..., curr_time_idx-2].detach())
new_rot = F.normalize(prev_rot1 + (prev_rot1 - prev_rot2))
params['cam_unnorm_rots'][..., curr_time_idx] = new_rot.detach()
```
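For comparison, here is how the same constant-velocity prediction could look with quaternion multiplication, reusing the illustrative quat_mult/quat_conjugate helpers sketched earlier. This is a sketch assuming (w, x, y, z) ordering, not the shipped SplaTAM code:

```python
prev_rot1 = F.normalize(params['cam_unnorm_rots'][..., curr_time_idx-1].detach())
prev_rot2 = F.normalize(params['cam_unnorm_rots'][..., curr_time_idx-2].detach())
# Relative rotation from t-2 to t-1, applied again to predict t.
rel_rot = quat_mult(prev_rot1, quat_conjugate(prev_rot2))
new_rot = F.normalize(quat_mult(rel_rot, prev_rot1))
params['cam_unnorm_rots'][..., curr_time_idx] = new_rot.detach()
```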