isaac-sim / IsaacLab

Unified framework for robot learning built on NVIDIA Isaac Sim
https://isaac-sim.github.io/IsaacLab

[Question] I want to know the difference between quat_apply and quat_rotate? Which one should I choose? #1434

Open crowznl opened 1 week ago

crowznl commented 1 week ago

In ~/IsaacLab/source/extensions/omni.isaac.lab/omni/isaac/lab/utils/math.py there are two functions for rotating a vector by a quaternion:

@torch.jit.script
def quat_apply(quat: torch.Tensor, vec: torch.Tensor) -> torch.Tensor:
    """Apply a quaternion rotation to a vector.

    Args:
        quat: The quaternion in (w, x, y, z). Shape is (..., 4).
        vec: The vector in (x, y, z). Shape is (..., 3).

    Returns:
        The rotated vector in (x, y, z). Shape is (..., 3).
    """
    # store shape
    shape = vec.shape
    # reshape to (N, 3) for multiplication
    quat = quat.reshape(-1, 4)
    vec = vec.reshape(-1, 3)
    # extract components from quaternions
    xyz = quat[:, 1:]
    t = xyz.cross(vec, dim=-1) * 2
    return (vec + quat[:, 0:1] * t + xyz.cross(t, dim=-1)).view(shape)

@torch.jit.script
def quat_rotate(q: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Rotate a vector by a quaternion along the last dimension of q and v.

    Args:
        q: The quaternion in (w, x, y, z). Shape is (..., 4).
        v: The vector in (x, y, z). Shape is (..., 3).

    Returns:
        The rotated vector in (x, y, z). Shape is (..., 3).
    """
    q_w = q[..., 0]
    q_vec = q[..., 1:]
    a = v * (2.0 * q_w**2 - 1.0).unsqueeze(-1)
    b = torch.cross(q_vec, v, dim=-1) * q_w.unsqueeze(-1) * 2.0
    # for two-dimensional tensors, bmm is faster than einsum
    if q_vec.dim() == 2:
        c = q_vec * torch.bmm(q_vec.view(q.shape[0], 1, 3), v.view(q.shape[0], 3, 1)).squeeze(-1) * 2.0
    else:
        c = q_vec * torch.einsum("...i,...i->...", q_vec, v).unsqueeze(-1) * 2.0
    return a + b + c
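Both closed forms can be checked against a direct Hamilton-product evaluation of $qvq^{-1}$. The following is a minimal pure-Python sketch (no PyTorch, single scalar quaternions rather than batched tensors) that implements the quat_apply-style and quat_rotate-style formulas and compares them to the full product; the function names here are illustrative, not IsaacLab APIs:

```python
import math

def quat_mul(p, q):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    )

def cross(a, b):
    """3D cross product a x b."""
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def rotate_reference(q, v):
    """Full q * [0, v] * q^{-1}; q is assumed unit, so q^{-1} is the conjugate."""
    conj = (q[0], -q[1], -q[2], -q[3])
    return quat_mul(quat_mul(q, (0.0,) + v), conj)[1:]

def rotate_apply_form(q, v):
    """v + w*t + u x t with t = 2 (u x v): the quat_apply formula."""
    w, u = q[0], q[1:]
    t = tuple(2.0 * c for c in cross(u, v))
    uxt = cross(u, t)
    return tuple(v[i] + w * t[i] + uxt[i] for i in range(3))

def rotate_rotate_form(q, v):
    """(2w^2 - 1) v + 2w (u x v) + 2 (u . v) u: the quat_rotate formula."""
    w, u = q[0], q[1:]
    uv = sum(u[i] * v[i] for i in range(3))
    uxv = cross(u, v)
    return tuple((2.0*w*w - 1.0)*v[i] + 2.0*w*uxv[i] + 2.0*uv*u[i] for i in range(3))

# A 90-degree rotation about z applied to the x axis should give the y axis.
half = math.sqrt(0.5)
q = (half, 0.0, 0.0, half)
v = (1.0, 0.0, 0.0)
for f in (rotate_reference, rotate_apply_form, rotate_rotate_form):
    r = f(q, v)
    assert all(abs(r[i] - (0.0, 1.0, 0.0)[i]) < 1e-12 for i in range(3))
```

For unit quaternions the three paths agree to floating-point round-off, so any accuracy difference between the two IsaacLab functions is expected to be negligible compared to the speed difference.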

I know they are both simplified forms of $qvq^{-1}$ (see the Wikipedia derivation of the quaternion rotation formula), with $q = [w, \mathbf{u}]$:

$qvq^{-1} = [0, 2(\mathbf{u} \cdot \mathbf{v})\mathbf{u} + (2w^2 - 1)\mathbf{v} + 2w(\mathbf{u} \times \mathbf{v})] = [0, \mathbf{v} + 2w(\mathbf{u} \times \mathbf{v}) + \mathbf{u} \times (2\mathbf{u} \times \mathbf{v})]$

where the first expansion is the form computed by quat_rotate and the second is the form computed by quat_apply.

So which one is more efficient or more precise, and which one is recommended?

RandomOakForest commented 1 week ago

This depends on the optimizations PyTorch currently implements for functions such as cross and einsum, as well as on their backends on your system. Although we prefer to use quat_rotate, we recommend that you benchmark both functions with vectors and quaternions of the shapes relevant to your task, and choose accordingly.
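To follow that advice concretely, here is a minimal, hypothetical benchmarking sketch. For portability it times scalar Python stand-ins for the two closed forms with timeit; in a real IsaacLab setting you would instead time the actual quat_apply and quat_rotate on batched tensors of the shapes your task uses, on your target device (synchronizing the device before reading GPU timings):

```python
import math
import timeit

# Hypothetical scalar stand-ins for the two formulas; not IsaacLab code.
def apply_form(q, v):
    """v + w*t + u x t with t = 2 (u x v), as in quat_apply."""
    w, u = q[0], q[1:]
    t = (2*(u[1]*v[2] - u[2]*v[1]), 2*(u[2]*v[0] - u[0]*v[2]), 2*(u[0]*v[1] - u[1]*v[0]))
    return (v[0] + w*t[0] + u[1]*t[2] - u[2]*t[1],
            v[1] + w*t[1] + u[2]*t[0] - u[0]*t[2],
            v[2] + w*t[2] + u[0]*t[1] - u[1]*t[0])

def rotate_form(q, v):
    """(2w^2 - 1) v + 2w (u x v) + 2 (u . v) u, as in quat_rotate."""
    w, u = q[0], q[1:]
    d = u[0]*v[0] + u[1]*v[1] + u[2]*v[2]
    s = 2*w*w - 1
    return (s*v[0] + 2*w*(u[1]*v[2] - u[2]*v[1]) + 2*d*u[0],
            s*v[1] + 2*w*(u[2]*v[0] - u[0]*v[2]) + 2*d*u[1],
            s*v[2] + 2*w*(u[0]*v[1] - u[1]*v[0]) + 2*d*u[2])

# Time both forms on a fixed unit quaternion and vector.
q = (math.sqrt(0.5), 0.0, 0.0, math.sqrt(0.5))
v = (1.0, 0.0, 0.0)
t_apply = timeit.timeit(lambda: apply_form(q, v), number=100_000)
t_rotate = timeit.timeit(lambda: rotate_form(q, v), number=100_000)
print(f"apply-form:  {t_apply:.3f} s")
print(f"rotate-form: {t_rotate:.3f} s")
```

Note that scalar timings like these do not transfer to batched GPU tensors, where kernel launch overhead and the bmm/einsum dispatch in quat_rotate dominate; that is exactly why benchmarking on your own shapes and device is the reliable way to choose.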