facebookresearch / theseus

A library for differentiable nonlinear optimization

Support differentiable forward kinematics with `n-dof` joints #473

Open fantaosha opened 1 year ago

fantaosha commented 1 year ago

🚀 Feature


juulie commented 10 months ago

Hi, is there any update on this? I've been trying to use this to solve an IK problem for a human kinematic chain with lots of 2-dof and 3-dof joints (around 125 in total). I've got it to work using autodiff, but it's painfully slow, uses a lot of memory, and the gradient doesn't flow back to the outer optimization loop. Since you've also made the analytical backward propagation, I was wondering if you're looking to add that for more than 1-dof joints as well?

mhmukadam commented 10 months ago

@juulie thanks for your interest! We currently haven't scoped work on these features. However, @fantaosha may be able to give some pointers if you want to give it a try yourself and can share the relevant parts of your current implementation -- we welcome community contributions!

juulie commented 10 months ago

Awesome! To elaborate on my problem: my neural model works on animated 3D markers that act as constraints on a human kinematic model. I want to use the Levenberg-Marquardt optimizer to minimize the following error function:

```python
def targeted_pose_error(optim_vars, aux_vars):
    (dof_input,) = optim_vars
    (pre_rotation, pre_translation, marker_config, marker_weight, *target_markers) = aux_vars
    # Returns the global transform of each joint.
    global_transforms = fk_dof(dof_input.tensor, pre_rotation.tensor, pre_translation.tensor)

    errors = []
    # markers_driving_joints is captured from the enclosing scope:
    # for each marker, the list of joints it is attached to.
    for m_i, marker_driving_joints in enumerate(markers_driving_joints):
        for j_i in marker_driving_joints:
            marker_constraint_pos = global_transforms[:, j_i].transform_from(marker_config.tensor[:, m_i, j_i])
            errors.append(marker_constraint_pos.between(target_markers[m_i]).tensor * marker_weight.tensor[:, m_i, j_i])
    return torch.stack(errors, dim=1).flatten(1)
```
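
For context, this is roughly how I wire that cost into a `TheseusLayer` (just a sketch: the sizes, `err_dim`, and the zero tensors are placeholders for my real data, and I'm assuming the `"implicit"` backward mode is the right way to get gradients out to the aux variables):

```python
import torch
import theseus as th

B, n_dof, n_markers, n_joints = 4, 123, 40, 24  # placeholder sizes
err_dim = 3 * n_markers                         # placeholder stacked-error length

optim_vars = [th.Vector(tensor=torch.zeros(B, n_dof), name="dof_input")]
aux_vars = [
    th.Variable(torch.zeros(B, 3, 3), name="pre_rotation"),
    th.Variable(torch.zeros(B, 3), name="pre_translation"),
    th.Variable(torch.zeros(B, n_markers, n_joints, 3), name="marker_config"),
    th.Variable(torch.ones(B, n_markers, n_joints, 1), name="marker_weight"),
] + [th.Point3(tensor=torch.zeros(B, 3), name=f"target_marker_{i}") for i in range(n_markers)]

objective = th.Objective()
objective.add(
    th.AutoDiffCostFunction(
        optim_vars, targeted_pose_error, err_dim, aux_vars=aux_vars, name="marker_cost"
    )
)
layer = th.TheseusLayer(th.LevenbergMarquardt(objective, max_iterations=50))

# Feeding the target markers in here each step is what should let gradients
# flow from the IK solution back to the neural model that produced them.
inputs = {"dof_input": torch.zeros(B, n_dof)}  # plus the target_marker_* tensors
solution, info = layer.forward(inputs, optimizer_kwargs={"backward_mode": "implicit"})
```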

fk_dof takes the dof_input tensor and uses a mapping to map each single dof_input entry to its correct local transformation, then composes them. It walks down the kinematic chain: for each joint it takes its parent's global transform, applies the pre_translation and pre_rotation, and then the local SE3 transform, in the right order.
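
In case it helps to make that concrete, here is a minimal pure-PyTorch sketch of the composition using homogeneous 4x4 matrices (the `parents` array, per-dof `axes`, and the Rodrigues rotation are stand-ins for my real mapping, and it assumes each dof is a single hinge, so an n-dof joint is just n chained hinges):

```python
import torch

def axis_angle_to_matrix(axis, angle):
    # Rodrigues' formula: (B, 3) unit axes, (B,) angles -> (B, 3, 3) rotations.
    B = angle.shape[0]
    K = torch.zeros(B, 3, 3, dtype=angle.dtype, device=angle.device)
    K[:, 0, 1], K[:, 0, 2] = -axis[:, 2], axis[:, 1]
    K[:, 1, 0], K[:, 1, 2] = axis[:, 2], -axis[:, 0]
    K[:, 2, 0], K[:, 2, 1] = -axis[:, 1], axis[:, 0]
    I = torch.eye(3, dtype=angle.dtype, device=angle.device).expand(B, 3, 3)
    s, c = torch.sin(angle)[:, None, None], torch.cos(angle)[:, None, None]
    return I + s * K + (1.0 - c) * (K @ K)

def fk_dof_sketch(dof, parents, axes, pre_rot, pre_trans):
    # dof: (B, n); parents: list with parents[j] < j (-1 for the root);
    # axes: (n, 3) unit joint axes; pre_rot: (n, 3, 3); pre_trans: (n, 3).
    B, n = dof.shape
    transforms = []
    for j in range(n):
        T = torch.eye(4, dtype=dof.dtype, device=dof.device).repeat(B, 1, 1)
        # Fixed offset (pre_rotation/pre_translation) followed by the hinge rotation.
        T[:, :3, :3] = pre_rot[j] @ axis_angle_to_matrix(axes[j].expand(B, 3), dof[:, j])
        T[:, :3, 3] = pre_trans[j]
        if parents[j] >= 0:
            T = transforms[parents[j]] @ T
        transforms.append(T)
    return torch.stack(transforms, dim=1)  # (B, n, 4, 4) global joint transforms
```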

I feel like a custom backward function could significantly improve the performance, especially since my human model has 123 dof. It's just that my knowledge of Lie algebra is lacking.
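
That said, from what I've read, for hinge joints the Jacobian of a point on the chain has a simple closed form: d p / d theta_j = omega_j x (p - o_j), where omega_j is joint j's world-frame axis and o_j its world-frame origin, with the column zeroed for joints that are not ancestors of the point. A sketch of that (names and shapes are mine, not Theseus API):

```python
import torch

def marker_jacobian(p_world, joint_origins, joint_axes, ancestor_mask):
    # Geometric Jacobian for revolute joints.
    # p_world: (B, 3) marker position; joint_origins, joint_axes: (B, n_dof, 3);
    # ancestor_mask: (n_dof,) bool, True where dof j lies on the marker's chain.
    r = p_world[:, None, :] - joint_origins       # (B, n_dof, 3)
    cols = torch.cross(joint_axes, r, dim=-1)     # d p / d theta_j = w_j x (p - o_j)
    cols = cols * ancestor_mask[None, :, None].to(cols.dtype)
    return cols.transpose(1, 2)                   # (B, 3, n_dof)
```

Each n-dof joint would then just contribute one such column per dof, which is why I suspect an analytical backward could beat autograd here.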

Some more notes:

- Even though the inner optimization loop works (dof_input converges nicely), the gradients do not flow out to the list of target_markers. Do you have any pointers on how to debug that?
- Right now I'm only solving for a batch of single poses; hopefully there is some way to run this on a tensor that has both a batch and a time-series dimension (see the sketch below).
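
For the batch-plus-time point, my current workaround idea is just to fold time into the batch dimension, since Theseus treats dim 0 as the batch (toy shapes below):

```python
import torch

B, T, M = 8, 30, 40                        # toy sizes: batch, frames, markers
dof = torch.zeros(B, T, 123)
target_markers = torch.zeros(B, T, M, 3)

# Theseus treats dim 0 as the batch, so fold the time axis into it:
dof_flat = dof.reshape(B * T, 123)
markers_flat = target_markers.reshape(B * T, M, 3)
# ...solve on the flattened batch, then unfold the solution:
# dof_solution = solution["dof_input"].reshape(B, T, 123)
```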