RussTedrake / underactuated

The course text for MIT 6.832 (and 6.832x on edX)

Questions about jacobian in littledog.ipynb #525

Closed · matheecs closed this issue 1 year ago

matheecs commented 1 year ago

Hi @RussTedrake, littledog.ipynb is a great demo of trajectory optimization, but I have a question about the calculation of the Jacobian Jq_WF:

# Hdot = sum_i cross(p_FootiW - com, contact_force_i)
def angular_momentum_constraint(vars, context_index):
    q, com, Hdot, contact_force = np.split(vars, [nq, nq + 3, nq + 6])
    ......
    if isinstance(vars[0], AutoDiffXd):
        ......
        ad_p_WF = InitializeAutoDiff(
            p_WF, np.hstack((Jq_WF, np.zeros((3, 18))))
        )
        torque = torque + np.cross(
            ad_p_WF.reshape(3) - com, contact_force[:, i]
        )
    else:
        ......

Does it need to be changed to Jq_WF @ ExtractGradient(q)?

# Hdot = sum_i cross(p_FootiW - com, contact_force_i)
def angular_momentum_constraint(vars, context_index):
    q, com, Hdot, contact_force = np.split(vars, [nq, nq + 3, nq + 6])
    ......
    if isinstance(vars[0], AutoDiffXd):
        ......
        ad_p_WF = InitializeAutoDiff(
            p_WF, np.hstack((Jq_WF @ ExtractGradient(q), np.zeros((3, 18))))
        )
        torque = torque + np.cross(
            ad_p_WF.reshape(3) - com, contact_force[:, i]
        )
    else:
        ......
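To make the chain-rule concern concrete, here is a minimal standalone sketch (the toy position function p(q), its Jacobian, and the matrix values are invented for illustration; none of this is code from the notebook). It shows that when q itself carries non-identity gradients, propagating Jq alone drops the dq/dx factor, while Jq @ ExtractGradient(q) matches the gradient you get from evaluating p directly on the AutoDiffXd inputs:

import numpy as np
from pydrake.autodiffutils import (
    AutoDiffXd, ExtractGradient, InitializeAutoDiff)

# Toy position function p(q) = [q0*q1, q0 + q1], whose Jacobian is
# dp/dq = [[q1, q0], [1, 1]]. Purely illustrative.
def p_and_jacobian(q):
    p = np.array([q[0] * q[1], q[0] + q[1]])
    Jq = np.array([[q[1], q[0]], [1.0, 1.0]])
    return p, Jq

# Pretend the solver handed us q with non-identity gradients, i.e. dq/dx != I.
q_ad = np.array([
    AutoDiffXd(1.0, np.array([2.0, 0.0])),
    AutoDiffXd(2.0, np.array([0.0, 3.0])),
])

q_val = np.array([qi.value() for qi in q_ad])
p, Jq = p_and_jacobian(q_val)

# Correct propagation uses the chain rule: dp/dx = (dp/dq) @ (dq/dx).
ad_p = InitializeAutoDiff(p, Jq @ ExtractGradient(q_ad))

# Evaluating p directly on the AutoDiffXd inputs agrees with the chain rule...
p_direct = np.array([q_ad[0] * q_ad[1], q_ad[0] + q_ad[1]])
assert np.allclose(ExtractGradient(ad_p), ExtractGradient(p_direct))

# ...whereas passing Jq alone (as in the original snippet) silently drops dq/dx.
wrong = InitializeAutoDiff(p, Jq)
assert not np.allclose(ExtractGradient(wrong), ExtractGradient(p_direct))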

Thanks

RussTedrake commented 1 year ago

You're absolutely right that I should be using the gradients passed in. Thanks!

It happens that our current SNOPT solver will only ever call this method with the gradients in vars trivially initialized (so ExtractGradient(q) returns the identity). I'll PR the fixed version based on your suggestion.
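Concretely, a minimal sketch of what "trivially initialized" means here (not the notebook code):

import numpy as np
from pydrake.autodiffutils import ExtractGradient, InitializeAutoDiff

# When the solver seeds the decision variables directly, d(vars)/d(vars)
# is the identity, so the chain-rule factor is a no-op for this call pattern.
vars_ad = InitializeAutoDiff(np.array([0.1, 0.2, 0.3]))
print(ExtractGradient(vars_ad))  # 3x3 identity

# Hence Jq_WF @ ExtractGradient(q) == Jq_WF in this case, which is why the
# original code happened to produce correct gradients despite omitting the term.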