NVlabs / handover-sim2real

Official code for CVPR'23 paper: Learning Human-to-Robot Handovers from Point Clouds
https://handover-sim2real.github.io

How does the MANO-based hand model pick up the object? #7

Closed FredTrumpSenior closed 5 months ago

FredTrumpSenior commented 5 months ago

Hello, thanks for your great work!

The handover demonstration is amazing, and I'm curious how the objects are picked up from the table by the MANO-based hand model, since a universal grasping method for a human hand seems even harder to learn than one for a parallel gripper.

Could you explain? Thanks.

christsa commented 5 months ago

Hi,

Thanks for your interest in our work.

The generation of the human hand-object motion is explained in detail in the Handover-Sim paper (Section III). In short, rather than relying on the human hand to physically move the object, we augment the object models by adding additional actuators to their base so that their 6D pose can be directly actuated by controllers in simulation. The hand-object motions are then "replayed" using data from DexYCB (with collisions between hand and object disabled).
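For intuition, here is a minimal sketch of that replay mechanism, assuming a PyBullet-style simulation. The paper describes adding extra actuated joints to the object's base; the fixed-constraint trick below is an equivalent way to drive a body's 6D pose directly in PyBullet. The asset names (`duck_vhacd.urdf`, `cube_small.urdf`) and the `trajectory` placeholder are illustrative stand-ins, not code from this repo.

```python
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())

obj_id = p.loadURDF("duck_vhacd.urdf")   # stand-in for a YCB object model
hand_id = p.loadURDF("cube_small.urdf")  # stand-in for the MANO hand model

# Disable hand-object collisions so the replayed hand motion cannot
# accidentally push the object around.
for link in range(-1, p.getNumJoints(hand_id)):
    p.setCollisionFilterPair(hand_id, obj_id, link, -1, enableCollision=0)

# "Actuate" the object's base: pin it to the world with a fixed constraint,
# then move the constraint target to each recorded 6D pose. This mimics
# actuating the object base directly instead of grasping it physically.
cid = p.createConstraint(obj_id, -1, -1, -1, p.JOINT_FIXED,
                         [0, 0, 0], [0, 0, 0], [0, 0, 0])

# Placeholder for a recorded DexYCB object-pose trajectory:
# a list of (position_xyz, orientation_quaternion_xyzw) tuples.
trajectory = [([0.0, 0.0, 0.05 + 0.001 * t], [0, 0, 0, 1]) for t in range(100)]

for pos, orn in trajectory:
    p.changeConstraint(cid, jointChildPivot=pos,
                       jointChildFrameOrientation=orn, maxForce=500)
    p.stepSimulation()
```

In this setup the object tracks the recorded poses regardless of contact forces, which is why the hand-object collision filter can be turned off without the object falling through the hand.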

Learning a universal grasping method is indeed challenging. In our latest work, we scale handovers to many more objects than are available in the DexYCB dataset by learning a general RL-based dexterous grasping policy.

FredTrumpSenior commented 5 months ago

Thanks for your answer! It helps a lot!