NVlabs / handover-sim

A simulation environment and benchmark for human-to-robot object handovers
https://handover-sim.github.io
BSD 3-Clause "New" or "Revised" License

About Hand and Object Segmentation #17

Closed · aikuide closed this 3 months ago

aikuide commented 3 months ago

Hi! The paper mentions that the perception module takes RGB-D images and segmentation images and uses them to obtain segmented point clouds of the object and hand. I would like to know how the segmentation images are obtained.

ychao-nvidia commented 3 months ago

If you are referring to the simulation experiments, the segmentation images are rendered directly by the simulation engine (Bullet).
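For reference, here is a minimal sketch (not the handover-sim code itself) of how Bullet renders a segmentation image alongside RGB and depth via PyBullet; the objects and camera setup are illustrative:

```python
import numpy as np
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)  # headless; uses the built-in software renderer
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.loadURDF("plane.urdf")
duck_id = p.loadURDF("duck_vhacd.urdf", basePosition=[0, 0, 0.1])

width, height = 640, 480
view = p.computeViewMatrix(cameraEyePosition=[0.5, 0.5, 0.5],
                           cameraTargetPosition=[0, 0, 0],
                           cameraUpVector=[0, 0, 1])
proj = p.computeProjectionMatrixFOV(fov=60, aspect=width / height,
                                    nearVal=0.01, farVal=10.0)

# One render call returns RGB, depth, and a per-pixel segmentation mask.
# Each segmentation pixel holds the unique ID of the body it belongs to
# (-1 for background), so the mask is exact by construction and needs
# no learned model.
_, _, rgb, depth, seg = p.getCameraImage(width, height, view, proj)
seg = np.reshape(seg, (height, width))

print("pixels on the duck:", np.count_nonzero(seg == duck_id))
```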

aikuide commented 3 months ago

Sorry, I may not be familiar with what you mean by rendering. Does it require training a neural network on a dataset of hands and objects to achieve segmentation?

ychao-nvidia commented 3 months ago

IIRC Bullet uses an OpenGL-based renderer. This is classical rendering and does not require a neural network.

aikuide commented 3 months ago

[attached screenshot of segmentation images from the paper]

Thank you very much for your patient answer. So how can I obtain the segmented images shown in the paper?

ychao-nvidia commented 3 months ago

The segmented image is obtained in this line. You should be able to check it if you run the demo.
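For what it's worth, here is a hedged sketch of the downstream step the paper describes: turning the rendered depth and segmentation images into a segmented point cloud. The function name and camera conventions are illustrative, not the repo's actual code; `depth` and `seg` are assumed to be (H, W) arrays as reshaped in the snippet above.

```python
import numpy as np

def segmented_point_cloud(depth, seg, body_id, fov_deg, near, far):
    """Deproject the depth pixels belonging to `body_id` into 3D points
    in the camera frame. `depth` is Bullet's [0, 1] depth buffer and
    `seg` the per-pixel body-ID mask, both (H, W) arrays. Sign
    conventions are simplified for illustration."""
    h, w = depth.shape
    # Bullet's depth buffer is nonlinear; recover metric depth first.
    z = far * near / (far - (far - near) * depth)
    # Focal length in pixels for a vertical field of view of fov_deg.
    f = 0.5 * h / np.tan(0.5 * np.deg2rad(fov_deg))
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - 0.5 * w) * z / f
    y = (v - 0.5 * h) * z / f
    points = np.stack([x, y, z], axis=-1)
    return points[seg == body_id]  # keep only this body's pixels

# e.g.: cloud = segmented_point_cloud(depth, seg, duck_id, 60, 0.01, 10.0)
```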

aikuide commented 3 months ago

So do you mean that such segmentation images can be obtained directly through rendering and then passed directly to the perception module? In my understanding, real-time segmentation of hands and objects is also a difficult task.

ychao-nvidia commented 3 months ago

> So do you mean that such segmentation images can be obtained directly through rendering and then passed directly to the perception module?

Yes.

> Real-time segmentation of hands and objects is also a difficult task.

It does not matter in simulation. In the real world, yes.

aikuide commented 3 months ago

Thank you for your patient answer.