facebookresearch / 3D-Vision-and-Touch

When asked to understand the shape of a new object, the most instinctual approach is to pick it up and inspect it with your hand and eyes in tandem. Here, touch provides high-fidelity localized information while vision provides complementary global context. However, in 3D shape reconstruction, the complementary fusion of visual and haptic modalities remains largely unexplored. In this paper, we study this problem and present an effective chart-based approach to fusing vision and touch, which leverages advances in graph convolutional networks. To do so, we introduce a dataset of simulated touch and vision signals from the interaction between a robotic hand and a large array of 3D objects. Our results show that (1) leveraging both vision and touch signals consistently improves single-modality baselines, especially when the object is occluded by the hand touching it; (2) our approach outperforms alternative modality fusion methods and strongly benefits from the proposed chart-based structure; (3) reconstruction quality improves with the number of grasps provided; and (4) the touch information not only enhances the reconstruction at the touch site but also extrapolates to its local neighborhood.
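Since the abstract leans on graph convolutional networks, here is a minimal sketch of the generic graph-convolution operation that chart-based mesh models build on. This is not the paper's implementation; the layer name, feature shapes, and the toy self-loop adjacency are illustrative assumptions.

```python
# Minimal sketch of a generic GCN layer: aggregate neighbor features over
# a normalized adjacency, then apply a learned linear map and nonlinearity
# (the Kipf & Welling propagation rule H' = act(A_norm @ H @ W)).
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_vertices, in_dim) per-vertex features, e.g. projected
        #      vision features concatenated with local touch readings
        # adj: (num_vertices, num_vertices) row-normalized adjacency
        #      (with self-loops) of the chart / mesh graph
        return torch.relu(self.linear(adj @ x))

# Toy usage: 4 vertices on a tiny chart, 8-dim features mapped to 16-dim.
if __name__ == "__main__":
    x = torch.randn(4, 8)
    adj = torch.eye(4)          # self-loops only, purely for illustration
    layer = GraphConv(8, 16)
    print(layer(x, adj).shape)  # torch.Size([4, 16])
```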

how to use the dataset? #3

Closed: sinjutin closed this issue 2 years ago

sinjutin commented 2 years ago

Excuse me, I'd like to know how to use the ABC dataset for training. What are exp_type and exp_id? Are they the names of the directory?

sinjutin commented 2 years ago

> Excuse me, I'd like to know how to use the ABC dataset for training. What are exp_type and exp_id? Are they the names of the directory?

I have figured it out; sorry to bother you.
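For readers who hit the same question: exp_type and exp_id are command-line arguments of the training scripts. The thread does not spell out their semantics, so the following is a hypothetical sketch of how such experiment-naming flags are commonly wired up with argparse and used to pick an output directory; the actual flag behavior in 3D-Vision-and-Touch may differ, so check the repo's runner scripts.

```python
# Hypothetical illustration only: typical argparse wiring for
# experiment-naming flags like --exp_type / --exp_id. All defaults,
# help strings, and the output layout below are assumptions.
import argparse
from pathlib import Path

parser = argparse.ArgumentParser()
parser.add_argument("--exp_type", type=str, default="test",
                    help="name grouping related runs, e.g. 'vision' or 'touch'")
parser.add_argument("--exp_id", type=str, default="run_0",
                    help="unique name for this particular run")
args = parser.parse_args()

# Checkpoints and logs then typically land under
# experiments/<exp_type>/<exp_id>/ .
out_dir = Path("experiments") / args.exp_type / args.exp_id
out_dir.mkdir(parents=True, exist_ok=True)
print(f"writing results to {out_dir}")
```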

EdwardSmith1884 commented 2 years ago

Everything is clear then? Do you need any other help?

sinjutin commented 2 years ago

> Everything is clear then? Do you need any other help?

No, thank you.