facebookresearch / 3D-Vision-and-Touch

When asked to understand the shape of a new object, the most instinctive approach is to pick it up and inspect it with your hand and eyes in tandem. Here, touch provides high-fidelity localized information while vision provides complementary global context. However, in 3D shape reconstruction, the complementary fusion of visual and haptic modalities remains largely unexplored. In this paper, we study this problem and present an effective chart-based approach to fusing vision and touch, which leverages advances in graph convolutional networks. To do so, we introduce a dataset of simulated touch and vision signals from the interaction between a robotic hand and a large array of 3D objects. Our results show that (1) leveraging both vision and touch signals consistently improves single-modality baselines, especially when the object is occluded by the hand touching it; (2) our approach outperforms alternative modality fusion methods and strongly benefits from the proposed chart-based structure; (3) reconstruction quality improves with the number of grasps provided; and (4) the touch information not only enhances the reconstruction at the touch site but also extrapolates to its local neighborhood.

Sheet #5

Closed jeffsonyu closed 2 years ago

jeffsonyu commented 2 years ago

Hello! I noticed that the npys in the scene_info folders of your dataset have a "cam_pos" key, and that you also provide a sheets folder which includes the finger positions. Could you explain the difference between these two? Appreciate it!

jeffsonyu commented 2 years ago

I think I have figured out my earlier question. Do the sheet npys contain the local points touched by the fingertips? And what is the difference between the sheets and the 'points' in scene_info? Do they represent different point clouds, one from the object surfaces and one from the sensor pixels?

EdwardSmith1884 commented 2 years ago

Hi, there are basically three stages of training:

- In the first, I take the touch reading from a finger and the pose of the hand, and learn what the local surface at the touch site looks like. This prediction is output as a point cloud.
- In the second, I convert this point cloud into a mesh "sheet" (just a small mesh surface) by optimizing it to match the predicted point cloud. This provides a mesh "sheet" representing the surface at every touch site in the dataset.
- In the third, I use these sheets to represent the known touch information and use them to predict the full object.
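As a rough illustration of that second step (not the actual code in this repo), one could fit a small set of sheet vertices to a predicted point cloud by minimizing a Chamfer-style loss in plain PyTorch; the sheet size, initialization, and optimizer settings below are placeholders, and the real pipeline also deals with mesh connectivity:

```python
import torch

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between two point sets a (N, 3) and b (M, 3)."""
    d = torch.cdist(a, b)                                   # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

# Placeholder data: a small "sheet" of vertices and a stage-1 predicted point cloud.
sheet_verts = torch.rand(100, 3, requires_grad=True)        # sheet vertices to optimize
pred_points = torch.rand(500, 3)                            # point cloud predicted from the touch reading

opt = torch.optim.Adam([sheet_verts], lr=1e-2)
for step in range(500):
    opt.zero_grad()
    loss = chamfer_distance(sheet_verts, pred_points)       # pull the sheet onto the predicted points
    loss.backward()
    opt.step()
```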

The 'points' in scene_info are the ground-truth local surfaces at each touch site, used to train the first stage. Because I figured people wouldn't want to train all three stages, I provide the sheets directly in the sheet npys, which lets you train or test only the final stage if you want.
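If it helps, a minimal way to poke at the files looks something like this (the file names are made up, and treating the scene_info npys as pickled dicts is an assumption; only the 'cam_pos' and 'points' keys come from this thread):

```python
import numpy as np

# Hypothetical file names; point these at wherever the dataset is unpacked.
scene = np.load("scene_info/0.npy", allow_pickle=True).item()    # assuming a pickled dict
print(scene.keys())                   # expect keys like 'cam_pos' and 'points'
gt_points = scene["points"]           # ground-truth local surface at the touch sites (stage-1 targets)
cam_pos = scene["cam_pos"]            # camera position for the scene

sheet = np.load("sheets/0.npy", allow_pickle=True)               # pre-computed mesh "sheet" for a touch
print(type(sheet), getattr(sheet, "shape", None))
```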

Does that make sense? Let me know if you have any other questions!

jeffsonyu commented 2 years ago

OK! Now I totally understand! Really appreciate it!