jnhwkim / Pensees

A collection of fragments for reading research papers.
MIT License

🦾 NeRF-Supervision: Learning Dense Object Descriptors from Neural Radiance Fields #13


jnhwkim commented 1 year ago

NeRF-Supervision: Learning Dense Object Descriptors from Neural Radiance Fields

Yen-Chen et al., ICRA 2022 Code | Project | Video | Paper

Thin, reflective objects such as forks and whisks are common in our daily lives, but they are particularly challenging for robot perception because it is hard to reconstruct them using commodity RGB-D cameras or multi-view stereo techniques. While traditional pipelines struggle with objects like these, Neural Radiance Fields (NeRFs) have recently been shown to be remarkably effective for performing view synthesis on objects with thin structures or reflective materials. In this paper, we explore the use of NeRF as a new source of supervision for robust robot vision systems. In particular, we demonstrate that a NeRF representation of a scene can be used to train dense object descriptors. We use an optimized NeRF to extract dense correspondences between multiple views of an object, and then use these correspondences as training data for learning a view-invariant representation of the object. NeRF's usage of a density field allows us to reformulate the correspondence problem with a novel distribution-of-depths formulation, as opposed to the conventional approach of using a depth map. Dense correspondence models supervised with our method significantly outperform off-the-shelf learned descriptors by 106% (PCK@3px metric, more than doubling performance) and outperform our baseline supervised with multi-view stereo by 29%. Furthermore, we demonstrate that the learned dense descriptors enable robots to perform accurate 6-degree-of-freedom (6-DoF) pick-and-place of thin and reflective objects.
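As a rough illustration of the distribution-of-depths idea from the abstract, here is a minimal NumPy sketch: it converts NeRF densities along a ray into the standard termination-probability weights, then samples a depth from that distribution and reprojects the pixel into a second view to form a candidate correspondence. The function names, the OpenCV-style pinhole intrinsics `K`, and the camera-to-world poses `T_src`/`T_tgt` are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def depth_distribution(sigmas, deltas):
    """Turn per-sample densities along a ray into a probability
    distribution over depth (the standard NeRF termination weights).
    sigmas: (N,) volume densities at samples along the ray
    deltas: (N,) distances between consecutive samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                           # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))    # transmittance up to each sample
    weights = trans * alphas                                          # P(ray terminates at sample i)
    return weights / (weights.sum() + 1e-10)

def sample_correspondence(pixel, depths, weights, K, T_src, T_tgt, rng):
    """Sample a depth from the distribution, unproject the source pixel,
    and reproject it into the target view as a candidate correspondence.
    K: 3x3 intrinsics; T_src, T_tgt: 4x4 camera-to-world poses (assumed convention).
    """
    z = rng.choice(depths, p=weights)                  # depth ~ distribution-of-depths
    uv1 = np.array([pixel[0], pixel[1], 1.0])
    p_cam = z * np.linalg.inv(K) @ uv1                 # 3D point in source camera frame
    p_world = (T_src @ np.append(p_cam, 1.0))[:3]      # to world frame
    p_tgt = (np.linalg.inv(T_tgt) @ np.append(p_world, 1.0))[:3]   # to target camera frame
    uv_tgt = (K @ (p_tgt / p_tgt[2]))[:2]              # project to target pixel
    return uv_tgt, z
```

The key contrast with a depth-map pipeline is that no single depth is committed to: many correspondence pairs can be drawn per pixel, weighted by the density-derived distribution, which is presumably what makes the supervision robust on thin structures where a point-estimate depth is unreliable.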

🔑 Key idea: Use an optimized NeRF as a source of supervision. Extract dense cross-view correspondences from NeRF's density field via a distribution-of-depths formulation (rather than a single depth map), and use them as training data for view-invariant dense object descriptors.

💪 Strength: Works on thin, reflective objects that defeat commodity RGB-D and multi-view stereo; outperforms off-the-shelf learned descriptors by 106% (PCK@3px) and an MVS-supervised baseline by 29%; demonstrated on real 6-DoF pick-and-place.

😵 Weakness:

🤔 Confidence: