MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures
Chen et al., CVPR 2023
Neural Radiance Fields (NeRFs) have demonstrated amazing ability to synthesize images of 3D scenes from novel views. However, they rely upon specialized volumetric rendering algorithms based on ray marching that are mismatched to the capabilities of widely deployed graphics hardware. This paper introduces a new NeRF representation based on textured polygons that can synthesize novel images efficiently with standard rendering pipelines. The NeRF is represented as a set of polygons with textures representing binary opacities and feature vectors. Traditional rendering of the polygons with a z-buffer yields an image with features at every pixel, which are interpreted by a small, view-dependent MLP running in a fragment shader to produce a final pixel color. This approach enables NeRFs to be rendered with the traditional polygon rasterization pipeline, which provides massive pixel-level parallelism, achieving interactive frame rates on a wide range of compute platforms, including mobile phones.
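The deferred-shading idea in the abstract can be sketched as follows: rasterization produces a feature image (one feature vector per pixel from the textured polygons), and a tiny view-dependent MLP converts each pixel's feature, together with the view direction, into an RGB color. This is a minimal NumPy sketch; the shapes, layer sizes, and weight names are illustrative assumptions, not the paper's exact architecture (the real MLP runs in a GLSL fragment shader).

```python
import numpy as np

def tiny_view_mlp(features, view_dirs, W1, b1, W2, b2):
    """Deferred shading step: per-pixel features (+ view direction) -> RGB.

    features:  (H, W, K) feature image from rasterizing the textured mesh.
    view_dirs: (H, W, 3) unit view directions.
    Weights are hypothetical; the point is that the MLP is small enough
    to evaluate per pixel inside a fragment shader.
    """
    x = np.concatenate([features, view_dirs], axis=-1)   # (H, W, K+3)
    h = np.maximum(x @ W1 + b1, 0.0)                     # ReLU hidden layer
    rgb = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))           # sigmoid -> [0, 1]
    return rgb                                           # (H, W, 3)

H, W_, K, HID = 4, 5, 8, 16
rng = np.random.default_rng(0)
rgb = tiny_view_mlp(
    rng.standard_normal((H, W_, K)),
    rng.standard_normal((H, W_, 3)),
    rng.standard_normal((K + 3, HID)), np.zeros(HID),
    rng.standard_normal((HID, 3)), np.zeros(3),
)
print(rgb.shape)  # (4, 5, 3)
```

Because every pixel is shaded independently, this step maps directly onto the massive pixel-level parallelism of the rasterization pipeline.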
🔑 Key idea:
"The NeRF is represented as a set of polygons with textures representing binary opacities and feature vectors."
Let's touch the elephant:
In training stage 1, the free variables are the vertex locations $\mathcal{V} \in [-0.5, +0.5]^{P \times P \times P \times 3}$: vertices start on a regular grid and are allowed to move only locally, while the topology is fixed as quadrangles (each rendered as two triangles). This is a far more constrained optimization than in vanilla NeRFs.
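A minimal sketch of that stage-1 parameterization: start from a regular $P \times P \times P$ lattice spanning $[-0.5, +0.5]^3$ and let each vertex move only a small amount. The half-cell bound via `tanh` squashing is an illustrative assumption, not the paper's exact constraint; `offsets` plays the role of the unconstrained learnable parameters.

```python
import numpy as np

def grid_mesh_vertices(offsets, P=4):
    """Stage-1 free variables: vertex locations only, topology fixed.

    offsets: (P, P, P, 3) unconstrained learnable parameters.
    Each vertex is displaced from its lattice position by at most half a
    grid cell (hypothetical bound), so the mesh stays locally valid.
    """
    axis = np.linspace(-0.5, 0.5, P)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    cell = 1.0 / (P - 1)                      # lattice spacing
    return grid + 0.5 * cell * np.tanh(offsets)

V = grid_mesh_vertices(np.zeros((4, 4, 4, 3)))
print(V.min(), V.max())  # -0.5 0.5 (zero offsets leave the lattice unchanged)
```

Keeping the topology fixed means only the `offsets` tensor needs gradients, which is what makes the geometry exportable as an ordinary triangle mesh afterward.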
💪 Strength:
Takes full advantage of the mature, hardware-accelerated polygon modeling and rendering pipeline (rasterization, z-buffering, fragment shaders).
😵 Weakness:
The work is hard to follow without prior knowledge of both NeRFs and the classical rasterization pipeline.