nerfstudio-project / nerfstudio

A collaboration friendly studio for NeRFs
https://docs.nerf.studio
Apache License 2.0

What is the algorithm for exporting point clouds? #2694

Open BellalaLv opened 10 months ago

BellalaLv commented 10 months ago

A NeRF is a neural network representation of the scene, so how do you get an explicit point cloud out of it?

maturk commented 10 months ago

@BellalaLv, by rendering a depth map for each input frame and back-projecting the points from those depth maps into world coordinates.
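The back-projection step can be sketched as follows. This is a minimal standalone example, not nerfstudio's actual export code; it assumes z-depth along the camera ray, pinhole intrinsics K, and an OpenCV-style camera-to-world pose (nerfstudio internally uses a different camera convention).

```python
import numpy as np

def backproject_depth(depth, K, c2w):
    """Back-project a depth map into world-space points.

    depth : (H, W) per-pixel z-depth (assumption: metric z-depth, not ray length)
    K     : (3, 3) pinhole camera intrinsics
    c2w   : (4, 4) camera-to-world transform
    Returns an (H*W, 3) array of world-space points.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Pixel coordinates -> rays in camera space (z = 1 plane).
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)
    rays_cam = pix @ np.linalg.inv(K).T
    # Scale each ray by its depth to get camera-space points.
    pts_cam = rays_cam * depth.reshape(-1, 1)
    # Transform to world coordinates with the camera pose.
    pts_hom = np.concatenate([pts_cam, np.ones((pts_cam.shape[0], 1))], axis=1)
    return (pts_hom @ c2w.T)[:, :3]
```

Running this once per training view and concatenating (optionally filtering by accumulation/opacity) yields the exported point cloud.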

BellalaLv commented 10 months ago

> @BellalaLv, by rendering a depth map for each input frame and back-projecting the points from those depth maps into world coordinates.

So you do depth estimation for each input frame? What does this have to do with NeRF? Is there a paper that specifically implements this?

maturk commented 10 months ago

@BellalaLv In a NeRF, you can render a depth for each frame in essentially the same way you render an RGB color. The depth comes from the expected ray termination according to the volume rendering equation. For example, if you train with ns-train nerfacto, you can view both the rendered RGB and depth channels. Depth essentially comes for free, so you don't have to do much extra work to get an estimated depth per frame. There are a few papers that deal with depth and NeRFs; you can search around or check the nerfstudio source code.
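Concretely, expected depth reuses the same alpha-compositing weights as the color render: replace the per-sample color c_i with the sample distance t_i. A minimal sketch for a single ray (a standalone illustration, not nerfstudio's renderer):

```python
import numpy as np

def expected_depth(densities, t_vals):
    """Expected ray-termination depth from volume-rendering weights.

    densities : (N,) density sigma_i at each sample along one ray
    t_vals    : (N,) sorted distances of the samples along the ray
    """
    # Spacing between samples; the last interval is treated as unbounded.
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)
    # Per-sample opacity: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance T_i = prod_{j<i} (1 - alpha_j)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans
    # E[t] = sum_i w_i * t_i  (same weights would composite RGB: sum_i w_i * c_i)
    return np.sum(weights * t_vals)
```

A ray whose density is concentrated at one sample returns a depth at that sample, which is why the depth channel "comes for free" alongside color.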

BellalaLv commented 10 months ago

> @BellalaLv In a NeRF, you can render a depth for each frame in essentially the same way you render an RGB color. The depth comes from the expected ray termination according to the volume rendering equation. For example, if you train with ns-train nerfacto, you can view both the rendered RGB and depth channels. Depth essentially comes for free, so you don't have to do much extra work to get an estimated depth per frame. There are a few papers that deal with depth and NeRFs; you can search around or check the nerfstudio source code.

In your framework, do different models produce the same depth map?

maturk commented 10 months ago

@BellalaLv, I don't understand your question. Each NeRF is trained on a specific dataset, so the color and depth renders will be specific to that data. There are differences between NeRF models, though, like nerfacto vs. Instant-NGP vs. Zip-NeRF vs. Gaussian Splatting, so if you change the model, the resulting color and depth renders will vary.