BellalaLv opened 10 months ago
@BellalaLv, by rendering depth maps for each input frame and backprojecting points from those depth maps into world coordinates.
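To make the backprojection step concrete, here is a minimal sketch (not nerfstudio's actual export code; the function name, pinhole intrinsics `fx, fy, cx, cy`, and the OpenGL camera convention are assumptions for illustration):

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy, c2w):
    """Backproject a rendered depth map into world-space 3D points.

    depth : (H, W) z-depth per pixel (distance along the camera's forward axis)
    fx, fy, cx, cy : pinhole intrinsics
    c2w : (4, 4) camera-to-world transform for this frame
    Returns an (H*W, 3) array of world-space points.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W, dtype=np.float64),
                       np.arange(H, dtype=np.float64))

    # Per-pixel ray directions in camera coordinates
    # (OpenGL convention assumed: +x right, +y up, camera looks down -z).
    dirs = np.stack([(u - cx) / fx,
                     -(v - cy) / fy,
                     -np.ones_like(u)], axis=-1)

    # If the rendered depth is Euclidean distance along the ray rather than
    # z-depth, normalize `dirs` to unit length before scaling.
    pts_cam = dirs * depth[..., None]

    # Rotate and translate into world coordinates.
    pts_world = pts_cam @ c2w[:3, :3].T + c2w[:3, 3]
    return pts_world.reshape(-1, 3)


# Accumulate points from every training view to form the point cloud
# (depth_maps and poses are assumed to come from your render loop):
# all_points = np.concatenate(
#     [backproject_depth(d, fx, fy, cx, cy, pose)
#      for d, pose in zip(depth_maps, poses)], axis=0)
```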
Do depth estimation for each input frame? And what does this have to do with Nerf? Is there a paper that specifically implements it?
@BellalaLv In a NeRF, for each frame you can render a depth map in essentially the same way as you render an RGB color. The depth comes from the expected ray termination distance according to the volume rendering equation. For example, if you train with the ns-train nerfacto model, you can look at the rendered rgb and depth channels in the viewer. Depth comes essentially for free, so you don't have to do much extra work to get an estimated depth per frame. There are a few papers that deal with depth and NeRFs; you can google around or check the nerfstudio source code.
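As a rough sketch of what "expected ray termination" means: the same compositing weights used to blend colors along a ray can be used to take an expectation over sample distances. This is illustrative pseudo-implementation, not nerfstudio's renderer; the function and argument names are assumptions.

```python
import torch

def expected_depth(densities, ts):
    """Expected ray-termination distance, using the same compositing
    weights that volume rendering uses for RGB.

    densities : (num_rays, num_samples) predicted density sigma at each sample
    ts        : (num_rays, num_samples) distance of each sample along the ray
    Returns (num_rays,) expected depth per ray.
    """
    # Spacing between consecutive samples; pad the final interval.
    deltas = ts[:, 1:] - ts[:, :-1]
    deltas = torch.cat([deltas, torch.full_like(deltas[:, :1], 1e10)], dim=-1)

    # Per-sample opacity and accumulated transmittance along the ray.
    alphas = 1.0 - torch.exp(-densities * deltas)
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)

    # Compositing weights w_i = T_i * alpha_i (same as for color).
    weights = alphas * trans

    # Expected termination distance: depth = sum_i w_i * t_i.
    return (weights * ts).sum(dim=-1)
```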
In your framework, do you get the same depth map for different models?
@BellalaLv, I don't understand your question. Each NeRF is trained on a specific dataset, so the color and depth renders will be specific to that data. There are differences between NeRF models, though (like nerfacto vs. instant-ngp vs. zipnerf vs. gaussian splatting), so if you change the model, the resulting color and depth renders will vary.
A NeRF is a neural network representation of the scene, so how did you get an explicit point cloud out of it?