nerfstudio-project / nerfstudio

A collaboration friendly studio for NeRFs
https://docs.nerf.studio
Apache License 2.0

Wrong Result in Depth Map Extraction #3082

Open ferphy opened 5 months ago

ferphy commented 5 months ago

I've been trying to extract the depth map from a NeRF, specifically from the dataset downloaded by running `ns-download-data nerfstudio --capture-name=poster` and training it with the Nerfacto model. After reading several issues, I've tried two different methods:

The first method involved adding, within render.py (https://github.com/nerfstudio-project/nerfstudio/blob/c599d0f3a1580d4276f4f064fb9cb926eb3b1a71/nerfstudio/scripts/render.py#L156C1-L198C51), starting from line 157 after the `is_depth` check, a call to a function that stores the `output_image` into a NumPy array. I took this from the following issue: https://github.com/nerfstudio-project/nerfstudio/issues/2147. This method produced the following image, which, as you can see, does not provide enough information to extract the depth value for each pixel.

[image: depth_map_00011]
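For reference, a minimal sketch of that first idea, i.e. dumping the raw floating-point depth to a `.npy` file instead of (or alongside) the colormapped PNG. The function name and output directory are my own, not from render.py; the point is that saving the raw array sidesteps the 8-bit colormap quantization that makes the rendered PNGs useless for metric depth:

```python
import numpy as np
from pathlib import Path

def save_raw_depth(depth: np.ndarray, out_dir: str, frame_idx: int) -> Path:
    """Save an (H, W) float depth map losslessly as .npy.

    `depth` is assumed to be the per-pixel depth array (e.g. the tensor
    behind `output_image` moved to CPU and squeezed), not the colormapped
    visualization.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    path = out / f"depth_{frame_idx:05d}.npy"
    np.save(path, depth.astype(np.float32))
    return path
```

Loading the file back with `np.load` then gives you the exact per-pixel values for downstream use.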

The second method involved modifying render.py again, this time at line 145, i.e. just before `render_image = []`, adding the following:

[code screenshot: Code_3AiTxqiAKa]

This second method produced different output, but it still does not resemble what I need:

[image: depth_map_00009]

I took this from the following issue: https://github.com/nerfstudio-project/nerfstudio/issues/1388

I'm not sure exactly what is causing the incorrect extraction of depth-map information from my NeRF. I'd be grateful for any suggestions to solve this issue. Thanks.

kerrj commented 5 months ago

Does the depth look reasonable in the output render of the viewer? Also, if you render a point cloud of the scene, does that look reasonable? Point cloud rendering uses depth under the hood to deproject points.

It looks like what might be happening is that the floor/walls have fuzzy geometry, which can happen in NeRF when there are highly reflective, texture-less surfaces (e.g. the floor).
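A quick sketch of the deprojection kerrj mentions, assuming a simple pinhole camera with depth measured along +z. This is not nerfstudio's actual point-cloud export (which also handles camera conventions and distortion); it is just a way to sanity-check whether a depth map is geometrically plausible:

```python
import numpy as np

def deproject_depth(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project an (H, W) depth map to camera-space 3D points.

    Pinhole model: x = (u - cx) * d / fx, y = (v - cy) * d / fy, z = d.
    Returns an (H, W, 3) array of points in the camera frame.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)
```

If the resulting points form a sensible surface, the depth itself is fine and the problem is in how it is being saved or visualized.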

ferphy commented 5 months ago

Sorry for replying so late. I've been exploring various approaches to achieve a satisfactory outcome. I believe the issue was related to depth accumulation. After removing that parameter, I obtained the following:

[image: depth_map_00056]

It's more accurate, but it has really low definition. This could be due to the artifacts surrounding the NeRF, which are restricting my depth range. Are there any solutions to minimize the influence of artifacts on the depth-map range?

Lizhinwafu commented 2 days ago

When using Nerfstudio for data preprocessing, it generates a transforms.json file. I have also obtained the RGB and depth images for each view, which I used to generate a point cloud per view. How can I use the transforms.json file to register the point clouds from multiple views into a single unified point cloud?
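Not an official answer, but a minimal sketch of one approach, assuming each per-view cloud is expressed in its camera's frame and that transforms.json stores a 4x4 `transform_matrix` (camera-to-world) per frame, which is the nerfstudio format. Note that nerfstudio uses an OpenGL-style camera convention (+x right, +y up, -z forward), so depending on how you deprojected your depth images you may need to flip the y/z axes of your points first:

```python
import json
import numpy as np

def merge_pointclouds(transforms_path: str, clouds: dict) -> np.ndarray:
    """Register per-view camera-space point clouds into one world cloud.

    `clouds` maps each frame's `file_path` (as it appears in
    transforms.json) to an (N, 3) array of points in that camera's
    frame. Each frame's 4x4 camera-to-world matrix moves its cloud
    into the shared world frame; the results are concatenated.
    """
    with open(transforms_path) as f:
        meta = json.load(f)
    merged = []
    for frame in meta["frames"]:
        pts = clouds.get(frame["file_path"])
        if pts is None:
            continue
        c2w = np.asarray(frame["transform_matrix"])  # 4x4 camera-to-world
        homo = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
        merged.append((homo @ c2w.T)[:, :3])
    return np.concatenate(merged, axis=0)
```

The registered cloud can then be saved with a library such as Open3D for viewing.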