graphdeco-inria / gaussian-splatting

Original reference implementation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering"
https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/

Can 3DGS render depth maps? #983

Open Lizhinwafu opened 1 month ago

Lizhinwafu commented 1 month ago

Can 3DGS render depth maps?

jaco001 commented 1 month ago

This is 'almost' a point cloud, so you can render a depth map (e.g. https://www.techrxiv.org/users/687263/articles/680135-three-to-two-a-novel-approach-for-converting-3d-point-clouds-into-2d-depth-maps).

MultiTrickFox commented 1 month ago

Yes, it can: depth is computed in this repo's own render function.

Lizhinwafu commented 1 month ago

Thanks.

Lizhinwafu commented 1 month ago

I have another question: are the RGB and depth maps rendered from the same view aligned with each other? And will the 3D points generated from the RGB and depth maps rendered at an arbitrary view have corresponding points in the 3DGS model?

MultiTrickFox commented 1 month ago

Yes, the renderer takes in a view and outputs both the produced image and the depth from that view. It can be any view; in the case of the trainer it is a training view, but it can be anything. Look here: https://github.com/graphdeco-inria/gaussian-splatting/blob/c643c864416c5258c76cd9f0eed55203834cbf83/train.py, lines 94 and 106.

Lizhinwafu commented 1 month ago

Thank you for your answer. Since the depth map and RGB map can be rendered, why can't I get a point cloud with uniform density?

AsherJingkongChen commented 3 weeks ago

> Since the depth map and RGB map can be rendered, why can't I get a point cloud with uniform density?

Because "Gaussian" Splatting.

jlartois commented 3 weeks ago

For anyone else who wants to render depth maps during training, you need to switch to the dev branch. You will then get access to `invDepth = render_pkg["depth"]`, for example in `train.py`. This comes from `submodules/diff-gaussian-rasterization/diff_gaussian_rasterization/__init__.py` (note that this is the `dr_aa` branch of that submodule).

However, how do we go from invDepth to the actual depth? Just invert it? From here, I derive that indeed the actual depth is just 1/invDepth, and this is the "Z-depth". However, these depth maps seem to be far from the ground truth. I noticed this when I used these depth maps to warp one view to a neighboring view, and the image loss was significant.
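Assuming the rendered map is indeed inverse Z-depth, the conversion is a per-pixel reciprocal; the only wrinkle is guarding pixels with no splat coverage, where inverse depth is zero. A sketch (the zero-means-background convention here is an assumption):

```python
import numpy as np

# Toy inverse-depth map; 0.0 is assumed to mean background / no coverage.
inv_depth = np.array([[0.5, 0.25],
                      [2.0, 0.0]])

eps = 1e-8
# depth = 1 / invDepth where defined; keep background pixels at 0.
depth = np.where(inv_depth > eps, 1.0 / np.clip(inv_depth, eps, None), 0.0)
```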

Here is an example depth map generated by GS for truck, view 000108.jpg, trained for 7k iterations:

[depth map image: truck_000108]

I think it might have to do with these huge splats that influence the depth incorrectly.
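One way to see why huge splats can corrupt depth: splatting-style renderers alpha-composite depth along each ray, D = Σᵢ Tᵢ αᵢ dᵢ with transmittance Tᵢ = Πⱼ₍ⱼ₌₁..ᵢ₋₁₎ (1 − αⱼ), so a large semi-transparent splat floating in front of the true surface pulls the composited depth toward the camera. A toy sketch (all numbers made up for illustration):

```python
def composite_depth(alphas, depths):
    """Alpha-composite per-splat depths along one ray, front to back."""
    transmittance = 1.0
    out = 0.0
    for a, d in zip(alphas, depths):
        out += transmittance * a * d
        transmittance *= (1.0 - a)
    return out

# Nearly opaque surface at depth 5:
clean = composite_depth([0.99], [5.0])
# Same surface with a semi-transparent floater at depth 1 in front of it:
with_floater = composite_depth([0.3, 0.99], [1.0, 5.0])
# The floater drags the composited depth toward the camera.
```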

I would appreciate any insight into this.

jlartois commented 2 weeks ago

I tried some things to get rid of the splats that were negatively impacting the depth map. I pruned away splats based on a combination of:

The resulting depth map (shown below) is much more accurate, i.e., when I use it to warp the view to a neighboring view, the image loss is very low.

[depth map image: truck_000108]
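Since the exact pruning criteria aren't listed above, here is a hedged sketch of two common heuristics that could serve this purpose, low opacity and oversized world-space scale; the helper name and both thresholds are assumptions, not the author's actual method:

```python
import numpy as np

def prune_mask(opacity, scales, min_opacity=0.1, max_scale=1.0):
    """Return a boolean mask of splats to KEEP.

    Hypothetical heuristic: drop splats that are nearly transparent or whose
    largest axis exceeds max_scale (world units). Thresholds are assumptions.
    """
    too_transparent = opacity < min_opacity
    too_large = scales.max(axis=1) > max_scale
    return ~(too_transparent | too_large)

opacity = np.array([0.9, 0.05, 0.8])
scales = np.array([[0.1, 0.1, 0.1],
                   [0.2, 0.2, 0.2],
                   [5.0, 0.3, 0.3]])  # one hugely elongated splat
keep = prune_mask(opacity, scales)
```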

Side note: I think this means that the current implementation of depth normalization is flawed (unless you can reliably prune the incorrect splats).