Can 3DGS render depth maps?
Lizhinwafu opened this issue 1 month ago
This is 'almost' a point cloud, so you can render a depth map (e.g. https://www.techrxiv.org/users/687263/articles/680135-three-to-two-a-novel-approach-for-converting-3d-point-clouds-into-2d-depth-maps).
Yes, it can do depth calculations in this repo's own render function.
Thanks.
I have another question: are the RGB and depth maps rendered from the same view aligned with each other? Will the 3D points generated from the RGB and depth maps rendered at any view have corresponding points in the 3DGS model?
Yes, the renderer takes in a view and outputs the produced image as well as the depth from that view.
It can be any view; in the case of the trainer it's a training view, but it can be anything.
Look here: https://github.com/graphdeco-inria/gaussian-splatting/blob/c643c864416c5258c76cd9f0eed55203834cbf83/train.py, lines 94 and 106.
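For reference, a minimal sketch of that call (following the usage in train.py; the exact return keys depend on the branch and rasterizer submodule you have installed, so treat the depth key as an assumption unless you are on the dev branch):

```python
import torch
from gaussian_renderer import render  # this repo's render function

# viewpoint_cam can be any Camera object, not just a training view;
# gaussians, pipe and the background color are set up exactly as in train.py.
bg = torch.tensor([0.0, 0.0, 0.0], device="cuda")
render_pkg = render(viewpoint_cam, gaussians, pipe, bg)

image = render_pkg["render"]         # (3, H, W) RGB rendered from that view
inv_depth = render_pkg.get("depth")  # per-pixel inverse depth (dev branch only)
```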
Thank you for your answer. Since the depth map and RGB map can be rendered, why can't I get a point cloud with uniform density?
Because "Gaussian" Splatting
For anyone else that wants to render depth maps during training, you need to switch to the dev branch. You will indeed get access to invDepth = render_pkg["depth"], for example in train.py. This comes from submodules/diff-gaussian-rasterization/diff_gaussian_rasterization/__init__.py (note that this is the dr_aa branch of that submodule).
However, how do we go from invDepth to the actual depth? Just invert it? From here, I derive that indeed the actual depth is just 1/invDepth, and this is the "Z-depth". However, these depth maps seem to be far from the ground truth. I noticed this when I used these depth maps to warp one view to a neighboring view, and the image loss was significant.
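Concretely, the conversion I mean is just this (a small sketch; I'm assuming invDepth is the per-pixel inverse-depth map returned as render_pkg["depth"] on the dev branch):

```python
import torch

inv_depth = render_pkg["depth"]           # inverse depth from the dev-branch rasterizer
eps = 1e-6                                # guard against division by zero in empty regions
z_depth = 1.0 / inv_depth.clamp(min=eps)  # per-pixel Z-depth along the camera axis
```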
Here is an example depth map generated by GS for truck, view 000108.jpg, trained for 7k iterations:
I think it might have to do with these huge splats that influence the depth incorrectly.
I would appreciate any insight into this.
I tried some things to get rid of the splats that were negatively impacting the depth map. I pruned away splats based on a combination of:
The resulting depth map (shown below) is much more accurate. I.e., when I use it to warp the view to a neighboring view, the image loss is very low.
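For anyone who wants to run the same check, here is a minimal sketch of the depth-based warping I mean, in plain PyTorch. The function name, the shared intrinsics K, and the relative pose T_src_to_nbr are my own assumptions for illustration, not something provided by this repo:

```python
import torch
import torch.nn.functional as F

def warp_loss_to_neighbor(src_img, src_depth, K, T_src_to_nbr, nbr_img):
    """Warp the neighbor image into the source view using the source Z-depth,
    then return the photometric L1 loss against the source image.

    src_img:      (3, H, W) source RGB
    src_depth:    (H, W) metric Z-depth of the source view
    K:            (3, 3) pinhole intrinsics (assumed shared by both views)
    T_src_to_nbr: (4, 4) pose mapping source-camera points to the neighbor camera
    nbr_img:      (3, H, W) neighbor RGB
    """
    _, H, W = src_img.shape
    # Pixel grid -> homogeneous pixel coordinates (3, H*W)
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=0).float().reshape(3, -1)

    # Back-project to 3D in the source camera frame using the Z-depth
    pts = torch.linalg.inv(K) @ pix * src_depth.reshape(1, -1)
    pts_h = torch.cat([pts, torch.ones(1, pts.shape[1])], dim=0)

    # Move points into the neighbor camera frame and project with the intrinsics
    pts_nbr = (T_src_to_nbr @ pts_h)[:3]
    proj = K @ pts_nbr
    uv = proj[:2] / proj[2:].clamp(min=1e-6)

    # Normalize to [-1, 1] for grid_sample and fetch neighbor colors
    grid = torch.stack([uv[0] / (W - 1) * 2 - 1,
                        uv[1] / (H - 1) * 2 - 1], dim=-1).reshape(1, H, W, 2)
    warped = F.grid_sample(nbr_img.unsqueeze(0), grid, align_corners=True)

    # Photometric (L1) loss between the source image and the warped neighbor
    return (warped.squeeze(0) - src_img).abs().mean()
```

A low returned loss over most pixels is what I mean by the depth map being consistent with the neighboring view; occlusions and image borders will still contribute some error.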
Side note: I think this means that the current implementation of depth normalization is flawed (unless you can reliably prune the incorrect splats).