graphdeco-inria / gaussian-splatting

Original reference implementation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering"
https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/

difference between "padded_grad" and "torch.norm(grads, dim=-1)" when performing densification #767

Open NeutrinoLiu opened 6 months ago

NeutrinoLiu commented 6 months ago

Hi, when I compare the condition for split densification with the one for clone densification, I find that the definition of "too large gradient" differs slightly between the two.

For split, the mask is generated at https://github.com/graphdeco-inria/gaussian-splatting/blob/472689c0dc70417448fb451bf529ae532d32c095/scene/gaussian_model.py#L354, while for clone it is generated at https://github.com/graphdeco-inria/gaussian-splatting/blob/472689c0dc70417448fb451bf529ae532d32c095/scene/gaussian_model.py#L376.

I am not quite sure about the functionality of "padded_grad" here. Considering that the arguments passed to these two functions are identical, is there any difference between these two methods of filtering out large-gradient Gaussians? Thx
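For reference, the two conditions boil down to roughly this. I pulled them out of the class into standalone functions myself (the names `split_selection` and `clone_selection` are mine, not the repo's), so treat it as a sketch:

```python
import torch

def split_selection(grads, grad_threshold, n_init_points):
    # densify_and_split path: zero-pad grads up to the current number of
    # points, then threshold the padded values -> mask of shape (n_init_points,)
    padded_grad = torch.zeros(n_init_points, device=grads.device)
    padded_grad[:grads.shape[0]] = grads.squeeze()
    return torch.where(padded_grad >= grad_threshold, True, False)

def clone_selection(grads, grad_threshold):
    # densify_and_clone path: threshold the norm along the last dim
    # -> mask of shape (grads.shape[0],)
    return torch.where(torch.norm(grads, dim=-1) >= grad_threshold, True, False)
```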

KEVIN09876 commented 6 months ago

Same question => Solved

The shape of the tensor grads is (N, 1), where N is the total number of points, so torch.norm(grads, dim=-1) doesn't change the gradient values.
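To make this concrete, here is a small standalone check (not the repo code; `n_init_points` and the example values are made up). As far as I can tell, the zero-padding exists because `densify_and_clone` runs before `densify_and_split` inside `densify_and_prune`, so the point count can grow between `grads` being computed and the split:

```python
import torch

# grads as accumulated in the repo: shape (N, 1), one non-negative
# positional-gradient scalar per Gaussian
grads = torch.tensor([[0.3], [0.0], [0.7]])

# For an (N, 1) tensor, the norm over the last dim is the absolute value
# of the single entry, i.e. it just drops the trailing dimension.
assert torch.allclose(torch.norm(grads, dim=-1), grads.squeeze().abs())

# The split path additionally zero-pads to the current point count: clone
# appends new points first, so at split time n_init_points can exceed
# grads.shape[0]. Padding keeps the mask aligned with the grown point
# list; the new points get gradient 0 and are never selected for splitting.
n_init_points = 5  # hypothetical point count after cloning
padded_grad = torch.zeros(n_init_points)
padded_grad[:grads.shape[0]] = grads.squeeze()

grad_threshold = 0.2
split_mask = padded_grad >= grad_threshold                # length 5
clone_mask = torch.norm(grads, dim=-1) >= grad_threshold  # length 3

# On the original points, the two criteria agree exactly.
assert torch.equal(split_mask[:grads.shape[0]], clone_mask)
```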

NeutrinoLiu commented 6 months ago

> Same question => Solved
>
> The shape of the tensor grads is (N, 1), where N is the total number of points, so torch.norm(grads, dim=-1) doesn't change the gradient values.

It's confusing that they use two different representations for the same functionality, but anyway, thx.