graphdeco-inria / gaussian-splatting

Original reference implementation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering"
https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/

Question about densification with split and clone #1020

Open leizhenyu-lzy opened 1 month ago

leizhenyu-lzy commented 1 month ago

In the source code shown below, I can see that a bias is added to new_xyz when splitting, but when cloning the new Gaussian simply reuses the same properties. That does not look like the figure from the paper (shown below). Will these two Gaussians then do the same thing (transform in the same way) during training, so that the result will not match the picture below?

Thank you for your help and reply.

[Figure from the paper illustrating the clone and split operations used during densification]

    def densify_and_split(self, grads, grad_threshold, scene_extent, N=2):
        n_init_points = self.get_xyz.shape[0]
        # Extract points that satisfy the gradient condition
        padded_grad = torch.zeros((n_init_points), device="cuda")
        padded_grad[:grads.shape[0]] = grads.squeeze()
        selected_pts_mask = torch.where(padded_grad >= grad_threshold, True, False)
        selected_pts_mask = torch.logical_and(selected_pts_mask,
                                              torch.max(self.get_scaling, dim=1).values > self.percent_dense*scene_extent)

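        # Split: sample N offsets from N(0, scaling) in the Gaussian's local frame,
        # rotate them into world space, and add them to the original centers;
        # the new Gaussians also get their scale shrunk by a factor of 0.8*N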
        stds = self.get_scaling[selected_pts_mask].repeat(N,1)
        means = torch.zeros((stds.size(0), 3), device="cuda")
        samples = torch.normal(mean=means, std=stds)
        rots = build_rotation(self._rotation[selected_pts_mask]).repeat(N,1,1)
        new_xyz = torch.bmm(rots, samples.unsqueeze(-1)).squeeze(-1) + self.get_xyz[selected_pts_mask].repeat(N, 1)  # add bias
        new_scaling = self.scaling_inverse_activation(self.get_scaling[selected_pts_mask].repeat(N,1) / (0.8*N))
        new_rotation = self._rotation[selected_pts_mask].repeat(N,1)
        new_features_dc = self._features_dc[selected_pts_mask].repeat(N,1,1)
        new_features_rest = self._features_rest[selected_pts_mask].repeat(N,1,1)
        new_opacity = self._opacity[selected_pts_mask].repeat(N,1)

        self.densification_postfix(new_xyz, new_features_dc, new_features_rest, new_opacity, new_scaling, new_rotation)

        prune_filter = torch.cat((selected_pts_mask, torch.zeros(N * selected_pts_mask.sum(), device="cuda", dtype=bool)))
        self.prune_points(prune_filter)

    def densify_and_clone(self, grads, grad_threshold, scene_extent):
        # Extract points that satisfy the gradient condition
        selected_pts_mask = torch.where(torch.norm(grads, dim=-1) >= grad_threshold, True, False)
        selected_pts_mask = torch.logical_and(selected_pts_mask,
                                              torch.max(self.get_scaling, dim=1).values <= self.percent_dense*scene_extent)

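        # Clone: copy the selected Gaussians verbatim; no positional offset is applied,
        # so the new and old Gaussians start with identical parameters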
        new_xyz = self._xyz[selected_pts_mask]
        new_features_dc = self._features_dc[selected_pts_mask]
        new_features_rest = self._features_rest[selected_pts_mask]
        new_opacities = self._opacity[selected_pts_mask]
        new_scaling = self._scaling[selected_pts_mask]
        new_rotation = self._rotation[selected_pts_mask]

        self.densification_postfix(new_xyz, new_features_dc, new_features_rest, new_opacities, new_scaling, new_rotation)
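
For clarity, here is a small standalone sketch of the difference I mean. It is not from the repository: the tensors are toy stand-ins with made-up shapes, and the rotation step of the split is omitted.

    import torch

    # Toy stand-ins for the selected Gaussians (shapes chosen only for illustration)
    xyz = torch.randn(5, 3)           # centers
    scaling = torch.rand(5, 3) * 0.1  # per-axis standard deviations

    # Split: new centers are offset by noise sampled with the Gaussian's own std
    samples = torch.normal(mean=torch.zeros_like(scaling), std=scaling)
    split_xyz = xyz + samples

    # Clone: new centers are exact copies, no offset is applied
    clone_xyz = xyz.clone()

    print(torch.equal(clone_xyz, xyz))  # True  -> identical parameters
    print(torch.equal(split_xyz, xyz))  # False -> an offset was applied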

AsherJingkongChen commented 1 month ago

Both the small splats and the remaining splats are treated the same way in the next iteration and afterwards. I think those small splats could be given a 0.5-std bias, which would make them different.

djx99 commented 1 month ago

Yes, it seems that when cloning, the original Gaussian is copied exactly (new_xyz = self._xyz[selected_pts_mask]) and no offset is added. Can anyone explain this?

> Both the small splats and the remaining splats are treated the same way in the next iteration and afterwards. I think those small splats could be given a 0.5-std bias, which would make them different.

Why would they have a 0.5-std bias, and what does a 0.5-std bias mean? Thank you very much.

AsherJingkongChen commented 1 month ago

> Why would they have a 0.5-std bias, and what does a 0.5-std bias mean? Thank you very much.

@djx99 I forgot that the original implementation uses a different deviation for every point; I simplified that by using std = 1.0. That detail should be irrelevant to the original question.

I think cloning could use a lower deviation than splitting, but it should not make a big difference since the scales involved are small.

You can try it by copying the sampling part of the split code and multiplying by a real number less than 1.0, for example something like the sketch below.
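
A minimal sketch of that idea, assuming the same GaussianModel attributes and helpers (build_rotation, densification_postfix) as the snippets above; the method name densify_and_clone_jittered and the jitter factor of 0.5 are made up for illustration:

    def densify_and_clone_jittered(self, grads, grad_threshold, scene_extent, jitter=0.5):
        # Same selection rule as densify_and_clone
        selected_pts_mask = torch.where(torch.norm(grads, dim=-1) >= grad_threshold, True, False)
        selected_pts_mask = torch.logical_and(selected_pts_mask,
                                              torch.max(self.get_scaling, dim=1).values <= self.percent_dense*scene_extent)

        # Borrow the sampling from densify_and_split, scaled down by jitter (< 1.0)
        stds = jitter * self.get_scaling[selected_pts_mask]
        means = torch.zeros((stds.size(0), 3), device="cuda")
        samples = torch.normal(mean=means, std=stds)
        rots = build_rotation(self._rotation[selected_pts_mask])
        offsets = torch.bmm(rots, samples.unsqueeze(-1)).squeeze(-1)

        # Copies of the selected Gaussians, shifted by the small offset
        new_xyz = self._xyz[selected_pts_mask] + offsets
        new_features_dc = self._features_dc[selected_pts_mask]
        new_features_rest = self._features_rest[selected_pts_mask]
        new_opacities = self._opacity[selected_pts_mask]
        new_scaling = self._scaling[selected_pts_mask]
        new_rotation = self._rotation[selected_pts_mask]

        self.densification_postfix(new_xyz, new_features_dc, new_features_rest, new_opacities, new_scaling, new_rotation)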

djx99 commented 1 month ago

Thank you for your reply. I have just started learning 3DGS and ran into this question while reading this function. If I have time I will try it, though I don't think it will affect the results much. Thanks again.