maincold2 / Compact-3DGS

The official repository of Compact 3D Gaussian Representation for Radiance Field

Save and Load Ply like original 3DGS #1

Open dlazares opened 9 months ago

dlazares commented 9 months ago

Saving and loading the PLY is handled differently than in the original 3DGS. Based on my reading, I thought it should be possible to save it in the standard format, since you use the original paper's rasterizer.

I took a stab at it but couldn't get it to work at first. I had to use the inverse scaling activation, which doesn't seem right to me, but it works 🤷🏽. I also noticed that the renderer uses both the activated get_scaling and the unactivated _scaling here: https://github.com/maincold2/Compact-3DGS/blob/78d077a61b03bf32d7ec6bfba92b526d7192200d/gaussian_renderer/__init__.py#L58-L86

Why is that? Also, you sometimes apply the mask to the scale; could you explain that as well? Which version is the proper one to save out?
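
For reference, here is the activation convention I am assuming (exp/log for scale, sigmoid for opacity, as in the original 3DGS); the snippet below is just my mental model, not code from this repo:

    import torch

    # The original 3DGS PLY stores *unactivated* (log-space) scales; the
    # rasterizer applies exp() at render time. So anything that is already
    # activated has to go back through log() before it is written out.
    scaling_activation = torch.exp          # stored log-scale -> world-space scale
    scaling_inverse_activation = torch.log  # world-space scale -> stored log-scale
    opacity_activation = torch.sigmoid      # stored logit -> opacity in [0, 1]

    raw = torch.randn(5, 3)                            # stored parameters
    world = scaling_activation(raw)                    # what the rasterizer consumes
    assert torch.allclose(scaling_inverse_activation(world), raw, atol=1e-6)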

Here's my save code, which is working decently for me right now, but I'd love to improve it and turn it into a PR:

    def construct_full_list_of_attributes(self):
        # Same layout as the original 3DGS PLY, minus the f_rest_* SH bands
        # (colors come from the MLP head instead of per-point SH coefficients).
        l = ['x', 'y', 'z', 'nx', 'ny', 'nz']
        for i in range(3):
            l.append('f_dc_{}'.format(i))
        l.append('opacity')
        for i in range(self._scaling.shape[1]):
            l.append('scale_{}'.format(i))
        for i in range(self._rotation.shape[1]):
            l.append('rot_{}'.format(i))
        return l

    def save_true_ply(self, path):
        mkdir_p(os.path.dirname(path))

        xyz = self._xyz.detach().cpu().numpy()
        normals = np.zeros_like(xyz)
        opacities = self._opacity.detach().cpu().numpy()
        # Store log-space scales, as the original 3DGS PLY expects.
        scale = self.scaling_inverse_activation(self._scaling).detach().cpu().numpy()
        #if self._mask.shape[0] > 0:
        #    scale = (self._scaling*self._mask).detach().cpu().numpy()
        #    opacities = (self._opacity * self._mask).detach().cpu().numpy()
        rotation = self._rotation.detach().cpu().numpy()

        # Evaluate the color MLP with a fixed, arbitrary view direction [0, 0, 1].
        dir_pp = torch.tensor([0, 0, 1], dtype=torch.float32, device=self._feature.device).repeat(xyz.shape[0], 1)
        dir_pp = dir_pp / dir_pp.norm(dim=1, keepdim=True)
        shs = self.mlp_head(torch.cat([self._feature, self.direction_encoding(dir_pp)], dim=-1))

        # The MLP output is written directly into the f_dc_* slots.
        colors = shs.detach().cpu().numpy()
        #colors = torch.clamp(SH2RGB(shs) * 255,0,255)
        #colors_np = colors.detach().cpu().numpy().astype(np.uint8)

        dtype_full = [(attribute, 'f4') for attribute in self.construct_full_list_of_attributes()]
        elements = np.empty(xyz.shape[0], dtype=dtype_full)
        attributes = np.concatenate((xyz, normals, colors, opacities, scale, rotation), axis=1)
        elements[:] = list(map(tuple, attributes))
        el = PlyElement.describe(elements, 'vertex')
        PlyData([el]).write(path)
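
I call it the same way as the original save_ply; for example (the path below is just an illustration, not the repo's actual layout):

    # hypothetical call site, mirroring where save_ply is invoked
    gaussians.save_true_ply(os.path.join(point_cloud_path, "point_cloud_true.ply"))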
maincold2 commented 8 months ago

Thank you for your interest in our work and sorry for the late response.

We learn the codebooks on the activated attributes because it gives better performance. Then, at the end of training, we apply VQ to the _scaling and _rotation parameters and save the results to the PLY, because the renderer references the unactivated parameters.

https://github.com/maincold2/Compact-3DGS/blob/a46a15ea8c600d8a24412871bae6da816dcb406e/scene/gaussian_model.py#L427-L434
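
Conceptually, R-VQ reconstructs each attribute as the sum of one codeword per stage. A generic sketch of the decoding step (illustrative names, not the repository's exact code):

    import torch

    def rvq_decode(indices, codebooks):
        # indices: (N, num_stages) integer codes, one per residual stage
        # codebooks: list of (codebook_size, dim) tensors, one per stage
        out = torch.zeros(indices.shape[0], codebooks[0].shape[1])
        for stage, codebook in enumerate(codebooks):
            out = out + codebook[indices[:, stage]]
        return out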

As mentioned in the updated readme, colors depend on the view direction, which the original viewer does not support. Your saving code probably gives sub-optimal quality because it always uses an arbitrary direction ([0, 0, 1]). We will try to support a proper viewer; for now, please refer to the other options in the readme.
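
Roughly, the renderer computes a per-point view direction from the camera center and feeds it to the color MLP. A small sketch of that difference, reusing the mlp_head / direction_encoding interfaces from your save code (pc and viewpoint_camera stand for the Gaussian model and the camera):

    import torch

    def view_dependent_colors(pc, viewpoint_camera):
        # Direction from the camera center to each Gaussian, normalized per point.
        xyz = pc.get_xyz
        dir_pp = xyz - viewpoint_camera.camera_center.repeat(xyz.shape[0], 1)
        dir_pp = dir_pp / dir_pp.norm(dim=1, keepdim=True)
        # Per-view colors from the neural field; a fixed direction such as [0, 0, 1]
        # bakes a single viewpoint into the exported colors.
        return pc.mlp_head(torch.cat([pc._feature, pc.direction_encoding(dir_pp)], dim=-1))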

bluemoonwencong commented 4 months ago

I also want to get a PLY file that can be opened in the 3DGS viewer, so I kept only the masking (which required changing a lot of code), but I find training becomes unstable: the number of points can drop to zero. Could you give some advice? @maincold2

maincold2 commented 4 months ago

> I also want to get a PLY file that can be opened in the 3DGS viewer, so I kept only the masking (which required changing a lot of code), but I find training becomes unstable: the number of points can drop to zero. Could you give some advice? @maincold2

Which dataset did you try? No extra tuning should be needed when R-VQ and I-NGP are not used, so I am curious about your dataset and the masking hyperparameters.
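
For reference, a rough sketch of how a learnable straight-through binary mask can gate scale and opacity together with a sparsity penalty; the names and the lambda_mask value are illustrative, the actual settings are in the training code:

    import torch

    def masked_scale_opacity(scaling, opacity, mask_param, eps=0.01):
        # Soft mask in (0, 1); hard-thresholded in the forward pass while the
        # gradient flows through the soft mask (straight-through estimator).
        soft = torch.sigmoid(mask_param)
        hard = ((soft > eps).float() - soft).detach() + soft
        return scaling * hard, opacity * hard

    def mask_sparsity_loss(mask_param, lambda_mask=5e-4):
        # Pushes masks toward zero so points get pruned; too aggressive a weight
        # (or an unlucky dataset) can drive the point count toward zero.
        return lambda_mask * torch.sigmoid(mask_param).mean()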

Thanks!