ptg666 opened this issue 1 year ago
You can use open3d for visualization.
Short answer: not exactly.
One problem is what you showed: the messy background.
I don't know how much noise your task can tolerate, but another problem with gaussian splatting is that for featureless regions (e.g. a white wall), instead of many points covering the region, it tends to end up with only a few points with large scales, which is very different from the point clouds you can get from e.g. multi-view stereo or lidar.
@kwea123 Just seed more points in these areas and you get a more precise GS. @ptg666 Meshlab isn't good for checking GS, but you can use it before training to clean/seed the initial point cloud (from colmap).
@jaco001 Even if you seed more, they will eventually be removed by pruning, unless you fix them.
For visualizing the result as a demonstration, I think the point cloud generated after the Gaussian optimization is still valuable.
The left ones are the initial input point clouds generated by SfM, while the right ones are generated by densification after the Gaussian optimization. Even with the pruning, we can see a denser point cloud compared to the others. To prevent pruning, you can even disable the densification stage (the pruning happens as part of it).
But of course, the colorful, detailed regions will be denser than the featureless regions.
My task requires centimeter-accurate point clouds and cannot tolerate noise. @kwea123
It doesn't work in the truck scene. Is it possible that the initial point cloud quality is not very good? @ljjTYJR
@ljjTYJR May I ask what software is used to visualize this, and which point cloud file is imported? I imported "output/name/point_cloud/iteration_30000/point_cloud.ply" into meshlab as shown in the first image. I expected to be able to produce the effect shown in the second image.
The 'easiest' way is to use SIBR_gaussianViewer_app.exe with a white background. Meshlab doesn't use all the data stored in point_cloud.ply from GS (for now).
@yangqinhui0423
I read the generated 3D Gaussian point cloud in a way similar to https://github.com/graphdeco-inria/gaussian-splatting/blob/2eee0e26d2d5fd00ec462df47752223952f6bf4e/scene/gaussian_model.py#L215-L257
You can use the Gaussian means as the point coordinates and convert the features to RGB colors. For visualization, I simply use open3d to visualize the generated point cloud.
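A minimal sketch of that last step (my own illustration; `xyz` and `rgb` are placeholder names for the N x 3 arrays of Gaussian means and recovered colors, with `rgb` already in [0, 1]):

```python
import numpy as np
import open3d as o3d

# xyz: (N, 3) Gaussian means used as point coordinates
# rgb: (N, 3) colors recovered from the SH features, in [0, 1]
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(xyz.astype(np.float64))
pcd.colors = o3d.utility.Vector3dVector(rgb.astype(np.float64))
o3d.visualization.draw_geometries([pcd])  # interactive open3d viewer
```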
Sir, may I ask how to get the point cloud with color?
@yangqinhui0423 Sir, did you solve this problem?
@luoxue-star @hanhantie233
Briefly speaking, after training the Gaussians we obtain a *.ply file, which has spherical harmonics parameters attached to each Gaussian. To recover the RGB color of each Gaussian, we can use the SH2RGB function in the code. To visualize the colored point cloud, I use the open3d package.
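For reference, SH2RGB in the repo's utils/sh_utils.py is just the degree-0 spherical-harmonics scaling (shown here as a sketch; C0 is the standard degree-0 basis constant):

```python
# degree-0 (DC) SH basis constant, 1 / (2 * sqrt(pi))
C0 = 0.28209479177387814

def SH2RGB(sh):
    # maps a DC SH coefficient to an (unclamped) RGB value
    return sh * C0 + 0.5
```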
@ljjTYJR Thank you for your response. I encountered some negative values when using the SH2RGB function to obtain color. I use [f_dc_0, f_dc_1, f_dc_2] as inputs for the SH2RGB function. Could you please tell me why?
If you use spherical harmonics in the training, SH2RGB will only convert the base SH parameters with degree (freedom) = 0, i.e. the DC term.
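Since the optimized SH coefficients are not constrained to any range, the converted values can fall outside [0, 1]; a simple option (a sketch of mine, assuming the f_dc_* fields have already been read into an N x 3 numpy array `f_dc`) is to clamp before visualizing:

```python
import numpy as np

C0 = 0.28209479177387814                   # degree-0 SH basis constant
rgb = np.clip(f_dc * C0 + 0.5, 0.0, 1.0)   # SH2RGB followed by a clamp to [0, 1]
```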
@luoxue-star Hello, have you got the correct color? I have the same question QAQ
@ljjTYJR Could you please explain it in more detail? I used the following steps to recover the color, but only a few points have color and it is not accurate:

- Load the PLY file with Open3D
- Load f_dc_0, f_dc_1, f_dc_2 and recover the color with SH2RGB; I only use the base SH parameters with degree 0
- This gives an Nx3 numpy array which still has negative values, so I use torch.clamp_min in the same way as the author
- Normalize the result of step 3 to [0, 1] and multiply by 255
- Convert to np.uint8
- xyz -> pcd.points, colors -> pcd.colors
- Save the PCD file and visualize it with CloudCompare

This is the colored point cloud I get.
This is the dense colmap reconstruction point cloud.
I want to obtain a point cloud with color like colmap's.
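A guess at the step that goes wrong (my addition, not from the thread participants): open3d's pcd.colors expects floating-point RGB values in [0, 1], so multiplying by 255 and converting to np.uint8 makes most channels read as values far above 1. Keeping the clamped values as floats should be enough, roughly:

```python
import numpy as np
import open3d as o3d

# xyz: (N, 3) Gaussian means; rgb01: (N, 3) SH2RGB output already clamped to [0, 1]
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(xyz)
pcd.colors = o3d.utility.Vector3dVector(rgb01)            # keep floats in [0, 1], skip the *255 / uint8 step
o3d.io.write_point_cloud("colored_gaussians.ply", pcd)    # output name is arbitrary
```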
Sorry, I'm a newbie. How does the second step work? How to read color data? How to use the SH2RGB function?
Did it get solved? I want to be able to view the RGB ply file in meshlab
Hello, I analyzed the source code; we have to extract the RGB information ourselves. The following is my code:

```python
import numpy as np
import open3d as o3d
from plyfile import PlyData

path = r"E:\RenProject\gaussian-splatting\output\cht\point_cloud\iteration_30000\point_cloud.ply"
plydata = PlyData.read(path)

# Gaussian means as point coordinates
xyz = np.stack((np.asarray(plydata.elements[0]["x"]),
                np.asarray(plydata.elements[0]["y"]),
                np.asarray(plydata.elements[0]["z"])), axis=1)

# DC spherical-harmonics coefficients, one per color channel
features_dc = np.zeros((xyz.shape[0], 3, 1))
features_dc[:, 0, 0] = np.asarray(plydata.elements[0]["f_dc_0"])
features_dc[:, 1, 0] = np.asarray(plydata.elements[0]["f_dc_1"])
features_dc[:, 2, 0] = np.asarray(plydata.elements[0]["f_dc_2"])
f_d = np.transpose(features_dc, axes=(0, 2, 1))
f_d_t = f_d[:, 0, :]

# use the raw DC coefficients directly as colors
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(xyz)
pcd.colors = o3d.utility.Vector3dVector(f_d_t)
o3d.io.write_point_cloud("cht-color.ply", pcd)
```
Since the Gaussian kernel falls off exponentially from the mean, there will be some color values exceeding 1. Here is the code I used to deal with this:

```python
import numpy as np
import open3d as o3d
from plyfile import PlyData
from utils.sh_utils import SH2RGB  # from the gaussian-splatting repo

plydata = PlyData.read(gs_path)
xyz = np.stack((np.asarray(plydata.elements[0]["x"]),
                np.asarray(plydata.elements[0]["y"]),
                np.asarray(plydata.elements[0]["z"])), axis=1)
features_dc = np.zeros((xyz.shape[0], 3, 1))
features_dc[:, 0, 0] = np.asarray(plydata.elements[0]["f_dc_0"])
features_dc[:, 1, 0] = np.asarray(plydata.elements[0]["f_dc_1"])
features_dc[:, 2, 0] = np.asarray(plydata.elements[0]["f_dc_2"])
rgb = SH2RGB(features_dc[..., 0])
# clamp the lower bound of rgb values to 0
rgb = np.maximum(rgb, 0)

# drop nearly transparent Gaussians (the ply stores pre-activation opacity, hence the sigmoid)
opacities = np.asarray(plydata.elements[0]["opacity"])[..., np.newaxis]
opacities = self.sigmoid(opacities)
opacity_mask = (opacities > 0.005).squeeze(1)
xyz = xyz[opacity_mask]
rgb = rgb[opacity_mask]

# for points with rgb values larger than 1, rescale all channels so the largest channel becomes 1
max_rgb = np.max(rgb, axis=1)
max_rgb = np.maximum(max_rgb, 1)
rgb = rgb / max_rgb[:, np.newaxis]

# for checking
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(xyz)
pcd.colors = o3d.utility.Vector3dVector(rgb)
o3d.io.write_point_cloud("test.ply", pcd)
```
Hope this code can help others!
I am relatively new to coding. For the line with opacities = self.sigmoid(opacities): what is the self, and which class is it from?
It is just a sigmoid function implemented with numpy (the ply stores the pre-activation opacity, so the sigmoid maps it back into (0, 1)):

```python
def sigmoid(x):
    return 1 / (1 + np.exp(-x))
```
You can refer to this part of __init__.py in gaussian_renderer:

```python
# If precomputed colors are provided, use them. Otherwise, if it is desired to precompute colors
shs = None
colors_precomp = None
if override_color is None:
    if pipe.convert_SHs_python:
        shs_view = pc.get_features.transpose(1, 2).view(-1, 3, (pc.max_sh_degree+1)**2)
        dir_pp = (pc.get_xyz - viewpoint_camera.camera_center.repeat(pc.get_features.shape[0], 1))
        dir_pp_normalized = dir_pp/dir_pp.norm(dim=1, keepdim=True)
        sh2rgb = eval_sh(pc.active_sh_degree, shs_view, dir_pp_normalized)
        colors_precomp = torch.clamp_min(sh2rgb + 0.5, 0.0)
    else:
        shs = pc.get_features
else:
    colors_precomp = override_color
```
Copy that code into save_ply in gaussian_model.py. You also need to modify construct_list_of_attributes, change the signature to save_ply(self, path, viewpoint_camera), change the save call in train.py to scene.save(iteration, viewpoint_cam), and likewise make Scene's save in scene/__init__.py take save(self, iteration, viewpoint_cam). The final code is:

```python
def construct_list_of_attributes(self):
    l = ['x', 'y', 'z', 'nx', 'ny', 'nz']
    for i in range(self._features_dc.shape[1]*self._features_dc.shape[2]):
        l.append('f_dc_{}'.format(i))
    for i in range(self._features_rest.shape[1]*self._features_rest.shape[2]):
        l.append('f_rest_{}'.format(i))
    l.append('opacity')
    for i in range(self._scaling.shape[1]):
        l.append('scale_{}'.format(i))
    for i in range(self._rotation.shape[1]):
        l.append('rot_{}'.format(i))
    # Add color attributes
    l.append('red')
    l.append('green')
    l.append('blue')
    return l
```
And in save_ply, modify it as follows:

```python
......
# from SHs in Python, do it. If not, then SH -> RGB conversion will be done by rasterizer.
shs_view = self.get_features.transpose(1, 2).view(-1, 3, (self.max_sh_degree+1)**2)
dir_pp = (self.get_xyz - viewpoint_camera.camera_center.repeat(self.get_features.shape[0], 1))
dir_pp_normalized = dir_pp/dir_pp.norm(dim=1, keepdim=True)
sh2rgb = eval_sh(self.active_sh_degree, shs_view, dir_pp_normalized)
# colors_precomp = torch.clamp_min(sh2rgb + 0.5, 0.0).cpu().numpy()
# colors_precomp = (torch.clamp_min(sh2rgb, 0.0) * 255).cpu().numpy()
# make sure the color values are correctly mapped into the 0-255 range
colors_precomp = (torch.clamp_min(sh2rgb + 0.4, 0.0) * 255).cpu().numpy()

dtype_full = [(attr, 'f4') if attr not in ['red', 'green', 'blue'] else (attr, 'u1') for attr in self.construct_list_of_attributes()]
# dtype_full = [(attribute, 'f4') for attribute in self.construct_list_of_attributes()]
elements = np.empty(xyz.shape[0], dtype=dtype_full)
attributes = np.concatenate((xyz, normals, f_dc, f_rest, opacities, scale, rotation, colors_precomp), axis=1)
......
```

You can tune this yourself by adjusting the sh2rgb + XXX offset. Afterwards both CloudCompare and meshlab work; in meshlab, setting Shading to None or Dot is fine. Or use CloudCompare, see https://github.com/graphdeco-inria/gaussian-splatting/issues/674#issuecomment-1968267819, and then click RGB in the properties panel.
Did you manage to figure out how to view the RGB ply file in meshlab?
Rendering results are great on the truck dataset, but the exported point cloud is messy. I can't even find where the truck is. So my question is whether the point cloud derived from 3D Gaussians is suitable for point-cloud-based tasks?