Anttwo / SuGaR

[CVPR 2024] Official PyTorch implementation of SuGaR: Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering
https://anttwo.github.io/sugar/

Increasing output quality to maxx #75

Open vararth opened 8 months ago

vararth commented 8 months ago

First off, a major thank you for combining surface reconstruction and texturing on top of Gaussian Splatting. It was bound to happen, and I want to thank you for taking the initiative on the implementation.

I have managed to create both Gaussian splats and .obj files through this amazing repo, though to be honest I did initially face issues with the setup (I'm still having trouble with COLMAP, but I'm now using another machine to generate the sparse dataset). My question is about increasing the density of vertices for the high-poly output.

At present, 1 million vertices is certainly more than enough for simple turntable videos, but I have noticed a degradation in the quality of the exported meshes for relatively larger objects, like the train from the original Gaussian Splatting sample dataset: in a lot of places the text is blurred, since I presume there aren't enough faces for proper texture application.

If I could increase the vertex count from 1,000,000 to 10,000,000, I think it would really enhance the quality of the exported mesh and textures. Could you please point me to the script I could play around with to test this out?

Many thanks in advance :)

P.S. The images shared are just for comparing the Gaussian splat and the exported OBJ.

[Screenshots attached: screen, screen-obj]

Anttwo commented 8 months ago

Hello @vararth,

Thank you for your nice words!

Concerning the train dataset, as you may have observed, the geometry is not great in the sky area just above the train. As I explained in this issue, this is actually due to the dataset being very challenging: the sky is completely texture-less and there are no pictures taken from above the train, so it is almost impossible to resolve the ambiguity in the inverse projection problem with a pure optimization-based approach like NeRFs or Gaussian Splatting; some learned or handcrafted priors are needed here.

Concerning the quality of the texture and details, there are two simple tools for improvement:

  1. The size of the texture: The number of vertices may actually be enough for a scene like this one; good-looking details are sometimes less about the number of vertices than about the resolution of the texture image. Therefore, increasing the size of the texture may result in sharper results, especially for the blurred text you mention. You can use the arg --square_size when running train.py to change the number of pixels used to map the mesh triangles to the texture image. The default value is 10, so you can try --square_size 20 to double the resolution of the texture.
  2. The number of vertices: As you suggested, increasing the number of vertices may improve the quality of the details. To change the number of vertices, you can use the arg --n_vertices_in_mesh when running train.py. The default value is 1_000_000. However, the number of vertices provided to the script is an upper bound, not an exact count: In practice, we first use the Poisson reconstruction algorithm with a depth parameter equal to 10 to reconstruct a dense mesh of the surface, then decimate that mesh down to the n_vertices_in_mesh provided by the user. Therefore, if you use --n_vertices_in_mesh 10_000_000, I doubt you would get that many vertices, as the Poisson reconstruction algorithm won't generate a mesh with so many vertices (you would basically just skip the mesh decimation). If you really want to increase it, you may have to increase the poisson_depth value here. But keep in mind that using too many vertices in mesh extraction will require more memory during refinement.
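In other words, the effective vertex count is the minimum of the Poisson mesh's size and the requested budget. A minimal sketch of that logic (the function name is hypothetical, not from the repo):

```python
def effective_vertex_count(poisson_mesh_vertices, n_vertices_in_mesh):
    """Decimation only runs when the Poisson mesh exceeds the requested budget,
    so n_vertices_in_mesh acts as an upper bound, not an exact target."""
    if poisson_mesh_vertices <= n_vertices_in_mesh:
        return poisson_mesh_vertices  # decimation is effectively skipped
    return n_vertices_in_mesh  # mesh is decimated down to the budget

# With a depth-10 Poisson mesh of ~3M vertices, a huge budget changes nothing:
print(effective_vertex_count(3_000_000, 10_000_000))  # 3000000
print(effective_vertex_count(3_000_000, 1_000_000))   # 1000000
```

This is why raising --n_vertices_in_mesh alone saturates: past the Poisson mesh's own size, only a deeper Poisson reconstruction adds vertices.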

In the end, I think increasing the size of the texture is the best option you have to get sharp text and details on the train. Looking forward to your answer!
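As a rough back-of-the-envelope illustration of why --square_size dominates sharpness here (assuming, purely for illustration, that each square_size x square_size texture square packs two triangles; the actual packing in train.py may differ): the texture's side length scales linearly with square_size, so doubling it doubles the texel density on every face.

```python
import math

def texture_side_pixels(n_triangles, square_size):
    # Illustrative assumption: two triangles per texture square.
    n_squares = math.ceil(n_triangles / 2)
    side_in_squares = math.ceil(math.sqrt(n_squares))
    return side_in_squares * square_size

# Doubling square_size doubles the texture resolution for the same mesh:
print(texture_side_pixels(2_000_000, 10))  # 10000
print(texture_side_pixels(2_000_000, 20))  # 20000
```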

Please refer to the README.md file for more information about the arguments of train.py; there is a dropdown list that provides details about all the available arguments. Here are a few examples related to your question:

[Screenshots attached: mesh_extraction_args, sugar_refinement_args]
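For instance, a run combining both options could look like this (the dataset and checkpoint paths below are hypothetical placeholders; -s, -c, and -r follow the README's usage for the scene, the vanilla 3DGS checkpoint, and the regularization method):

```shell
# Hypothetical paths; point -s at your COLMAP dataset and -c at your 3DGS checkpoint.
python train.py \
    -s ./data/train \
    -c ./output/vanilla_gs/train \
    -r density \
    --square_size 20 \
    --n_vertices_in_mesh 1000000
```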

GongMingCarmen commented 8 months ago

@vararth Is the first picture the point cloud? How did you set the parameters to get a result like yours? My result is terrible.