autonomousvision / gaussian-opacity-fields

Gaussian Opacity Fields: Efficient and Compact Surface Reconstruction in Unbounded Scenes
https://niujinshuchong.github.io/gaussian-opacity-fields/

Questions about DTU evaluation #60

Closed LinzhouLi closed 2 weeks ago

LinzhouLi commented 2 weeks ago

Thanks for your great work! However, I found some issues with evaluation on the DTU dataset.

  1. Training is slow. Using the latest code on an RTX 3090 GPU, training one DTU scan takes about 1-1.5 hours, whereas 2DGS training takes only ~10 minutes.
  2. I can't reproduce the Chamfer distance numbers from the paper (0.74 in the paper vs. 0.80 reproduced).
  3. The code in evaluate_dtu_mesh.py seems to use the mesh extracted by TSDF fusion rather than by the Tetrahedral Grid method introduced in the paper.
  4. Mesh extraction on the DTU dataset with the Tetrahedral Grid method fails.
Reproduced results on the DTU dataset (Chamfer distance, lower is better):

| Method | 24 | 37 | 40 | 55 | 63 | 65 | 69 | 83 | 97 | 105 | 106 | 110 | 114 | 118 | 122 | Mean |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GOF (paper) | 0.50 | 0.82 | 0.37 | 0.37 | 1.12 | 0.74 | 0.73 | 1.18 | 1.29 | 0.68 | 0.77 | 0.90 | 0.42 | 0.66 | 0.49 | 0.74 |
| GOF (reproduced) | 0.54 | 0.85 | 0.36 | 0.38 | 1.30 | 0.85 | 0.78 | 1.20 | 1.32 | 0.73 | 0.76 | 1.24 | 0.46 | 0.68 | 0.51 | 0.80 |
| 2DGS (reproduced) | 0.47 | 0.81 | 0.33 | 0.37 | 0.94 | 0.85 | 0.78 | 1.31 | 1.24 | 0.67 | 0.67 | 1.44 | 0.40 | 0.66 | 0.47 | 0.76 |
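For reference, the Chamfer distances in the table above are symmetric nearest-neighbour distances between the reconstructed and ground-truth point clouds. A minimal brute-force sketch (the official DTU evaluation additionally applies observation masks and a distance threshold, which this toy version omits):

```python
import math

def chamfer_distance(pts_a, pts_b):
    """Symmetric Chamfer distance between two lists of 3D points (brute force)."""
    def one_sided(src, dst):
        # Mean distance from each source point to its nearest neighbour in dst.
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    # Average of accuracy (a -> b) and completeness (b -> a).
    return 0.5 * (one_sided(pts_a, pts_b) + one_sided(pts_b, pts_a))
```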

Scan24 mesh extracted by the Tetrahedral Grid method: [image]

Could you please give me some hints to fix these issues?

niujinshuchong commented 2 weeks ago

Hi,

  1. The latest code is much faster; training takes ~20 minutes on average on the DTU dataset with an A100 GPU. It is slower than 2DGS because our model uses more points due to the abs grad metric (see our paper for details).

  2. I just reran the code from an earlier commit and got the following results: [Screenshot from 2024-06-13 15-47-27] These are very close to what we report in the paper, and even slightly better. However, with the latest commit the results are as follows: [Screenshot from 2024-06-13 15-50-42] This is consistent with what you got. There may be some randomness between training runs, or precision differences introduced when we merged common computations in the latest commit; I am not sure what causes the discrepancy. If you want to reproduce the numbers in the paper, you can use the old version, which is slower.

  3. The GT point cloud in the DTU dataset is not complete, so we usually need to filter the mesh based on the object masks. However, the tetrahedral mesh cannot be filtered completely, and the resulting mesh is very large, which makes the evaluation very slow. Therefore, we use TSDF fusion for the DTU dataset.

  4. You need to zoom in to the middle of the mesh in MeshLab, since the tetrahedral mesh also contains the background region.
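The mask-based filtering mentioned in point 3 can be sketched as follows: project each mesh vertex into every camera and keep vertices that fall inside the object mask in at least one view. The function name and the shapes of `projections` and `masks` are illustrative assumptions, not the repository's actual API:

```python
import numpy as np

def filter_vertices_by_masks(vertices, projections, masks):
    """vertices: (N, 3) array; projections: list of 3x4 world-to-image
    matrices; masks: list of (H, W) boolean object masks, one per view.
    Returns a boolean keep-mask over the vertices."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])  # (N, 4) homogeneous
    keep = np.zeros(len(vertices), dtype=bool)
    for P, mask in zip(projections, masks):
        uvw = homo @ P.T                       # project into the image plane
        uv = uvw[:, :2] / uvw[:, 2:3]          # perspective divide
        u = uv[:, 0].round().astype(int)
        v = uv[:, 1].round().astype(int)
        h, w = mask.shape
        visible = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (uvw[:, 2] > 0)
        # A vertex survives if any view places it inside the object mask.
        keep[visible] |= mask[v[visible], u[visible]]
    return keep
```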
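As a toy illustration of the abs grad metric from point 1: screen-space gradients with opposite signs cancel under plain summation but not when their absolute values are accumulated, so the abs-grad criterion flags more Gaussians for densification. In the actual rasterizer the absolute value is applied per pixel in the backward pass; this hedged sketch with illustrative names only demonstrates the cancellation effect:

```python
import numpy as np

def densification_metrics(grads):
    """grads: (num_steps, num_gaussians, 2) screen-space position gradients.
    Returns (standard, abs-grad) per-Gaussian densification metrics."""
    grads = np.asarray(grads, dtype=float)
    # Standard criterion: norm of the signed sum (opposite signs cancel).
    std_metric = np.linalg.norm(grads.sum(axis=0), axis=-1) / len(grads)
    # Abs-grad criterion: norm of the element-wise absolute sum (no cancellation).
    abs_metric = np.linalg.norm(np.abs(grads).sum(axis=0), axis=-1) / len(grads)
    return std_metric, abs_metric
```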

LinzhouLi commented 2 weeks ago

Thanks for your answer! I found the extracted mesh. [image]