Closed LinzhouLi closed 2 weeks ago
Hi,
The latest code is much faster and training finishes in ~20 min on average on the DTU dataset with an A100 GPU. This is slower than 2DGS because our model uses more points due to the abs grad metric (see our paper for details).
I just reran the code with an earlier commit and got the following results:
These results are very close to what we report in the paper, even slightly better.
However, with the latest commit, the results are as follows:
And this is consistent with what you got. I think the difference comes from randomness across training runs, or from precision issues introduced when we merged common computations in the latest commit. I am not sure exactly what causes it. But if you want to reproduce the results in the paper, you can use the old version, which is slower.
The GT point cloud in the DTU dataset is not complete, so we usually need to filter the mesh based on the object masks. However, the tetrahedral mesh cannot be filtered completely, and the resulting mesh is very large, which makes the evaluation very slow. Therefore, we use TSDF fusion for the DTU dataset.
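For reference, mask-based filtering of the kind mentioned above can be sketched as follows: project each vertex into a view and keep only faces whose vertices land inside the object mask. This is a minimal numpy-only sketch, not the repo's actual evaluation code; the function name and argument layout are my own assumptions.

```python
import numpy as np

def cull_mesh_by_mask(vertices, faces, K, w2c, mask):
    """Keep only faces whose projected vertices land inside the object mask.

    vertices: (V, 3) world-space points; faces: (F, 3) vertex indices;
    K: (3, 3) camera intrinsics; w2c: (4, 4) world-to-camera transform;
    mask: (H, W) boolean object mask for this view.
    """
    H, W = mask.shape
    # Transform vertices into camera space, then project to pixel coordinates.
    cam = w2c[:3, :3] @ vertices.T + w2c[:3, 3:4]          # (3, V)
    pix = K @ cam
    uv = (pix[:2] / np.clip(pix[2:], 1e-8, None)).T        # (V, 2)
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
    in_front = cam[2] > 0
    keep_vertex = in_front & mask[v, u]
    # A face survives only if all three of its vertices are kept.
    keep_face = keep_vertex[faces].all(axis=1)
    return faces[keep_face]
```

In practice this would be repeated over all views (a face is dropped if any view's mask rejects it), which is exactly what becomes expensive on a huge tetrahedral mesh.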
You need to zoom in to the middle of the mesh in MeshLab, since the tetrahedral mesh also contains the background region.
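If zooming around in MeshLab is inconvenient, one can also crop the mesh to the central region programmatically before viewing it. This is a hypothetical numpy sketch (the box center and half-extent are scene-specific values you would have to pick yourself), not part of the repo:

```python
import numpy as np

def crop_to_foreground(vertices, faces, center, half_size):
    """Drop faces with any vertex outside an axis-aligned box around `center`.

    center: (3,) box center; half_size: scalar or (3,) half-extent.
    Returns the filtered (F', 3) face array; vertices are left untouched.
    """
    inside = (np.abs(vertices - np.asarray(center)) <= half_size).all(axis=1)
    # Keep a face only if all three of its vertices lie inside the box.
    keep = inside[faces].all(axis=1)
    return faces[keep]
```

Saving the cropped mesh (e.g. with trimesh or Open3D) then gives a file that opens centered on the object.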
Thanks for your answer! I found the extracted mesh.
Thanks for your great work! But I found some issues with the evaluation on the DTU dataset.
`evalute_dtu_mesh.py` seems to use the mesh extracted by TSDF, not by the Tetrahedral Grid method introduced in the paper.

scan24 mesh extracted by the Tetrahedral Grid method: ![image](https://github.com/autonomousvision/gaussian-opacity-fields/assets/71712265/1e355d03-2c39-422e-83b2-1881bc945ca6)
Could you please give me some hints to fix these issues?