Xiangyu1Sun / Factorize-3DGS

https://xiangyu1sun.github.io/Factorize-3DGS/
MIT License

Reproducing results #2

Open soon0698 opened 2 weeks ago

soon0698 commented 2 weeks ago

Hi, thanks for the great work!

I'm interested in your approach, and I have two questions.

1) For the (alpha-masked) TandT dataset, we need the point-histogram results from the original 3D-GS, but there does not seem to be an established way to train this dataset with the original 3D-GS (e.g., reading bbox.txt, etc.).

I would like to ask how you learned it.

2) The paper presents results for the Mip-360 dataset, but I failed to reproduce them (e.g., for bonsai with CP, I get 27.34 dB, and VM-96 is much worse). Perhaps some hyperparameter tuning, such as the learning rate, is needed.

Can I know the detailed conditions you used in your experiments?

Thanks.

Xiangyu1Sun commented 1 week ago

Thank you for your interest in our work and sorry for the late response.

  1. For unbounded scenes such as the Tanks&Temples and Mip-360 datasets, we use the per-axis min and max coordinates of the pre-trained 3DGS point cloud as the scene bounds.
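A minimal sketch of that bounding rule, assuming the point cloud centers are available as an `(N, 3)` array (the variable names and random data here are illustrative, not from the repo):

```python
import numpy as np

# Hypothetical stand-in for the centers of a pre-trained 3DGS point cloud
# (in practice these would be loaded from the trained point_cloud.ply).
rng = np.random.default_rng(0)
xyz = rng.uniform(-5.0, 5.0, size=(10_000, 3))

# Use the per-axis min/max of the pre-trained point cloud as the scene
# bounds, instead of reading them from a bbox.txt file.
bbox_min = xyz.min(axis=0)
bbox_max = xyz.max(axis=0)
```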

  2. For the Mip-360 setting, the point distribution in a 360° scene is sparse in some regions and dense in others. So we construct factorized coordinates only in the histogram bins whose point count exceeds the threshold lambda. We set lambda to 32 and the average bin interval to 0.1. After obtaining the pre-trained 3DGS primitives, we use 4×4 factorized coordinates in dense bins (count > lambda), combined with freedom coordinates in sparse bins (count < lambda); freedom coordinates means 1×1, i.e., the same as the original 3DGS primitives.
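A hedged 1D sketch of that bin/threshold rule (names, data, and the 1D simplification are mine, not the repo's code; the actual implementation works on 3D coordinates):

```python
import numpy as np

LAMBDA = 32      # point-count threshold per bin (from the reply above)
BIN_SIZE = 0.1   # average interval of histogram bins (from the reply above)

# Illustrative 1D coordinates: a dense cluster plus a sparse background.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 0.05, 500),     # dense region
                    rng.uniform(-2.0, 2.0, 100)])   # sparse region

# Histogram the coordinates with the given bin interval.
edges = np.arange(x.min(), x.max() + BIN_SIZE, BIN_SIZE)
counts, _ = np.histogram(x, bins=edges)

# Bins exceeding LAMBDA points get 4x4 factorized coordinates; sparse
# bins keep 1x1 "freedom" coordinates (plain 3DGS primitives).
dense_bin = counts > LAMBDA
bin_idx = np.clip(np.digitize(x, edges) - 1, 0, len(counts) - 1)
point_mode = np.where(dense_bin[bin_idx], "4x4", "1x1")
```

With this toy data, points in the central cluster land in bins holding far more than 32 points and are marked `"4x4"`, while the uniform background stays at `"1x1"`.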