Open ZhuoxiaoLi opened 4 months ago
Hi, did you produce this result using our full evaluation scripts?
I checked this, it looks like
Thanks for your quick reply!!!
I used the same settings as in full_eval.py, namely `--quiet --test_iterations -1 --depth_ratio 1.0 --lambda_dist 1000`. The render settings are also from full_eval.py: `--quiet --skip_train --depth_ratio 1.0 --num_cluster 1`. I will go through the detailed parameters I defined.
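For reference, those flags correspond to invocations roughly like the following (a sketch assuming the standard 2DGS `train.py` / `render.py` entry points; the dataset and output paths are placeholders, not taken from the thread):

```shell
# Training with the full_eval.py settings quoted above
# (paths are placeholders)
python train.py -s /path/to/scene -m output/scene \
    --quiet --test_iterations -1 \
    --depth_ratio 1.0 --lambda_dist 1000

# Rendering / mesh extraction with the matching flags
python render.py -s /path/to/scene -m output/scene \
    --quiet --skip_train --depth_ratio 1.0 --num_cluster 1
```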
The article also mentions that the distortion loss is set to 100 for outdoor scenes, but the default is 0 because it might cause rendering blur on custom datasets. So, to replicate the results on the MipNeRF360 dataset in the original paper, do I need to set the distortion loss back to 100 as well?
Again, thanks for your fantastic work!
From my experiments, the distortion loss has little effect on the MipNeRF360 dataset, likely because MipNeRF360 scenes have fewer illumination changes. You can report the performance with the default parameters, with the distortion loss added, or even with all regularizations removed, depending on your experimental setting.
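Concretely, the three options mentioned above might look like this (a sketch, not an official recipe; `--lambda_normal` is assumed to be the flag for the normal-consistency regularization, and the paths are placeholders):

```shell
# 1. Default parameters (placeholder paths)
python train.py -s /path/to/mipnerf360/scene -m output/scene --quiet

# 2. Plus the distortion loss (100 for outdoor scenes, per the paper)
python train.py -s /path/to/mipnerf360/scene -m output/scene \
    --quiet --lambda_dist 100

# 3. All regularizations removed (flag names assumed)
python train.py -s /path/to/mipnerf360/scene -m output/scene \
    --quiet --lambda_dist 0 --lambda_normal 0
```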
Thank you very much for your help and timely reply!
Hi, I recently deployed 2DGS for large-scale scene reconstruction with almost no modifications (only borrowing VastGaussian's partitioned training strategy). The extracted mesh is excellent!
Awesome!
Hi,
Following the settings in your excellent article and the GOF article, we set --lambda_dist to 1000 on the DTU dataset (since we consider it an indoor scene), but our mesh extraction results differ from those in your article. Could you provide instructions for setting the correct training parameters?