Fictionarry / DNGaussian

[CVPR'24] DNGaussian: Optimizing Sparse-View 3D Gaussian Radiance Fields with Global-Local Depth Normalization
https://fictionarry.github.io/DNGaussian/

Experimental configuration of the "More Input Views" #36

xuyx55 commented 2 months ago

Hi! Thank you for your great work. I have a question about the "More Input Views" setting mentioned in the supplementary material. I found it hard to reach the performance reported in Table 11 using the train_llff.sh configuration with 6 or 9 input views on the LLFF dataset. Could you share the experimental configuration used for "More Input Views"?

Fictionarry commented 2 months ago

This seems to be the config for the 6- and 9-view settings, which I found in an old version of the project. The main differences are the number of iterations and error_tolerance. However, due to some adjustments made after the code was released, especially the updated rasterizer, I'm not sure it still fully matches the current code, but I expect no big problems. Hope this helps.

python train_llff.py  -s $dataset --model_path $workspace -r 8 --eval --n_sparse 6  --rand_pcd --iterations 12000 --lambda_dssim 0.2 \
            --densify_grad_threshold 0.0013 --prune_threshold 0.01 --densify_until_iter 9000 --percent_dense 0.01 \
            --position_lr_init 0.016 --position_lr_final 0.00016 --position_lr_max_steps 8500 --position_lr_start 500 \
            --split_opacity_thresh 0.1 --error_tolerance 0.01 \
            --scaling_lr 0.005 \
            --test_iterations 1000 2000 3000 4500 6000 9000 12000 \
            --shape_pena 0.002 --opa_pena 0.001

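For the 9-view run, a minimal sketch would change only --n_sparse (this is an inference, since the reply gives one command for both settings without spelling out the 9-view invocation):

# Hypothetical 9-view variant: identical flags to the 6-view command, only --n_sparse changed
python train_llff.py -s $dataset --model_path $workspace -r 8 --eval --n_sparse 9 --rand_pcd --iterations 12000 --lambda_dssim 0.2 \
            --densify_grad_threshold 0.0013 --prune_threshold 0.01 --densify_until_iter 9000 --percent_dense 0.01 \
            --position_lr_init 0.016 --position_lr_final 0.00016 --position_lr_max_steps 8500 --position_lr_start 500 \
            --split_opacity_thresh 0.1 --error_tolerance 0.01 \
            --scaling_lr 0.005 \
            --test_iterations 1000 2000 3000 4500 6000 9000 12000 \
            --shape_pena 0.002 --opa_pena 0.001
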
xuyx55 commented 2 months ago

Thanks for your reply! The configuration works for me.

xuyx55 commented 2 months ago

However, I ran into some new problems when reproducing the Blender dataset experiments from the paper. I find that the provided configuration "/scripts/run_blender.sh" does not reach the reported performance.

The reproduced performance is PSNR: 24.111, SSIM_sk: 0.882, LPIPS: 0.088, while the paper reports PSNR: 24.305, SSIM_sk: 0.886, LPIPS: 0.088. There is still a gap in PSNR.

Is there any way to improve the PSNR through the configuration, or is there an updated version of the configuration?

Fictionarry commented 2 months ago

Hi, I have just updated the script. Now it can achieve the reported metrics, and even slightly better ones. Some scenes like lego and chair are not always stable, so you may need to run it twice to get the ideal results.
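
Since lego and chair can vary between runs, a minimal sketch for repeating the run while keeping each run's log (this assumes scripts/run_blender.sh can simply be invoked twice as-is; output directories inside the script may still need to be separated per run):

# Hypothetical helper: run the Blender script twice, saving each run's log for comparison
for run in 1 2; do
    bash scripts/run_blender.sh 2>&1 | tee "blender_run_${run}.log"
done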