Closed Yef-Huang closed 2 months ago
I have the same questions too.
@Yef-Huang @chenll12345 Thanks for your interest.
@TQTQliu Thank you for the very detailed answer, everything is clear now! Is it necessary to introduce 3DGS in the first level, or is it there mainly to add a loss? Since the purpose of the first level is depth estimation, does introducing 3DGS actually benefit the depth estimation? If it brings no benefit, wouldn't it be better to remove the 3DGS network from the first level?
@chenll12345 Yes, introducing 3DGS in the first level to render low-resolution views is necessary: the loss between the low-resolution rendered views and the ground truth benefits depth estimation. In an ablation where we removed the GS rendering part of the first stage, both the final view-quality metrics and the depth accuracy dropped (we did not present this ablation in the paper). Also, since GS rendering in the first level is used only during training and not at test/inference time, it does not affect inference time.
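To make the supervision scheme above concrete, here is a minimal sketch of how a first-level (low-resolution) rendering loss can be combined with the final-level loss during training. All names (`training_loss`, `w_coarse`) are hypothetical and not from the released code; the point is only that the coarse term exists solely in the training objective, so dropping it at inference changes nothing.

```python
import numpy as np

def mse(pred, gt):
    # Mean squared error between a rendered view and its ground truth.
    return float(np.mean((pred - gt) ** 2))

def training_loss(coarse_render, gt_low, fine_render, gt_full, w_coarse=0.5):
    """Hypothetical combined objective: first-level GS renders a
    low-resolution view supervised against downsampled GT, while the
    final level is supervised at full resolution. At inference only the
    fine branch is evaluated, so the coarse term adds no inference cost."""
    return w_coarse * mse(coarse_render, gt_low) + mse(fine_render, gt_full)

# Toy shapes: a 4x4 low-res view and a 16x16 full-res view.
rng = np.random.default_rng(0)
gt_low = rng.random((4, 4, 3))
gt_full = rng.random((16, 16, 3))

# A coarse render that is uniformly off by 0.1, and a perfect fine render:
loss = training_loss(gt_low + 0.1, gt_low, gt_full, gt_full)
```

With a perfect fine render, the remaining loss is just `w_coarse` times the coarse MSE (here 0.5 * 0.01 = 0.005), which illustrates why removing the coarse term only changes the training signal, not the test-time pipeline.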
Thank you for your response. I sincerely hope that this excellent work will be open-sourced.
Thanks for your attention, the code has been released.
Great work! A few questions: (1) Is the number of sampling points in your NeRF module the same as the number of points in 3DGS, and are they in fact the same points? (2) Is the number of sampling points in the final level 2? (3) Is the first level used only for depth estimation, without introducing 3DGS? (4) How do you handle the density of the Gaussian points: is it predicted through an MLP or mapped using the PDF?