Hupangzi01 opened this issue 5 days ago

Hey, such an excellent work! I have a small doubt about PSNR: after the training phase, PSNR is calculated in both the render phase and the evaluate phase. In theory the two values should be the same, so why are they different?
They differ slightly. In the `training_report` function, PSNR is calculated directly in float32; but in the evaluation phase, PSNR is calculated from the stored images, which were quantized to uint8 beforehand. This precision difference can lead to a small variance in PSNR.
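A minimal sketch of the effect (not the repository's exact code): computing PSNR on the float32 render directly versus on the same render after a uint8 round-trip, which mimics saving the image to disk and reloading it. The function and variable names here are illustrative only.

```python
import torch

def psnr(img1, img2):
    # Standard PSNR for images with values in [0, 1]
    mse = torch.mean((img1 - img2) ** 2)
    return 10.0 * torch.log10(1.0 / mse)

# Hypothetical rendered and ground-truth images in float32, values in [0, 1]
render = torch.rand(3, 256, 256)
gt = (render + 0.01 * torch.randn_like(render)).clamp(0.0, 1.0)

# Path 1: PSNR computed directly on the float32 tensors (training_report)
psnr_float = psnr(render, gt)

# Path 2: PSNR computed after round-tripping the render through uint8,
# mimicking the stored images used in the evaluation phase
render_u8 = (render * 255.0).round().clamp(0, 255).to(torch.uint8)
render_restored = render_u8.float() / 255.0
psnr_quantized = psnr(render_restored, gt)

print(f"float32 PSNR: {psnr_float.item():.4f} dB")
print(f"uint8   PSNR: {psnr_quantized.item():.4f} dB")  # typically slightly different
```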
That explains it. Thanks for your enthusiastic reply! I have another question: since the opacity, rgb, scale, and quaternion of the neural Gaussians are predicted by MLPs, and the number of MLP parameters is very small, why don't you also predict the positions of the neural Gaussians with an MLP? Then there would be no need for offsets and scaling either.
Thanks for your suggestion! However, we experimentally found that an MLP is not good at predicting "positions" in our model; moreover, Gaussian "positions" are highly sensitive to deviations, since they significantly affect the rendering process. It is a worthwhile research direction, and we hope better approaches can be developed to address this problem.
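To make the two options in this exchange concrete, here is a rough sketch, assuming an anchor-plus-offset formulation like the one described in the question; the tensor sizes, the `position_mlp`, and all variable names are hypothetical and not taken from the repository:

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration only
n_anchors, k_offsets, feat_dim = 1000, 10, 32

# Anchor-relative formulation discussed above: each anchor carries learnable
# offsets, and the neural Gaussian position is anchor + offset * per-anchor
# scaling, so a position error stays bounded by the local scaling of the anchor.
anchor_xyz = torch.randn(n_anchors, 3)
offsets = nn.Parameter(torch.zeros(n_anchors, k_offsets, 3))
scaling = nn.Parameter(torch.ones(n_anchors, 3))

positions = anchor_xyz.unsqueeze(1) + offsets * scaling.unsqueeze(1)  # [N, k, 3]

# Alternative raised in the question: regress absolute positions with an MLP.
# Any prediction error then acts directly in world space, which matches the
# reply above that positions are highly sensitive to deviations.
position_mlp = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, k_offsets * 3))
anchor_feat = torch.randn(n_anchors, feat_dim)
positions_direct = position_mlp(anchor_feat).view(n_anchors, k_offsets, 3)  # [N, k, 3]
```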