hustvl / 4DGaussians

[CVPR 2024] 4D Gaussian Splatting for Real-Time Dynamic Scene Rendering
https://guanjunwu.github.io/4dgs/
Apache License 2.0

Training Parameters for Custom 360 MVS datasets #120

Closed: azzarelli closed this issue 7 months ago

azzarelli commented 7 months ago

Thank you to the authors for the paper!

I am currently trying to work with a challenging 360 MVS dataset and would like to know which parameters I should be tuning and what CLI inputs I should be using.

As there are no (or at least very few) 360 MVS test scenes in the 4DGS literature, there are no prior works indicating what might lead to better optimization. I would therefore like to know whether anyone has had success with parameter tuning, to see whether 4DGS can be tuned for these scenes.

My current configuration follows the Blender dataset setup, as I am working with a hemisphere-like MVS camera placement. There are only 10 static cameras, so an initial point cloud is not recoverable with existing methods; the point-cloud initialisation therefore also follows the method used when loading Blender scenes (see the sketch below).
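For clarity, what I mean by the Blender-style initialisation is just scattering random points inside a fixed cube. A rough sketch of that idea, with made-up names and extents rather than the repo's exact code:

```python
import numpy as np

def random_init_point_cloud(num_pts=2000, box_min=-1.3, box_max=1.3, seed=0):
    """Scatter random points and colours inside an axis-aligned cube.

    box_min / box_max control the extent of the initial bounding box;
    widening them is the first thing I plan to tune for the 360 MVS scene.
    """
    rng = np.random.default_rng(seed)
    xyz = rng.uniform(box_min, box_max, size=(num_pts, 3))  # point positions
    rgb = rng.uniform(0.0, 1.0, size=(num_pts, 3))          # random colours
    return xyz, rgb
```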

I acknowledge this is a very challenging scene and I am not expecting high PSNR/SSIM; I am only interested in seeing how good the results could be, hence my question about the relevant parameters.

guanjunwu commented 7 months ago

Hi, you can modify the code here to change the initial bounding box of the point cloud. As far as I know, you can try a longer coarse stage to train a better 3DGS and ensure the model converges (the moving parts may stay blurred), then start training the 4D Gaussians. If you want relative camera poses and a point cloud of the first frame, DUSt3R may help you.
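For the longer coarse stage, the change is just the coarse-stage iteration count in your scene config. A rough sketch of the kind of override I mean (the exact file path, parameter names, and values may differ in your setup, so treat these as illustrative):

```python
# arguments/<your_dataset>/<your_scene>.py -- illustrative values, not verbatim repo defaults
OptimizationParams = dict(
    coarse_iterations=10000,  # lengthen the static (coarse) 3DGS warm-up stage
    iterations=30000,         # total iterations, including the fine/deformation stage
)

# then train with something like:
#   python train.py -s <path/to/data> --expname "custom/scene" \
#       --configs arguments/<your_dataset>/<your_scene>.py
```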

azzarelli commented 7 months ago

Thank you for the reply. I have modified the data loader and already have ground truth poses.

I will try a longer coarse stage and close this issue.