I'm having a weird issue where results trained on GT camera poses are much worse than those trained on COLMAP-estimated ones. Basically, the renders from the GT poses are rougher/more sketchy than the ones from the COLMAP poses. The context here is a static scene: for all the dynamic methods mentioned below, I train on only a single frame.
From GT pose:
From colmap-estimated pose:
To clarify, I still feed the GT poses into COLMAP and follow here to get the point cloud, and this GT pose + point cloud combination works well (even better than the COLMAP-estimated ones) with:
NeRF/nerfstudio
Dynamic3DGaussians
It works equally badly with:
3DGS original implementation
4DGaussians
I suspect something is wrong in the camera conversion, because both the original 3DGS and your implementation use the same camera class: https://github.com/hustvl/4DGaussians/blob/master/scene/cameras.py#L17. But honestly my cameras are very simple pinhole cameras, and I can't think of anywhere that could go wrong. Curious to hear your thoughts on this! Thank you very much!
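In case it helps with debugging, here is a minimal sanity check I'd run for a convention mismatch (this is my own sketch, not code from either repo; it assumes a COLMAP-style world-to-camera extrinsic, x_cam = R @ x_world + t, and a plain pinhole model). Projecting a few known 3D points from the point cloud with the GT extrinsics and checking where they land should quickly expose a transposed rotation or a camera-to-world matrix being used where world-to-camera is expected:

```python
import numpy as np

def project_pinhole(points_w, R, t, fx, fy, cx, cy):
    """Project Nx3 world points to Nx2 pixel coordinates.

    Assumes world-to-camera convention: x_cam = R @ x_world + t,
    with the camera looking down +z (COLMAP-style).
    """
    p_cam = points_w @ R.T + t          # world -> camera frame
    z = p_cam[:, 2:3]                   # depths; should be > 0 for visible points
    uv = p_cam[:, :2] / z               # perspective divide
    uv[:, 0] = fx * uv[:, 0] + cx       # apply pinhole intrinsics
    uv[:, 1] = fy * uv[:, 1] + cy
    return uv

# Sanity case: identity pose, a point straight ahead must project
# to the principal point (cx, cy) regardless of depth.
pts = np.array([[0.0, 0.0, 2.0]])
uv = project_pinhole(pts, np.eye(3), np.zeros(3), 500.0, 500.0, 320.0, 240.0)
print(uv)  # [[320. 240.]]
```

If the projections with the GT poses land outside the image (or mirror across an axis) while the COLMAP-estimated poses project correctly, the problem is in the pose conversion rather than in the Gaussian training itself.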