zzz5y opened 1 month ago
In this kind of reconstruction you only get a mediocre result if you move the reconstruction camera the same way the car was driven. This isn't an algorithm failure, but a lack of useful data.
How to improve:
- add side cameras and record a longer video.
- drive back along the same route to capture a 'counter' video from the opposite direction.
This way you get more coverage of the scene, and the final result will be more coherent with better perspective (fewer floaters, clean flat surfaces, etc.).
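The reasoning above can be made concrete with a toy parallax computation (this snippet is an illustration, not code from the thread; all names and numbers are mine). For a point ahead on the road, moving the camera forward along the viewing direction produces almost no parallax, so its depth is poorly constrained and 3DGS tends to leave floaters there. A lateral baseline of the same length (side camera, or a counter-direction pass on the other side of the road) gives a much larger parallax angle:

```python
import numpy as np

def parallax_deg(cam_a, cam_b, point):
    """Angle subtended at `point` between the rays to the two camera centers."""
    ra = cam_a - point
    rb = cam_b - point
    cosang = np.dot(ra, rb) / (np.linalg.norm(ra) * np.linalg.norm(rb))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

point = np.array([0.0, 0.0, 50.0])  # scene point 50 m ahead along the road axis

# Forward-only motion: 10 m baseline along the viewing direction (z)
fwd = parallax_deg(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 10.0]), point)

# Same 10 m baseline, but lateral (e.g. a side camera or offset pass)
side = parallax_deg(np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0]), point)

print(f"forward baseline parallax: {fwd:.2f} deg")   # ~0: depth is unconstrained
print(f"lateral baseline parallax: {side:.2f} deg")  # ~11.3: well constrained
```

Points directly in the direction of travel sit on (or near) the motion axis, so forward-only footage gives them essentially zero parallax no matter how far you drive, which is why extra lateral viewpoints help so much.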
Thank you for your advice! It helps me a lot!
Hi,
I'm facing a similar issue with relatively low-resolution driving images: I get a sort of trail of blurry Gaussians that densify directly in front of my render camera. All the initial Gaussians generated from my initial point cloud that are in the FoV of my images get pruned in the process (weirdly enough, not the ones outside my camera FoV).
Has anyone encountered this kind of issue before?
Thanks in advance. Best,
Try a lower position learning rate (finer-grained textures) and a lower densification interval (more points) in the argument classes.
However, you will still need more camera/view data.
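As a minimal sketch of that suggestion, assuming the reference graphdeco-inria/gaussian-splatting `train.py`, which exposes `--position_lr_init` (default 0.00016) and `--densification_interval` (default 100) through its argument classes. The target values below are assumptions to experiment with, not recommendations from this thread:

```python
# Hypothetical tweaks: lower the position LR and densify more often.
position_lr_init = 0.00016 / 10      # 10x lower than the repo default (assumption)
densification_interval = 50          # half the default interval (assumption)

# Build the corresponding train.py invocation; the dataset path is a placeholder.
cmd = (
    "python train.py -s data/kitti360_seq "
    f"--position_lr_init {position_lr_init:g} "
    f"--densification_interval {densification_interval}"
)
print(cmd)
```

A lower position learning rate keeps Gaussians from drifting toward the camera during optimization, while a shorter densification interval adds points more aggressively; both are worth sweeping rather than fixing at these exact values.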
Hi! Thank you for your amazing work! Hi community, I am working on driving-scene reconstructions. My 3DGS rebuild gives a bad result. Are there any ideas to improve my work? I would very much appreciate your kind help! I train on KITTI-360 for 60k or 100k iterations, and I use COLMAP to generate the points3d.ply instead of the dataset's ply. Here are my bad results. Hoping to receive your ideas! Thank you.