Open · huiqiang-sun opened this issue 1 week ago
Thanks for raising this! To simplify our pipeline, we skipped camera pose estimation on dynamic real-world videos (like the panda scene) and set the camera extrinsics of every frame to the identity matrix, which is why they are all identical in the panda data we provide. Our method compensates for this by learning to move the background Gaussians in a way that mimics the camera motion.
The drawback of this approach is that there is no notion of a "stationary" camera. A potential workaround is to align a virtual camera trajectory with the average motion of the background Gaussians, roughly as sketched below.
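This is not part of our codebase, just a minimal sketch of that idea. It assumes you can export the per-frame centers of the background Gaussians as a `(T, N, 3)` array (called `bg_centers` below); it fits one rigid transform per frame with the Kabsch algorithm and uses its inverse as a pseudo camera pose:

```python
import numpy as np

def rigid_align(src, dst):
    # Kabsch algorithm: find R, t minimizing ||R @ src_i + t - dst_i||^2.
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # handle reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def pseudo_camera_poses(bg_centers):
    # bg_centers: (T, N, 3) centers of the background Gaussians at each frame.
    # Returns per-frame 4x4 viewing matrices (assuming a world-to-camera
    # convention) that cancel the average rigid motion of the background.
    poses = []
    for t in range(bg_centers.shape[0]):
        R, trans = rigid_align(bg_centers[0], bg_centers[t])  # frame 0 -> frame t
        M = np.eye(4)
        M[:3, :3], M[:3, 3] = R, trans
        poses.append(np.linalg.inv(M))  # undo the background motion when rendering
    return np.stack(poses)
```

Rendering frame t with `poses[t]` instead of the identity should roughly cancel the learned background drift, i.e. approximate a camera that is stationary with respect to the background while the foreground still changes over time.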
That said, recognizing the need for a global coordinate frame and pre-computed camera poses, I will release code next week that integrates COLMAP for this purpose. This will make rendering from a static camera pose much simpler.
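Until then, a standard COLMAP sparse reconstruction over the extracted video frames looks roughly like the sketch below (this is not the upcoming integration; dynamic scenes like the panda may need motion masks, and some frames may fail to register):

```python
import os
import subprocess

def run_colmap(image_dir, workspace):
    # Standard COLMAP sparse reconstruction on the extracted video frames.
    # The recovered poses typically end up in <workspace>/sparse/0
    # (cameras.bin, images.bin, points3D.bin).
    db = os.path.join(workspace, "database.db")
    sparse = os.path.join(workspace, "sparse")
    os.makedirs(sparse, exist_ok=True)
    subprocess.run(["colmap", "feature_extractor",
                    "--database_path", db, "--image_path", image_dir,
                    "--ImageReader.single_camera", "1"], check=True)
    subprocess.run(["colmap", "exhaustive_matcher",
                    "--database_path", db], check=True)
    subprocess.run(["colmap", "mapper",
                    "--database_path", db, "--image_path", image_dir,
                    "--output_path", sparse], check=True)
```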
In the meantime, let me know if you have any other questions!
Thanks for your reply! I am looking forward to the code with COLMAP.
Hi, thanks for the great work!
I want to test the model on the panda scene from YouTube-VOS, where the input video's camera is not stationary. Specifically, I want to render a video with a fixed camera but changing time, but I don't know how to obtain stationary-camera results. Could you provide the code for rendering a stationary-camera video from scenes captured with a moving camera?
Also, we found that the camera extrinsics of every frame are identical in the panda data you provided, even though the camera in the input video is clearly moving. What is the reason for this?
Looking forward to your reply!