yunjinli opened this issue 2 months ago
Dear authors,
First, thank you for the amazing work you have done :) I'm also working on a project with Dynamic Gaussians.
I'm also using the Google Immersive dataset; however, one issue I faced when initializing the sparse point cloud from COLMAP is that the result is extremely sparse (around 100 points) when using the first frame from all cameras except camera_0001. I'm wondering if you also had this issue when simply using point_triangulator from COLMAP (for me, only 02_Flames works; the rest of the sequences fail to generate a proper point cloud).
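For reference, this is roughly the COLMAP sequence I am assuming (the database name, image path, and `sparse/known` layout are placeholders for my setup; `sparse/known` holds the known camera poses with an empty points3D.txt):

```python
import subprocess

def run(*args):
    subprocess.run(args, check=True)

# Features and matches go into a fresh database; point_triangulator then
# triangulates against the fixed, known camera poses. sparse/known must
# contain cameras.txt, images.txt (poses filled in, per-image point lists
# empty) and an empty points3D.txt.
run("colmap", "feature_extractor",
    "--database_path", "colmap.db", "--image_path", "frames")
run("colmap", "exhaustive_matcher", "--database_path", "colmap.db")
run("colmap", "point_triangulator",
    "--database_path", "colmap.db", "--image_path", "frames",
    "--input_path", "sparse/known", "--output_path", "sparse/0")
```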
My other question is about training on all 300 frames of the Google Immersive dataset. Have you run such an experiment? In the paper I only saw 50-frame evaluations. In my project, training on 50 frames works fine, but when I extend training to 300 frames, the reconstruction of the motion starts to deteriorate. I'm not sure whether your approach has a similar issue on this dataset.
Thank you, and I look forward to your reply.
Best, Jim
1) For point cloud extraction, we first undistort the videos; otherwise the point cloud will be poor.
2) In the paper, we train multiple 50-frame segments for the Immersive dataset (0-50, 50-100, 100-150, ...), similar to HyperReel, except that some videos are shorter than 300 frames. We did not train a single model on 300 frames for the Immersive dataset. I have also heard feedback that the current codebase does not perform well at 300 frames or on custom datasets; I infer this is caused by the temporal opacity. Possible ways to improve the motion are per-frame training as in Dynamic3DGaussians, or a filter to remove bad points as in 4DGS.
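To make point 1 concrete, here is a minimal sketch of a per-video undistortion step, assuming OpenCV's fisheye model; the intrinsics `K` and distortion coefficients `D` would come from the dataset's models.json, and mapping that file into this parameterization is left out:

```python
import cv2
import numpy as np

def undistort_video(in_path, out_path, K, D):
    """Undistort every frame of a video before COLMAP triangulation.

    Assumes OpenCV's fisheye model: K is a 3x3 intrinsics matrix and
    D holds 4 fisheye distortion coefficients (map your calibration
    into this parameterization first).
    """
    cap = cv2.VideoCapture(in_path)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.CAP_PROP_FPS)

    # New camera matrix that keeps the full field of view (balance=0 crops
    # to valid pixels), plus the fixed remap tables computed once per video.
    newK = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=0.0)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), newK, (w, h), cv2.CV_16SC2)

    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (w, h))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(cv2.remap(frame, map1, map2,
                               interpolation=cv2.INTER_LINEAR))
    cap.release()
    writer.release()
```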
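And a small sketch of the 50-frame segmentation from point 2; the print stands in for whatever training entry point you use:

```python
def segment_ranges(total_frames, seg_len=50):
    """Yield (start, end) frame windows: (0, 50), (50, 100), ...
    The last window is shorter when total_frames is not a multiple of seg_len."""
    for start in range(0, total_frames, seg_len):
        yield start, min(start + seg_len, total_frames)

# One independent model per segment, as in the paper's Immersive protocol.
for start, end in segment_ranges(total_frames=300):
    # Replace this print with a call into your actual training script.
    print(f"train frames [{start}, {end}) -> model_{start:04d}_{end:04d}")
```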