Hongyuan-Tao opened 7 months ago
Hi, this is normal on self-captured datasets: the resolution is high and the scene is unbounded, so a large number of Gaussians is needed to model both the foreground and the background. When making demos, I personally prefer to train on masked images. That way only the foreground is reconstructed, and training is as fast as on DNeRF (see the editing demos without background in the teaser image).
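To illustrate the masked-image approach, here is a minimal preprocessing sketch (not SC-GS's actual code; the function name and array layout are assumptions): background pixels are zeroed out and the mask is attached as an alpha channel, so the trainer only ever sees the foreground.

```python
import numpy as np

def apply_foreground_mask(rgb: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out background pixels and attach the mask as an alpha channel.

    rgb:  (H, W, 3) uint8 image
    mask: (H, W) foreground mask (bool or 0-255)
    returns an (H, W, 4) uint8 RGBA image
    """
    fg = mask.astype(bool)
    return np.dstack([
        (rgb * fg[..., None]).astype(np.uint8),  # background -> black
        fg.astype(np.uint8) * 255,               # alpha taken from the mask
    ])
```

Running this over every frame (and its per-frame mask) before training is one way to get the "foreground-only" setup described above.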
OK, thank you for your quick reply. Best wishes for your research!
Hello, thank you for your amazing work! I have some doubts while trying to train my own COLMAP dataset with SCGS. Here's the thing: I want to model the whole scene (both dynamic and static parts) without separating dynamic and static content via dynamic masks, so I used the following command:
```
--source_path /data/colmap/hand --model_path outputs/colmap/hand --deform_type node --node_num 512 --hyper_dim 8 --eval --gt_alpha_mask_as_scene_mask --local_frame --resolution 2 --W 800 --H 800
```
The first 2,000 training iterations were fast, but after 7,500 iterations training became slower and slower. Here's the speed at 21,000 iterations: Even if I don't complete all 80,000 iterations, it still needs about an hour and a half to obtain a higher-quality scene. I'd like to know whether this is normal or whether something is wrong in my setup. Thank you for taking the time to answer my questions!
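One way to quantify the slowdown described above is to log a moving average of iterations/second during training. This is a generic sketch, not part of SC-GS; the class name and window size are my own choices:

```python
import time
from collections import deque

class SpeedMeter:
    """Moving average of iterations/second over the last `window` ticks."""

    def __init__(self, window: int = 100):
        # deque with maxlen keeps only the most recent timestamps
        self.times = deque(maxlen=window)

    def tick(self) -> float:
        """Record one iteration; return the current it/s estimate."""
        self.times.append(time.perf_counter())
        if len(self.times) < 2:
            return 0.0  # not enough samples yet
        span = self.times[-1] - self.times[0]
        return (len(self.times) - 1) / span if span > 0 else 0.0
```

Calling `tick()` once per training iteration and printing the result every few thousand steps would show whether the it/s drop correlates with densification (i.e. the growing number of Gaussians), which is the usual cause of this behavior.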