yihua7 / SC-GS

[CVPR 2024] Code for SC-GS: Sparse-Controlled Gaussian Splatting for Editable Dynamic Scenes
https://yihua7.github.io/SC-GS-web/
MIT License

Mask #48

Closed: zhenyuan1234 closed this issue 4 months ago

zhenyuan1234 commented 4 months ago

Hi, how do you separate the dynamic foreground from the static background for the D-NeRF dataset? I look forward to your reply, many thanks!

yihua7 commented 4 months ago

Hi! If you mean how to distinguish dynamic from static Gaussians, you should use the option `--gs_with_motion_mask`, which adds a binary dynamic attribute to each Gaussian modeling its dynamics. An extra sparsity loss can be added to encourage static parts to have near-zero mask values.

zhenyuan1234 commented 4 months ago

Hi, thanks for your kind reply! I mean: how do we distinguish which part is dynamic and which part is static? What is the rule for telling the two apart? Similarly, why does the extra sparsity loss act only on the static part? Thanks!

yihua7 commented 4 months ago

The mask is learned during optimization. The predicted motion of each Gaussian is multiplied by the mask, so static parts tend to learn small mask values and dynamic parts large ones. You can refer to the code here. The sparsity loss is not implemented in the current code. I mentioned it because a static part may have a large mask value but receive a zero-motion prediction; the masked motion is then still zero, so the mask alone cannot distinguish dynamic from static. That's why I suggested introducing an L1 sparsity loss on the masks of all Gaussians to avoid this degenerate case.
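To make the idea above concrete, here is a minimal sketch of the masked-motion scheme and the suggested L1 sparsity loss. All variable names (`mask_logits`, `predicted_motion`, etc.) are illustrative, not the repo's actual identifiers:

```python
import torch

num_gaussians = 1000

# Learnable per-Gaussian mask parameter (optimized jointly with the scene).
mask_logits = torch.zeros(num_gaussians, requires_grad=True)

# Stand-in for the per-Gaussian motion predicted by the deformation network.
predicted_motion = torch.randn(num_gaussians, 3)

# Squash to (0, 1): values near 0 mean static, near 1 mean dynamic.
mask = torch.sigmoid(mask_logits)

# Static Gaussians (small mask) end up with near-zero effective motion.
masked_motion = mask.unsqueeze(-1) * predicted_motion

# Suggested L1 sparsity loss: pushes every mask toward zero, so only
# Gaussians that genuinely need motion to fit the data keep a large mask.
sparsity_loss = mask.abs().mean()
```

The key point is that without the sparsity term, a Gaussian can keep a large mask while the network predicts zero motion for it, which leaves the mask uninformative; the L1 penalty breaks that ambiguity by making a large mask costly unless it is actually used.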

If the option `--gt_alpha_mask_as_dynamic_mask` is used, the dynamic mask will also be supervised by the given GT mask.