hugoycj / 2.5d-gaussian-splatting

an unofficial 2DGS implementation based on GauStudio
https://github.com/GAP-LAB-CUHK-SZ/gaustudio

Questions about zeroing the third scale components #6

Closed: wbstx closed this 2 months ago

wbstx commented 2 months ago

Thanks for your work! I have some questions about how the third component of the scale is kept at zero through optimization. I checked the code and found that you add a line for scale initialization in gaussian_model.py L135: `scales[:, 2] = inverse_sigmoid(torch.tensor(0))`

I have two questions:

  1. Since the activation for the scale is exp, why is inverse_sigmoid used here, even though log(0) and inverse_sigmoid(0) evaluate to the same value? (See the sketch at the end of this comment.)
  2. How is the third component kept at the initialized 0 during optimization? I could not find where the code stops the gradient for the third component of the scale.

I would really appreciate your reply. Thanks!
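For reference, here is a minimal sketch of question 1. It is not taken from the repository and assumes inverse_sigmoid is the usual log(x / (1 - x)) helper from the 3DGS utilities; both expressions evaluate to -inf, so the initialization ends up numerically identical either way:

```python
import torch

def inverse_sigmoid(x):
    # Assumed definition, as in the common 3DGS utilities: logit(x) = log(x / (1 - x))
    return torch.log(x / (1 - x))

zero = torch.tensor(0.0)
print(torch.log(zero))        # tensor(-inf)
print(inverse_sigmoid(zero))  # tensor(-inf), the same value despite the exp activation
```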

hugoycj commented 2 months ago

This line is actually a bug that needs to be addressed. An interesting point to note is that it causes the third scaling factor to become -inf, which leaves its gradient undefined (None). Consequently, the third value remains unchanged during optimization, and the scaling after activation is zero.
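A toy sketch of this behavior (my own illustration, not the GauStudio code): with the third raw component at -inf, the activated scale exp(-inf) is 0. In this simplified setup the gradient reaching the raw component is zeroed by the exp derivative rather than left as None, but the visible outcome matches the description above: the optimizer never moves the component away from -inf.

```python
import torch

# Raw (pre-activation) scales; the third component mimics the init line, i.e. log(0) = -inf
raw_scales = torch.nn.Parameter(torch.tensor([0.1, 0.2, float("-inf")]))
optimizer = torch.optim.Adam([raw_scales], lr=1e-2)

scales = torch.exp(raw_scales)  # activation: third entry is exp(-inf) = 0
loss = (scales ** 2).sum()      # any toy loss that depends on the activated scales
loss.backward()

print(scales)           # tensor([1.1052, 1.2214, 0.0000], grad_fn=...)
print(raw_scales.grad)  # third gradient is 0: d(exp(s))/ds = exp(s) = 0 at s = -inf

optimizer.step()
print(raw_scales)       # third component is still -inf, so its activated scale stays 0
```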

wbstx commented 2 months ago

Thanks for your quick reply! I just realized that the third component also stays at 0 because of the exp activation: exp(-inf) evaluates to zero even though log(0) is undefined. Still, it is interesting that things work out through such an unexpected path.