wongshek opened this issue 3 months ago
Check this thread out, it's an implementation consideration. The result is compliant with the original paper.
https://github.com/graphdeco-inria/gaussian-splatting/issues/762
Can you elaborate? I have the same question, and I didn't quite get how $\Sigma$ being correct relates to this formula being correct.
I got it: OpenGL (and GLM) is column-major, so every matrix in the code is effectively the transpose of the corresponding matrix in the paper.
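The column-major point can be seen with a tiny numpy sketch (numpy stands in here for the two conventions; the variable names are just for illustration): the same nine floats, interpreted column-major as GLM/OpenGL does, form the transpose of the row-major matrix PyTorch and the paper would read.

```python
import numpy as np

vals = np.arange(9.0)  # the same nine floats handed to either library

row_major = vals.reshape(3, 3)             # PyTorch / paper convention
col_major = vals.reshape(3, 3, order="F")  # GLM / OpenGL convention

# Same memory, transposed interpretation:
assert np.array_equal(col_major, row_major.T)
```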
Just saw this, but yeah. For the implementation, it's convenient to use one set of matrix values for both GLM and PyTorch: GLM (column-major) effectively reads them as $R^\top$ while PyTorch (row-major) reads them as $R$. So the 3DGS implementation stores $R^\top$ (produced by the quaternion-to-matrix conversion) but writes it as $R$ throughout the program.
But as mentioned, $S^\top = S$, and the resulting covariance $\Sigma = R S S^\top R^\top$ is symmetric, so $\Sigma^\top = \Sigma$ and the transposed formula gives the same answer. Writing $R^\top$ as $R$ therefore makes GLM and PyTorch both work correctly at the same time. It can be seen as a clever implementation trick rather than a bug.
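A minimal numpy sketch of the whole argument (random stand-ins for $J$, $W$, $R$, $S$; the kernel shape `transpose(T) * transpose(Vrk) * T` with `T = W * J` mirrors the 3DGS CUDA forward pass, everything else is illustrative): because $\Sigma$ is symmetric, the paper's $J W \Sigma W^\top J^\top$ and the transposed product the code effectively computes agree.

```python
import numpy as np

rng = np.random.default_rng(0)

# Paper-convention (row-major) matrices, stand-ins for the real ones:
J = rng.standard_normal((3, 3))                    # projection Jacobian
W = rng.standard_normal((3, 3))                    # world-to-camera rotation
R = np.linalg.qr(rng.standard_normal((3, 3)))[0]   # rotation (as from a quaternion)
S = np.diag(rng.random(3))                         # diagonal scale, so S^T = S

Sigma = R @ S @ S.T @ R.T          # 3D covariance; symmetric by construction
assert np.allclose(Sigma, Sigma.T)

# Paper formula: Sigma' = J W Sigma W^T J^T
paper = J @ W @ Sigma @ W.T @ J.T

# GLM/CUDA view: column-major storage means each matrix built element-wise
# holds the transpose of its paper counterpart.
Jc, Wc, Vrk = J.T, W.T, Sigma.T    # what the code actually holds
T = Wc @ Jc                        # T = W * J in code, = (J W)^T on paper
code = T.T @ Vrk @ T               # = J W Sigma^T W^T J^T

# Symmetry of Sigma makes the two routes identical:
assert np.allclose(paper, code)
```

So the "wrong-looking" formula is the column-major spelling of the paper's formula, and the symmetry of $\Sigma$ is exactly what makes the two coincide.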
I want to know why the formula in this line is $J^\top W^\top \Sigma^\top W J$ and not $J W \Sigma W^\top J^\top$. Both the paper and the subsequent gradient calculations seem to use $J W \Sigma W^\top J^\top$.