Anttwo / SuGaR

[CVPR 2024] Official PyTorch implementation of SuGaR: Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering
https://anttwo.github.io/sugar/

Can we use this type of regularization to flatten the Gaussian point ball? #3

Open yuedajiong opened 1 year ago

yuedajiong commented 1 year ago

```python
import torch

x = torch.tensor([1.0], requires_grad=True)  # internal auxiliary parameter for X
y = torch.tensor([1.1], requires_grad=True)  # internal auxiliary parameter for Y
optimizer = torch.optim.SGD([x, y], lr=1.0)

for i in range(100):
    X = torch.sigmoid(x)  # component x of the scale
    Y = torch.sigmoid(y)  # component y of the scale
    reg1 = torch.nn.functional.mse_loss(X + Y, torch.tensor([1.0]))  # enforce x + y = 1
    reg2 = 1. / (X * X + Y * Y)  # minimized when one component >> the other(s)
    mse0 = 1.0  # stand-in for the main GS optimization loss (dummy)
    loss = reg1 * 10. + reg2 + mse0
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print("i=%02d X=%.2f Y=%.2f reg1=%.4f reg2=%.4f loss=%.4f"
          % (i, X.item(), Y.item(), reg1.item(), reg2.item(), loss.item()))
```

Anttwo commented 1 year ago

Hello yuedajiong,

I'm not sure I understand your question; could you give me more details about what you want to know?

Best

yuedajiong commented 1 year ago

Hi Anttwo, I just want to find a simple regularization to flatten every single Gaussian point. The regularization does not act on the covariance, but on the x-y-z components of the scale. I want to optimize the ratio among the scale's xyz components so that it is close to {small : bigger : bigger}.

The pseudocode above is meant to show that, for two splits with the same sum:

5 + 5 = 10, and 1/(5×5 + 5×5) = 1/50

1 + 9 = 10, and 1/(1×1 + 9×9) = 1/82

Since 1/82 < 1/50, the reg2 term makes the 1:9 split better than the 5:5 split.

Which component ends up as the 1 and which as the 9 is decided by the main GS optimization.
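
To double-check the intuition, here is a quick stand-alone check of reg2 for the two splits quoted above (plain Python, nothing assumed beyond the comment itself):

```python
# Compare reg2 = 1 / (X^2 + Y^2) for two splits with the same sum X + Y = 10:
for X, Y in [(5.0, 5.0), (1.0, 9.0)]:
    reg2 = 1.0 / (X * X + Y * Y)
    print(f"X={X:.0f} Y={Y:.0f}  X+Y={X+Y:.0f}  reg2={reg2:.4f}")
# 5:5 -> reg2 = 1/50 = 0.0200
# 1:9 -> reg2 = 1/82 = 0.0122  (smaller, so the unbalanced split wins)
```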

yuedajiong commented 11 months ago

Another related solution, FYI (it uses normals):

Differentiable Surface Splatting for Point-based Geometry Processing: https://studios.disneyresearch.com/wp-content/uploads/2019/10/Differentiable-Surface-Splatting-for-Point-based-Geometry-Processing.pdf

Anttwo commented 11 months ago

Hello yuedajiong,

So you mean that you want to flatten the Gaussians by enforcing one of the three scaling factors $(s_0, s_1, s_2)$ to be much smaller than the other two, right?

We actually tried some loss terms that explicitly enforce the smallest scaling factor to be close to 0 and flatten the Gaussians (for example, $\cal{R} = \min_{i=0,1,2} |s_i|$), but we found that such an explicit flattening loss can be too destructive for efficient Gaussian optimization and encourages falling into bad local minima.
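
For reference, a minimal sketch of such an explicit flattening term (hypothetical code, not from the SuGaR codebase; `scaling` is assumed to be the $(N, 3)$ tensor of per-Gaussian scaling factors after activation):

```python
import torch

def explicit_flatten_loss(scaling: torch.Tensor) -> torch.Tensor:
    # R = min_{i=0,1,2} |s_i|, averaged over all Gaussians.
    # Driving this toward 0 collapses each Gaussian along its thinnest axis,
    # which is exactly the kind of explicit flattening described above.
    return scaling.abs().min(dim=-1).values.mean()
```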

Actually, we found that our regularization terms (either using density or SDF, as explained in the paper) naturally enforce Gaussians to flatten, in a non-destructive way.

But still, there may be a better regularization term to craft!

cdcseacave commented 11 months ago

Since each Gaussian splat has a full 3D rotation, isn't it enough for successful training to always use only two scales and hard-code the third to 0 or a very small value? Did you try this approach, and do you know how it compares to your implementation?
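
For illustration, a minimal sketch of this 2-DoF parameterization (hypothetical names; assuming the usual exp activation on log-scales, with the learned 3D rotation orienting the flat disk):

```python
import torch
import torch.nn as nn

class TwoScaleGaussians(nn.Module):
    """Sketch: learn only two scaling factors per Gaussian and hard-code
    the third to a small constant, as suggested above."""
    def __init__(self, num_gaussians: int, thin_scale: float = 1e-6):
        super().__init__()
        self.log_scale_2d = nn.Parameter(torch.zeros(num_gaussians, 2))
        self.thin_scale = thin_scale  # fixed near-zero thickness along the third axis

    def scaling(self) -> torch.Tensor:
        s2 = torch.exp(self.log_scale_2d)                   # (N, 2) learned scales
        thin = torch.full_like(s2[:, :1], self.thin_scale)  # (N, 1) fixed scale
        return torch.cat([s2, thin], dim=-1)                # full (N, 3) scaling
```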

yuedajiong commented 11 months ago

Thanks @Anttwo

yuedajiong commented 11 months ago

@Anttwo

NeuSG: Neural Implicit Surface Reconstruction with 3D Gaussian Splatting Guidance

And my understanding: the Gaussian points cannot be very large; they should cover the surface like fish scales.

yuedajiong commented 11 months ago

@Anttwo

NeuSG used an $\|\cdot\|_1$ penalty (I am not sure this is proper, because I tried something similar; of course, they jointly used an SDF).

Me: I tried another flattening regularization, similar to the one I mentioned above (still falling short of expectations).

Maybe you are right: most simple/direct regularizations are too destructive.