uc-vision / taichi-splatting

Apache License 2.0

How to get viewspace point grad and radii as the original 3dgs training does? #4

Closed fishfishson closed 9 months ago

fishfishson commented 9 months ago

Hi author,

Thanks for sharing your great repo! I want to use it to train on my custom data for a speedup. However, I cannot find viewspace_point_grad and radii in your implementation, which are used to update the densification state in the original 3DGS training. Could you please tell me how to get these variables?

oliver-batchelor commented 9 months ago

Hi there! Thanks for taking a look.

Previously these were only available if you used the lower-level parts rather than render_gaussians. I hope to turn this more into a toolkit where someone can use the lower-level parts to assemble their own render function. Nonetheless, I added a couple of things to expose them through the top-level render_gaussians. These are only in the latest git commit.

The viewspace gradient can be found by taking the gradient on the 2d gaussians. For example:

rendering = render_gaussians(....)
rendering.gaussians2d.retain_grad()  # retain_grad takes no arguments; call before backward

loss = ....
loss.backward()

# per-gaussian norm of the 2D (x, y) position gradient
viewspace_grad = torch.norm(rendering.gaussians2d.grad[:, :2], dim=-1)

I added a compute_radii argument, so render_gaussians(..., compute_radii=True) will now give you a `.radii` attribute on the result.

Both are only computed for the gaussians in the view; rendering.points_in_view holds their indexes into the full set of 3D gaussians.

Let me know if that makes sense and how you get on.
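To make the retain_grad() pattern above concrete, here is a runnable toy version. A plain tensor stands in for rendering.gaussians2d, since the real value comes from the renderer:

```python
import torch

# Toy illustration of the retain_grad() pattern: a plain tensor stands in
# for rendering.gaussians2d (which would come from the renderer).
gaussians2d = torch.randn(5, 6, requires_grad=True)
proj = gaussians2d * 2.0   # stands in for the rest of the forward pass
proj.retain_grad()         # keep gradients on this non-leaf tensor
loss = proj.square().sum()
loss.backward()

# per-gaussian norm of the 2D (x, y) position gradient
viewspace_grad = torch.norm(proj.grad[:, :2], dim=-1)
```

Without retain_grad(), PyTorch frees gradients on non-leaf tensors during backward, so proj.grad would be None.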

fishfishson commented 9 months ago

Hi author,

I followed your instructions, and here's my densification implementation based on radii and gaussians_2d:

points_in_view = rendering.points_in_view
radii = rendering.radii
gaussians_2d = rendering.gaussians_2d
assert gaussians_2d.shape[1] == 6, f"Expected 2D gaussians, got {gaussians_2d.shape}"
assert gaussians_2d.grad is not None, "Expected gradients on gaussians_2d, run backward first with gaussians_2d.retain_grad()"

# track the largest screen-space radius seen for each visible gaussian
gs.max_radii2D[points_in_view] = torch.max(gs.max_radii2D[points_in_view], radii)
# accumulate the viewspace (x, y) gradient norm and a visibility count
gs.xyz_gradient_accum[points_in_view] += torch.norm(gaussians_2d.grad[..., :2], dim=-1, keepdim=True)
gs.denom[points_in_view] += 1

However, on my data, I find the magnitude of viewspace_grad in your repo is about two orders of magnitude smaller than that of the original 3DGS rasterizer. So densification doesn't work, since xyz_gradient_accum never exceeds densify_grad_threshold and gets reset at each densification step. Do you know why the viewspace_grad is so small?

oliver-batchelor commented 9 months ago

I'll take a look. There could be some error, or some other reason, as I haven't seen how their viewspace gradient is actually calculated in the original.

I originally thought the reference implementation used NDC coordinates, but I'm not sure about that. My implementation follows the taichi_3d_gaussians_splatting code it derives from.


oliver-batchelor commented 9 months ago

I have since done some tests with gradient checking (torch.autograd.gradcheck) using double precision on the rasterizer. There seem to be a few edge cases (maybe just numeric issues), but I don't see anything that would lead to a 100x difference from the correct magnitude!

So I'm at a loss as to what might be different.
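The gradcheck procedure mentioned above can be sketched like this. `toy_project` here is a hypothetical stand-in for the rasterizer's projection step (a simple perspective divide), not the actual taichi-splatting kernel:

```python
import torch

torch.manual_seed(0)

# Hypothetical stand-in for the rasterizer's projection: perspective divide.
def toy_project(points):
    return points[:, :2] / points[:, 2:3]

# Double precision inputs, as finite differences are too noisy in float32.
points = torch.randn(8, 3, dtype=torch.float64)
points[:, 2] = points[:, 2].abs() + 1.0  # keep depths well away from zero
points.requires_grad_(True)

# Compares analytic gradients against finite differences; raises on mismatch.
ok = torch.autograd.gradcheck(toy_project, (points,), eps=1e-6, atol=1e-4)
```

Edge cases near discontinuities (e.g. points crossing the image plane) will fail finite-difference checks even when the analytic gradient is correct, which may explain the few failures noted above.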


fishfishson commented 9 months ago

Thanks for your kind help. I compared your implementation with nerfstudio (gsplat) and found this in splatfacto.py:

avg_grad_norm = (self.xys_grad_norm / self.vis_counts) * 0.5 * max(self.last_size[0], self.last_size[1])

where last_size is the rendered image size. So, following gsplat, I multiplied the grad norm by 0.5x the image size, and the grad norm now seems consistent with the original 3DGS.
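The scaling described above can be sketched as follows. All tensor values here are made up for illustration; the variable names mirror the gsplat snippet:

```python
import torch

# Sketch of the gsplat-style scaling: average the accumulated viewspace
# gradient norm over visibility count, then multiply by 0.5 * max image
# dimension so the magnitudes match the original 3DGS rasterizer.
H, W = 1080, 1920
xys_grad_norm = torch.tensor([1e-7, 3e-4, 2e-4])  # accumulated grad norms (made up)
vis_counts = torch.tensor([2.0, 1.0, 4.0])        # times each point was visible

avg_grad_norm = (xys_grad_norm / vis_counts) * 0.5 * max(H, W)

# With the scaling applied, an absolute threshold (the 3DGS default) is usable.
densify_grad_threshold = 0.0002
split_mask = avg_grad_norm > densify_grad_threshold
```

The factor 0.5 * max(H, W) converts a gradient expressed per unit of normalized screen extent into one expressed per pixel, which is the scale the original 3DGS threshold was tuned for.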

oliver-batchelor commented 9 months ago

Thanks! Much appreciated and useful to know.

It was useful to write the test in any case to be more sure.

I figured there had to be something like that, but didn't see it anywhere. I've been experimenting with different kinds of split control that don't use an absolute threshold, because it is very finicky, for just this reason!