```python
# post-process offsets to get centers for gaussians
offsets = offsets * scaling_repeat[:,:3]
xyz = repeat_anchor + offsets
```
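As a minimal sketch of what this step does (shapes, tensor names, and the per-anchor offset count `k` are assumptions for illustration, not the exact code), the first three scaling dimensions act as a per-anchor step size that bounds how far each neural Gaussian's center can move from its anchor:

```python
import torch

# hypothetical setup: 2 anchors, k = 3 learned offsets per anchor, 6 scaling dims
k = 3
anchors = torch.tensor([[0., 0., 0.], [10., 10., 10.]])      # (N, 3)
grid_scaling = torch.rand(2, 6) + 0.5                        # (N, 6)
offsets = torch.randn(2, k, 3)                               # (N, k, 3) raw offsets from the MLP

# repeat per-offset so every neural gaussian carries its anchor's scaling
repeat_anchor = anchors.repeat_interleave(k, dim=0)          # (N*k, 3)
scaling_repeat = grid_scaling.repeat_interleave(k, dim=0)    # (N*k, 6)
offsets = offsets.reshape(-1, 3)                             # (N*k, 3)

# [:, :3] scales the raw offsets, i.e. sets the step size per anchor
scaled_offsets = offsets * scaling_repeat[:, :3]
xyz = repeat_anchor + scaled_offsets                         # centers of the neural gaussians
print(xyz.shape)  # torch.Size([6, 3])
```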
What does `scaling_repeat[:,3:]` stand for? I have found that it comes from `grid_scaling`, which is initialized by

```python
scales = torch.log(torch.sqrt(dist2))[...,None].repeat(1, 6)
```

Could you explain the role of each dimension in `scaling_repeat`, especially in the context of the slicing `[:,3:]` and `[:,:3]`?
The `[:,:3]` part controls the step size of the offsets, i.e. how far each neural Gaussian's center can deviate from its anchor. The `[:,3:]` part serves as the base scale for the neural Gaussian's shape, which means the cov MLP learns a residual scale on top of it, as in this code section:

```python
# post-process cov
scaling = scaling_repeat[:,3:] * torch.sigmoid(scale_rot[:,:3]) # (1+torch.sigmoid(repeat_dist))
rot = pc.rotation_activation(scale_rot[:,3:7])
```
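A small sketch of the residual-scale idea (the concrete values below are made up for illustration): because `torch.sigmoid` squashes the MLP output into (0, 1), the final per-axis scale is always a fraction of the base scale stored in `scaling_repeat[:,3:]`.

```python
import torch

# hypothetical values: last 3 scaling dims = base shape scale of 2.0 per axis
scaling_repeat = torch.tensor([[1., 1., 1., 2., 2., 2.]])
scale_rot = torch.randn(1, 7)   # 3 scale logits + 4 quaternion components

# sigmoid bounds the residual in (0, 1), so the final scale
# stays strictly between 0 and the base scale on every axis
scaling = scaling_repeat[:, 3:] * torch.sigmoid(scale_rot[:, :3])
assert torch.all(scaling > 0) and torch.all(scaling < scaling_repeat[:, 3:])
```

This is why the MLP does not need to predict absolute sizes: it only modulates the distance-derived base scale from `grid_scaling`.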