fzy28 opened this issue 2 months ago
Aliasing occurs when there are not enough pixels to rasterize a 2D Gaussian. Typically, a 2D Gaussian can degenerate into a very small point (seen from a distant view) or a line (seen from a slanted view). For example, when a 2D Gaussian falls entirely between pixel centers, it is never rendered, so it receives no gradient and can never be optimized: it becomes a dead Gaussian.
(Figure: an example of a 2D Gaussian that falls between pixel centers)
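To make the failure mode concrete, here is a tiny NumPy sketch with made-up numbers (the 0.1 px footprint and the pixel grid are assumptions for illustration, not code from this repo):

```python
import numpy as np

# Hypothetical setup: pixel centers at integer coordinates, and a splat
# whose screen-space footprint (std = 0.1 px) is far smaller than a pixel.
center = np.array([0.5, 0.5])  # splat center, exactly between four pixels
std = 0.1

pixel_centers = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d2 = np.sum((pixel_centers - center) ** 2, axis=1)
alpha = np.exp(-0.5 * d2 / std**2)
print(alpha.max())  # ~1.4e-11: invisible at every pixel center, so no
                    # gradient flows back and the Gaussian cannot recover.
```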
Therefore, we add a low-pass filter for anti-aliasing. The filter size has an impact on optimization: a large filter encourages faster convergence (more pixels receive gradients) but suppresses high-frequency detail. You can see our early experiments on the filter size here.
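For intuition, here is a minimal sketch of a max-style low-pass filter in the spirit of the one described in the paper, `G_hat(x) = max(G(u(x)), G((x - c) / sigma))`. The function shape and the `sigma = sqrt(2)/2` default are assumptions for illustration; this is not the repo's actual rasterizer code:

```python
import numpy as np

def filtered_gaussian(g_object, x, c, sigma=np.sqrt(2) / 2):
    """Low-pass-filtered splat response (illustrative sketch).

    g_object : Gaussian value at the ray-splat intersection u(x),
               evaluated in the splat's own (object) uv space
    x, c     : pixel coordinate and projected splat center (screen space)
    sigma    : screen-space filter std in pixels (value is an assumption)
    """
    d = (np.asarray(x, float) - np.asarray(c, float)) / sigma
    g_screen = np.exp(-0.5 * np.dot(d, d))
    # Taking the max guarantees a minimum ~1-pixel screen-space footprint:
    # even a splat that degenerates to a point or a line still covers the
    # nearest pixel centers and keeps receiving gradients.
    return max(g_object, g_screen)

# The dead splat from the sketch above: its object-space response at
# pixel (0, 0) is ~1e-11, but the screen-space term keeps it visible.
print(filtered_gaussian(1.4e-11, x=[0, 0], c=[0.5, 0.5]))  # ~0.61
```

Larger `sigma` spreads each splat over more pixels (more gradient signal, faster convergence) at the cost of blurring away high-frequency detail, which is exactly the trade-off mentioned above.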
In the main paper, you mentioned that a 2D Gaussian might degenerate to a line in screen space, and that to stabilize optimization you use an object-space low-pass filter.
How and why does this special case influence the optimization? Why can't we simply omit these Gaussians during optimization?