mhfan opened 5 months ago
To me it looks like the beginning and end of the antialiased section are extending further than they should: the start and end look thicker than the rest of the line.
I believe there are two things going on here. It's part of our longer-term roadmap to address them and improve quality, but in the short term I think we'll stick with the basic spec for antialiasing, which is essentially to do the AA calculations in the sRGB colorspace and do compositing after antialiased path rendering.
The first of the issues is the choice of colorspace for doing the antialiasing. The most defensible from a physical rendering perspective is a linear colorspace. For a simple black-on-white vector shape, that's approximately equivalent to applying a gamma curve of 2.2, which I've done below:
Earlier versions of piet-gpu in fact did this, doing all alpha compositing in a linear sRGB space, then doing conversion to device sRGB at the end. There are two problems with this.
The first is that you get alpha compositing results that don't match expected results. In particular, if you composite a black and a white layer at 50% alpha, you get 0.5 linear light intensity, which is 0.735 in device sRGB, or `#BCBCBC`. Most users will expect `#7F7F7F`. This is discussed a bit in the RAVG paper (Figure 3 in particular), and also in Figure 14 of the MPVG paper.
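As a quick check on those numbers (a standalone sketch, not code from the renderer), the sRGB encode of the 50% linear blend lands at `#BC`, while blending directly on encoded values gives the mid gray most people expect:

```rust
/// Encode a linear-light value in 0..=1 with the sRGB transfer curve.
fn linear_to_srgb(c: f64) -> f64 {
    if c <= 0.0031308 {
        12.92 * c
    } else {
        1.055 * c.powf(1.0 / 2.4) - 0.055
    }
}

fn main() {
    // Black over white at 50% alpha, blended in linear light: 0.5.
    let linear_blend = 0.5_f64;
    let device = linear_to_srgb(linear_blend); // ~0.735
    println!("linear-space blend -> #{0:02X}{0:02X}{0:02X}", (device * 255.0).round() as u8);
    // prints #BCBCBC

    // The same blend done directly on sRGB-encoded values stays at 0.5 in
    // device space, i.e. the mid gray (#7F/#80) most users expect.
    println!("sRGB-space blend   -> #{0:02X}{0:02X}{0:02X}", (0.5_f64 * 255.0).round() as u8);
    // prints #808080
}
```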
This particular issue can be addressed by effectively doing the compositing at a higher resolution (using alpha rules appropriate for the document), then downsampling in a linear color space. That's more computationally intensive.
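Roughly, that approach looks like the following sketch (assuming a grayscale supersampled buffer and a plain box downsample; the real pipeline would apply the document's compositing rules at the high resolution):

```rust
fn srgb_to_linear(c: f64) -> f64 {
    if c <= 0.04045 { c / 12.92 } else { ((c + 0.055) / 1.055).powf(2.4) }
}

fn linear_to_srgb(c: f64) -> f64 {
    if c <= 0.0031308 { 12.92 * c } else { 1.055 * c.powf(1.0 / 2.4) - 0.055 }
}

/// Collapse one pixel's supersamples (already composited in the document's
/// color space) into a single device value, averaging in linear light.
fn downsample_to_device(supersamples: &[f64]) -> f64 {
    let avg = supersamples.iter().map(|&s| srgb_to_linear(s)).sum::<f64>()
        / supersamples.len() as f64;
    linear_to_srgb(avg)
}
```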
The second, though, is that doing compositing in the "correct" space does not always look nicer. In particular, looking at the visual examples of MPVG, it is clear that much of the black-on-white text looks anemic and spindly. I believe that a good solution to this problem will involve "stem thickening" to counteract this tendency. And in fact, that is one of the motivations of the recent stroke expansion work, though we have not yet enabled thickening of filled shapes in the pipeline.
A second and related problem is the choice of box filter for reconstruction. According to the theory of Mitchell and Netravali, with followup by Nehab and Hoppe, there is no single ideal sampling filter, only a tradeoff space, with blurriness, ringing, and aliasing as the three corners of a triangle. The box filter is commonly used because it is computationally efficient, but it also represents an appealing point in this tradeoff space for most 2D vector graphics, including most font rendering. The exception is very thin lines, where aliasing is visible as a stepped-like appearance. In the limit of a very thin line, a box sampling filter is equivalent to a non-antialiased line of single-pixel width (but with a much lower alpha to compensate for the fraction of the pixel covered).
One approach is to use a sampling filter that goes a little towards blurriness and away from aliasing in this tradeoff space; a tent filter is a good choice. That's also more computationally intensive, and it leads to quality degradation for vector content that isn't thin strokes.
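To make the thin-line point concrete (a toy model, not how the renderer actually computes coverage), here is the coverage a vertical hairline of width `w` < 1 centered at `x0` contributes to the pixel column centered at `px`, under a box filter versus a radius-1 tent filter:

```rust
/// Coverage of a vertical line spanning [x0 - w/2, x0 + w/2] over the pixel
/// column [px - 0.5, px + 0.5], using a 1-pixel-wide box reconstruction filter.
fn box_coverage(px: f64, x0: f64, w: f64) -> f64 {
    let (lo, hi) = (x0 - w / 2.0, x0 + w / 2.0);
    (hi.min(px + 0.5) - lo.max(px - 0.5)).max(0.0)
}

/// The same line sampled with a tent (triangle) filter of radius 1 pixel.
/// For w << 1 the line acts like an impulse of weight w at x0, so the energy
/// spreads over the two nearest pixel columns instead of landing in one.
fn tent_coverage(px: f64, x0: f64, w: f64) -> f64 {
    let t = 1.0 - (px - x0).abs();
    w * t.max(0.0)
}
```

With the box filter, a hairline that falls entirely within one pixel column yields coverage `w` on that column and zero elsewhere, which is exactly the single-pixel-wide, reduced-alpha line described above; the tent filter spreads the same energy over the two nearest columns, trading the stair-step for a little blur.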
My sense about the best way to reconcile all this is to do some preprocessing of the scene, increasing stroke width for very thin strokes, while also decreasing the alpha to avoid changing the perceived darkness (probably not all the way to preserving total intensity). And indeed, a single-pixel-wide line sampled with a box filter renders to a result similar to a thin line sampled with a tent filter. Very likely, some combination of increasing stroke width for strokes and applying stem thickening to fills will yield the best perceived quality and minimize discrepancies between the two primitives.
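A minimal sketch of that preprocessing (a hypothetical helper, not an existing Vello API), which widens sub-pixel strokes to one pixel and scales alpha down by the width ratio, with an exponent below 1.0 to back off from strict intensity preservation:

```rust
/// Widen sub-pixel strokes to a minimum width and compensate by reducing
/// alpha. `gamma` < 1.0 backs off from exact intensity preservation so very
/// thin strokes don't become too faint. (Hypothetical preprocessing helper.)
fn thin_stroke_compensation(width_px: f64, alpha: f64, gamma: f64) -> (f64, f64) {
    const MIN_WIDTH: f64 = 1.0;
    if width_px >= MIN_WIDTH {
        (width_px, alpha)
    } else {
        let factor = (width_px / MIN_WIDTH).powf(gamma);
        (MIN_WIDTH, alpha * factor)
    }
}
```

With `gamma = 0.8`, for example, a 0.25 px stroke becomes a 1 px stroke at about 0.33 of its original alpha, slightly darker than the 0.25 that exact intensity preservation would give.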
I have ideas on how to do compositing free of conflation artifacts, which would fully address being able to do alpha compositing of document colors and antialiasing in different color spaces. I should write that up as an issue or a design document. My current thinking is that it makes sense to take this on after sparse strip path rendering, as it would potentially be a lot more efficient to render the path once and handle the compositing sparsely.
Sorry if this is not as immediately useful to just trying to get good quality out of the renderer as it exists today, but hopefully it helps explain what's going on, and seeing these examples does help motivate the more sophisticated rendering ideas. And maybe there are some things to try, in particular using thicker strokes at partial alpha.
Thanks for the detailed explanation!
> The box filter is commonly used because it is computationally efficient, but it also represents an appealing point in this tradeoff space for most 2D vector graphics, including most font rendering. The exception is very thin lines, where aliasing is visible as a stepped-like appearance.
Makes sense.
> Sorry if this is not as immediately useful to just trying to get good quality out of the renderer as it exists today,
This is definitely actionable.
> And maybe there are some things to try, in particular using thicker strokes at partial alpha.
Setting a partial alpha on the border color does indeed visually improve the quality. Thanks!
For my use cases testing Vello with Floem, I was able to thicken thin strokes and apply an alpha factor, and the result is effective.
Please click on each of the following three screenshots to see that, compared to Chrome and Femtovg, the Vello rendering results show obvious small jagged edges on the beard.
PS: Femtovg's rendering also had small jaggies, similar to Vello, but after changing the `dpi_factor` to 1.0 it seems to be no worse than Chrome: https://github.com/femtovg/femtovg/blob/4e40a61f824a8ea1bd361b4d227a2187e912124a/examples/svg.rs#L145