Myriad-Dreamin / tinymist

Tinymist [ˈtaɪni mɪst] is an integrated language service for Typst [taɪpst].
https://myriad-dreamin.github.io/tinymist
Apache License 2.0

Enhance text rendering in preview on low-resolution displays #540

Open Myriad-Dreamin opened 2 months ago

Myriad-Dreamin commented 2 months ago

Motivation and Description

The rendered text is unsatisfactory, especially on low-resolution displays. This is because we place glyphs at arbitrary positions, resulting in bad subpixel rendering. Drawing on experience from Google Fonts, typst/pixglyph, and the blog, we'd better rasterize the glyphs ourselves and place them at a finite set of fractional positions, like N + {0, 1/3, or 2/3} px.


A picture of a section of a glyph atlas that contains “m” glyphs rasterized at different sub-pixel alignments.
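The fractional-position idea above can be sketched as a small helper (a hypothetical sketch, not tinymist's actual code): snap each glyph's x-coordinate to the nearest N + m/3 px and report which of the three pre-rasterized atlas variants (m = 0, 1, 2) to draw at that alignment.

```javascript
// Hypothetical sketch: snap a glyph's x-position to the nearest third of a
// pixel (N + 0, N + 1/3, or N + 2/3) and report which of the three
// pre-rasterized atlas variants to use for that alignment.
function snapToThirds(x) {
  const snapped = Math.round(x * 3) / 3;             // nearest N + m/3
  const base = Math.floor(snapped);                  // integer pixel N
  const variant = Math.round((snapped - base) * 3) % 3; // m in {0, 1, 2}
  return { x: snapped, base, variant };
}
```

For example, a glyph placed at x = 40.45 would snap to 40 + 1/3 and use the m = 1 atlas variant.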

Myriad-Dreamin commented 2 months ago

related issue: https://github.com/Enter-tainer/typst-preview/issues/294.

Myriad-Dreamin commented 2 months ago

From some old experiments, the high-performance way is to render and store the bitmaps of text line by line. With this approach, it can hold about 500,000 lines at the same time (on my PC).
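Such a line-by-line bitmap cache might look like the following sketch (all names hypothetical, not the archived experiment's actual code): each laid-out line is rasterized once and kept in an LRU cache, so scrolling only re-renders lines that were evicted.

```javascript
// Hypothetical sketch of a per-line bitmap cache. Map iteration order gives
// a cheap LRU: re-inserting a key moves it to the end, so the first key is
// always the least recently used.
class LineBitmapCache {
  constructor(capacity, rasterize) {
    this.capacity = capacity;   // e.g. ~500,000 lines
    this.rasterize = rasterize; // (lineKey) => bitmap, supplied by the renderer
    this.cache = new Map();
  }
  get(lineKey) {
    if (this.cache.has(lineKey)) {
      const bitmap = this.cache.get(lineKey);
      this.cache.delete(lineKey); // refresh LRU position
      this.cache.set(lineKey, bitmap);
      return bitmap;
    }
    const bitmap = this.rasterize(lineKey);
    if (this.cache.size >= this.capacity) {
      this.cache.delete(this.cache.keys().next().value); // evict oldest
    }
    this.cache.set(lineKey, bitmap);
    return bitmap;
  }
}
```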

memeplex commented 2 months ago

Just out of curiosity, does this also apply to other renderers like shiroa / typst-book, or is it specific to the previewer?

Myriad-Dreamin commented 2 months ago

Just out of curiosity, does this also apply to other renderers like shiroa / typst-book, or is it specific to the previewer?

Yes. Besides, it can also be applied to official Typst's SVG export.

memeplex commented 2 months ago

it can also be applied to official typst's svg export.

the high performance way is to render and stored rendered bitmap of text line by line

Are these different methods? I mean, the second one seems to be a raster approach while the first is vectorial; will you apply both of them?

Myriad-Dreamin commented 2 months ago

it can also be applied to official typst's svg export.

the high performance way is to render and stored rendered bitmap of text line by line

Are these different methods? I mean, the second one seems to be a raster approach while the first is vectorial; will you apply both of them?

Yeah, perhaps not quite, since it needs re-rendering as you change resolution, which requires a script from somewhere. typst.ts's SVG export (for browsers) can do it, while the official one might only apply part of the idea, like just putting glyphs at fixed fractional (n + m/3) positions.

memeplex commented 2 months ago

And in any case (canvas and svg), say you map 40.45 to 40.33, are you sure that the underlying technology/platform will then map that to a subpixel and not just round it to 40?
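Whether a fractional CSS-pixel position survives depends on the devicePixelRatio: at DPR 3, N + 1/3 CSS px is exactly a whole device pixel, while at DPR 1 it is genuinely sub-pixel and the platform may indeed round it. A small illustrative check (hypothetical helper, not part of the codebase):

```javascript
// Hypothetical check: does a fractional CSS position land on a whole device
// pixel? At devicePixelRatio 3, n + 1/3 CSS px maps to exactly 3n + 1 device
// px; at DPR 1 it stays genuinely sub-pixel and may be rounded.
function toDevicePixels(cssX, dpr) {
  const device = cssX * dpr;
  return { device, exact: Math.abs(device - Math.round(device)) < 1e-6 };
}
```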

memeplex commented 2 months ago

Another remark/question: by inspecting the output of the previewer in VSCode and the rendering of the shiroa guide, I see that text is contained in little spans and divs as proper text, not lower-level objects (be they rasters or drawing primitives). Shouldn't that already enable the text-rendering capabilities of the platform (antialiasing, subpixel rendering, hinting, etc., as appropriate for that platform; e.g. on a retina screen probably all disabled)? Isn't that enough, or even preferable?

For example, taken from the VSCode previewer:

<g transform="scale(16,-16)">
  <foreignObject x="0" y="-55.88" width="808.56" height="62.50">
    <h5:div class="tsel" style="font-size: 62px">
      según los valores de una o más
    </h5:div>
  </foreignObject>
</g>

The spec of foreignObject states:

The ‘foreignObject’ element allows for inclusion of elements in a non-SVG namespace which is rendered within a region of the SVG graphic using other user agent processes.

So this text is rendered "using other user agent processes", presumably the same ones that render all regular text, which should already be appropriate for the platform (and, if not, it's user-level business anyway, not typst.ts's).

That said, the scaling and translation may be playing bad with text rendering, I guess this depends on whether the text renderer sees the final coordinate system or one that is yet to be transformed at the raster level.

Later in the document it's stated that:

It is expected that commercial Web browsers will support the ability for SVG to embed CSS-formatted HTML and also MathML content, with the rendered content subject to transformations and compositing defined in the SVG fragment.

which seems to suggest that transformations are applied after the rasterization. If that's the case, perhaps one should be very careful with said transformations, but a simple test in Chrome suggests that this is not a problem:

<svg>
  <g transform="scale(5,5)">
    <foreignObject x="0" y="0" width="300" height="200">
      foobar
    </foreignObject>
  </g>
</svg>
<br/>
<svg>
  <g transform="scale(1,1)">
    <foreignObject x="0" y="0" width="300" height="200">
      foobar
    </foreignObject>
  </g>
</svg>

There is no pixelation whatsoever here; the text renderer is seemingly working in the final coordinate system.

Enter-tainer commented 2 months ago

N + {0, 1/3, or 2/3} px.

the approach mentioned in WARP's blog is interesting. i wonder how it compares to pixglyph's method.

it's a bit unfortunate that svg renderers don't have out-of-the-box subpixel rendering

memeplex commented 2 months ago

that svg renderers don't have out-of-the-box subpixel rendering

But is this true when you use foreignObjects (divs and spans with text inside) that, as per the quote from the spec in my previous comment, are:

rendered "using other user agent processes"

I think they are rendered using just the regular text-rendering routines of the browser, which should support the appropriate optimizations for the platform out of the box.

Enter-tainer commented 2 months ago

@memeplex I think there are some misunderstandings. In tinymist's preview, foreignObjects are used only for text selection, not for text display. The foreignObjects themselves are transparent.

memeplex commented 2 months ago

Ok, I see. And is it possible to use them for rendering too, or do they not give enough control over the output?

Enter-tainer commented 2 months ago

they don't give enough control over the output?

exactly.

hooyuser commented 2 months ago

I am thrilled to see recent efforts to improve font rendering quality! Given that we are considering taking full control over the rasterization process of glyphs, I am curious whether we could implement subpixel rendering. Following my post in Enter-tainer/typst-preview#294, I did some research and discovered that subpixel rendering significantly benefits displays with low resolution and PPI. Here is a screenshot comparison of text rendering with and without subpixel rendering: on the left is Adobe Acrobat with subpixel rendering, and on the right is SumatraPDF without it.


On my 1080p screen, the math formulas $G$ and $f^*{G}$ on the right appear noticeably more jagged.

Zooming in, it is clear that the left side utilizes subpixel rendering, whereas the right does not.

I have reviewed the linked blog post. From my understanding, it enhances text sharpness by aligning the texels of a texture with screen pixels, thereby avoiding any GPU color interpolation during texture sampling. For instance, GPU trilinear interpolation blends 8 texels from two mipmap levels of a texture. The process of "Bezier curve -> rasterized bitmap -> interpolated bitmap" introduces a second pass of sampling an already-inexact bitmap, which can lead to greater imprecision. WARP mitigates aliasing by rasterizing with subpixel kerning directly from the original Bezier curves.

Since WARP is a text editor, it makes sense to offer only a finite number of font sizes, which allows directly copying textures from the atlas without any GPU resampling. Perhaps we could also consider snapping the possible canvas scaling factors to enable one-to-one texture copying?

WARP currently targets macOS with retina screens, which do not suffer from low-PPI issues. For Tinymist, I believe adding subpixel rendering, as demonstrated here, could be highly beneficial for users on general platforms with various display devices. The most straightforward implementation might involve using WebGL and writing some GLSL code similar to:

// sample the single-channel glyph coverage at per-channel subpixel offsets
// (offsetR/G/B are vec2 shifts, e.g. about 1/3 px apart in x)
float r = texture(glyphSampler, vTexCoord + offsetR).r;
float g = texture(glyphSampler, vTexCoord + offsetG).r;
float b = texture(glyphSampler, vTexCoord + offsetB).r;
out_color = vec4(r, g, b, 1.0);

as exemplified in theta. Would the integration of WebGL involve significant changes, or are there simpler approaches to achieve subpixel rendering?

Myriad-Dreamin commented 2 months ago

A WebGL-based approach is possible, but we may also use the canvas directly for early development. As for the rescale issue: if you don't scale at all, there is no problem; otherwise, we can still switch to a simple CSS transform when the scaling factor is high enough.

hooyuser commented 2 months ago

A WebGL-based approach is possible, but we may also use the canvas directly for early development. As for the rescale issue: if you don't scale at all, there is no problem; otherwise, we can still switch to a simple CSS transform when the scaling factor is high enough.

From my experience, I often find myself adjusting the canvas size and zoom level. When scrolling, I like to zoom out, and when writing, I prefer to zoom in to hide the page margins. I also sometimes adjust the width of the code editor to avoid line breaks when editing tables.

Since the rendering quality issues only occur at small (but not too small) text sizes, perhaps we can preset a few specific small sizes and snap to the nearest preset. For extremely small or large font sizes, the current font rendering works fine. With fixed pixel sizes, I believe WebGL/WebGPU isn't necessary; instead, we can use a dedicated rasterizer to generate an atlas with subpixel anti-aliasing. For rendering, I imagine it wouldn't involve any texture sampling: just copying a region of the font atlas onto the canvas, pixel by pixel, which avoids any secondary aliasing caused by upsampling or downsampling.
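The size-snapping idea could be sketched like this (the preset sizes, names, and range are purely illustrative, not a proposed configuration): sizes outside the "small but not too small" range fall back to the current renderer, and sizes inside it snap to the nearest preset for which an atlas exists.

```javascript
// Hypothetical sketch: decide how to render a given font size. Only the
// "small but not too small" range benefits from the pre-rasterized atlas;
// everything else keeps the current (vector) rendering path.
const PRESET_SIZES = [9, 10, 11, 12, 14, 16, 18, 20]; // px, illustrative

function pickRenderSize(fontPx) {
  const min = PRESET_SIZES[0];
  const max = PRESET_SIZES[PRESET_SIZES.length - 1];
  if (fontPx < min || fontPx > max) {
    return { path: 'vector', size: fontPx }; // current rendering is fine here
  }
  // snap to the nearest preset size that has an atlas
  const snapped = PRESET_SIZES.reduce((best, s) =>
    Math.abs(s - fontPx) < Math.abs(best - fontPx) ? s : best);
  return { path: 'atlas', size: snapped };
}
```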

In case some users require a specific zoom level for some reason, we can also allow them to manually specify a scaling factor. This way, we can generate an atlas tailored to their specific needs.

Myriad-Dreamin commented 3 weeks ago

The experiment is archived at https://github.com/Myriad-Dreamin/typst.ts/commit/71d5891eeb3095fe0e6876bda283445e3b12c6cf. It is discontinued because the implemented manual text rendering is not perfect and costs too much.