onivim / oni

Oni: Modern Modal Editing - powered by Neovim
https://www.onivim.io
MIT License

[UI] Bold fonts slightly cut off #1252

Open rococode opened 6 years ago

rococode commented 6 years ago

It seems that bold characters get slightly cut off: in the image above you can see the m at the end of from looks like it's missing a lil bit at the end.

This is the same font and color style in an IDE.

Strangely enough, when the letter is highlighted by the cursor in Oni, there's no problem:

bryphe commented 6 years ago

Thanks for logging the issue, @rococode ! Looks like it is potentially related to #1083 and #878

akinsho commented 6 years ago

@bryphe would the solutions you suggested in #1203 (re-rendering the whole canvas, or using the drawImage method to avoid subpixel rendering wonkiness) be relevant to this? Personally I have zero clue how to use drawImage to render text instead of fillText, but I'd say this is one of my biggest pain points atm, so I'm very motivated to find a solution. I believe the clipping also impacts nerd fonts and wide unicode chars, which I have littered all over the place in my vim config.

bryphe commented 6 years ago

This is a tricky one, @Akin909 ! Unfortunately I don't know what the best solution would be here - still a lot of unknowns. But some different possibilities:

The full redraw would be a fix for sure, but it would also be a huge blow to perf. An alternative option would be to re-render a bigger chunk when the background color changes - if the background color changes, we could dirty all the adjacent tokens. This would still be expensive, but hopefully less expensive.

One other, more exotic possibility I was thinking about is that we skip the canvas for rendering the alternate backgrounds. We use it for the 'common case' of the default background, but for elements with a non-default background (which I believe should be less common), we render via overlaid DOM elements. Then we would never have to use fillRect for whitespace, and we could clear text by drawing over it with fillText (or drawImage). We'd have to essentially diff the previous state and the new state, though.

Yet another possibility - we split the clearing and the drawing text into two separate phases. When we go to the drawing text phase, we draw the tokens for all the regions we cleared and the tokens for the regions before (but only the text - not the background). This would 'clean up' parts of the font that would've been cleared. This approach also seems like it could be more testable - we'd essentially do a diff of the previous screen state and the new screen state, and get the set of regions to 'clear' and the tokens to 'redraw', and then apply that to the canvas. We could test the 'diff' layer separately from the actual application to the canvas.

I wish I knew the best fix for it... but there is definitely still some investigation needed 😄 Definitely been a thorn in our side w/ the canvas rendering. Really appreciate the help though... Hopefully that helps give you some ideas to try, though!

akinsho commented 6 years ago

@bryphe once I start to clear out my backlog of PRs I was gonna start looking at this again, as it remains my biggest bugbear. Just wanted to check in that the above is still valid re: where to start fixing this issue.

I'm leaning towards this potential solution currently (since it seems the most minimal):

We could look at a smarter algorithm for selecting when to use Math.floor and Math.ceil when rounding the rectangles we use for fillRect to paint the background.

bryphe commented 6 years ago

@bryphe once I start to clear out my backlog of PRs I was gonna start looking at this again, as it remains my biggest bugbear

Awesome!

Just wanted to check in that the above is still valid re: where to start fixing this issue.

Yes, although I'm not sure it will be entirely solvable by just tweaking Math.floor and Math.ceil. The problem is, as long as we are doing sub-pixel rendering (meaning font characters can have a fractional width, like 7.58px), there are always going to be cases where clearing the rectangle doesn't match up completely with the boundary of the letter. So I think that perhaps a more robust solution, like:

Yet another possibility - we split the clearing and the drawing text into two separate phases. When we go to the drawing text phase, we draw the tokens for all the regions we cleared and the tokens for the regions before (but only the text - not the background). This would 'clean up' parts of the font that would've been cleared. This approach also seems like it could be more testable - we'd essentially do a diff of the previous screen state and the new screen state, and get the set of regions to 'clear' and the tokens to 'redraw', and then apply that to the canvas. We could test the 'diff' layer separately from the actual application to the canvas.

might be the way to approach it. It also means that we can add more testability to this layer, which is important.

manu-unter commented 6 years ago

Hey folks!

Using drawImage instead of fillRect might help, if the anti-aliasing behavior is different. We could create a 1-pixel image for the color we want (in a separate canvas), and then use drawImage instead of fillRect to clear the color. Worth prototyping to see - and it'd be worth trying with ctx.imageSmoothingEnabled set to both true and false.

I just tried this out, unfortunately without success. The visual result is exactly the same, regardless of the fact that we are now drawing an image, even with imageSmoothingEnabled set to true. The image smoothing doesn't seem to affect the drawing borders, just the actual content of the drawn image.

Fun fact: image smoothing seems to operate with the complete original image as input, because a 1x1 image was not enough to create a solid background with smoothing enabled. Instead, it created a slight gradient:

[screenshot, 2018-04-02: a slight gradient instead of a solid fill]

I bumped the size to 10x10 and only used the center pixel, which then gave me a solid background color again.
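
For anyone who wants to reproduce this, roughly what I tried (a paraphrased sketch, not the exact code):

```ts
// Sketch of the experiment described above; not the actual Oni code.
function makeSwatch(color: string): HTMLCanvasElement {
    const swatch = document.createElement("canvas");
    swatch.width = 10;
    swatch.height = 10;
    const ctx = swatch.getContext("2d")!;
    ctx.fillStyle = color;
    ctx.fillRect(0, 0, 10, 10);
    return swatch;
}

// "Clear" a region by stretching the center pixel of the swatch over it.
function clearWithDrawImage(
    ctx: CanvasRenderingContext2D,
    swatch: HTMLCanvasElement,
    x: number, y: number, w: number, h: number,
): void {
    ctx.imageSmoothingEnabled = true; // made no visible difference to the clipping
    // Source rect is the 1x1 center pixel; destination is the region to fill.
    ctx.drawImage(swatch, 5, 5, 1, 1, x, y, w, h);
}
```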

Bottom line: drawImage won't help us :(

Regarding this approach

We could look at a smarter algorithm for selecting when to use Math.floor and Math.ceil when rounding the rectangles we use for fillRect to paint the background.

I am not too optimistic that there would be a reasonably simple approach to this. If I understood your idea correctly, each rendered piece of text would need to decide if it rounds each of its background bounds up or down given its content and the decisions of the spans above (which might be multiple ones) and to the left. Maybe I'm missing a much simpler approach here, but this seems way too complex to me.

One other, more exotic possibility I was thinking about is that we skip the canvas for rendering the alternate backgrounds. We use it for the 'common case' of the default background, but for elements with a non-default background (which I believe should be less common), we render via overlaid DOM elements. Then we would never have to use fillRect for whitespace, and we could clear text by drawing over it with fillText (or drawImage). We'd have to essentially diff the previous state and the new state, though.

We are already doing this kind of thing in several places, e.g. with the error squiggles, right? While it also sounds unconventional to me, maybe this is a reasonable compromise, given the assumption that there won't be too many non-default background areas. It would require, however, that we remove all fillRects from the canvas renderer so that we can place background spans behind the text, right? I don't know how large the effort of extracting the background color information to a different service would be. Also, I have seen delays between scrolling and the error squiggles following behind before (I'm not sure whether that is due to the separation of canvas and DOM or just a slow language server); these kinds of artifacts could then also happen here.

Yet another possibility - we split the clearing and the drawing text into two separate phases. When we go to the drawing text phase, we draw the tokens for all the regions we cleared and the tokens for the regions before (but only the text - not the background). This would 'clean up' parts of the font that would've been cleared. This approach also seems like it could be more testable - we'd essentially do a diff of the previous screen state and the new screen state, and get the set of regions to 'clear' and the tokens to 'redraw', and then apply that to the canvas. We could test the 'diff' layer separately from the actual application to the canvas.

This might also be a feasible approach, even though I'm not sure if it would help us with #1083 since we still seem to have some bleeding of backgrounds after clearing them again. In any case, if we want to make sure that characters from the lines before and after the rendered lines are not truncated like the characters to the left are now (No idea if that is a realistic problem), we might need to redraw these characters as well. The approach would at least leave the renderer interface as it is today and only increase complexity inside the renderer itself.

I am unsure whether the DOM-based backgrounds approach or the conservative re-drawing approach would fare better in terms of performance. While manipulating the DOM is always an expensive operation, it would only happen when non-default background cells are involved. On the other hand, canvas operations might be cheap, but the conservative re-drawing of surrounding elements would need to happen on each and every redraw. Then again, there shouldn't be a perceptible difference between redrawing parts of the screen and redrawing those same parts plus each adjacent cell.

How do you guys think we should proceed here?

bryphe commented 6 years ago

Bottom line: drawImage won't help us :(

Great to know we can rule that out! 👍 Thanks for the investigation @Cryza !

I am not too optimistic that there would be a reasonably simple approach to this. If I understood your idea correctly, each rendered piece of text would need to decide if it rounds each of its background bounds up or down given its content and the decisions of the spans above (which might be multiple ones) and to the left. Maybe I'm missing a much simpler approach here, but this seems way too complex to me.

Thanks for your thoughts here! I agree - I always hoped that we could fix this by adding a couple of Math.floor or Math.round calls, but I came to the same conclusion: it's complex, and I don't think it could be generalized to solve the problem completely.
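
To make that concrete, a small worked example, reusing the hypothetical 7.58px width from above:

```ts
// Hypothetical numbers; illustrates why rounding alone can't fix the clipping.
const cellWidth = 7.58; // fractional character width, as in the example above

// Clearing cell 10 with integer-rounded bounds:
const trueLeft = 10 * cellWidth;    // 75.8
const trueRight = 11 * cellWidth;   // 83.38
const left = Math.floor(trueLeft);  // 75
const right = Math.ceil(trueRight); // 84

// The cleared rect [75, 84) erases 0.8px of cell 9's glyph on the left and
// 0.62px of cell 11's glyph on the right - exactly the anti-aliased slivers
// that make the bold 'm' look cut off. Swapping floor/ceil just moves the
// overlap to the other neighbor; some adjacent token always has to be redrawn.
```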

It would require, however, that we remove all fillRects from the canvas renderer so that we can place background spans behind the text, right? I don't know how large the effort of extracting the background color information to a different service would be. Also, I have seen delays between scrolling and the error squiggles following behind before (I'm not sure whether that is due to the separation of canvas and DOM or just a slow language server); these kinds of artifacts could then also happen here.

Regarding this - yep, we'd remove the fillRects from this strategy. My thinking with this was that the CanvasRenderer would still manage the DOM here (so we wouldn't delegate it to another service) - we might want to rename it to a HybridRenderer or something though, since it would be using a mix of canvas/DOM to render the viewport (or we could implement it as two separate renderers, with the DOM renderer overlaid on top of the canvas renderer!).

If the Renderer managed these DOM elements, we wouldn't experience the set of delays we have with scrolling / squiggles, luckily. We'd also have all the context we'd need to render these elements. If it's a minimal set of elements we're rendering, and we're careful with the attributes (use transform and position to avoid layout, use contain to control the impact on paint/layout, etc.), it might not be a big perf hit - certainly much less than the previous DOM renderer strategy, because that had a pretty huge number of DOM elements.
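
As a sketch of what being 'careful with the attributes' could look like (a hypothetical helper, not existing Oni code):

```ts
// Hypothetical sketch of an overlaid highlight element, positioned with
// transform (no layout work) and isolated with CSS containment.
function createHighlightElement(
    x: number, y: number, width: number, height: number, background: string,
): HTMLDivElement {
    const el = document.createElement("div");
    el.style.position = "absolute";
    el.style.top = "0";
    el.style.left = "0";
    el.style.width = `${width}px`;
    el.style.height = `${height}px`;
    // Moving via transform instead of top/left avoids triggering layout.
    el.style.transform = `translate(${x}px, ${y}px)`;
    el.style.background = background;
    // Limit this element's impact on layout/paint of the rest of the page.
    el.style.contain = "strict";
    return el;
}
```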

One other approach to add to the list would be to push even lower in the stack, like Xray - they are using a WebGL-based approach to have complete control over the text rendering pipeline. Will be interesting to keep an eye on their progress: https://github.com/atom/xray/tree/master/xray_electron/lib/render_process/text_editor There are a lot of challenges here - for example, handling international fonts and ligatures... but my hope is maybe they can build this into a reusable component that we could plug in eventually 👍 It's a lot of work but really cool!

With the conservative re-drawing approach, my thinking is that this could also be an opportunity to streamline / refactor our CanvasRenderer a bit (which is a bit unwieldy at the moment).

It'd be cool if we could split the reconciliation step out from the renderer - where the reconciliation step is a function: (oldState, newState) => { regionsToClear[], textRegions[] }. Basically it does a 'diff' of the old state and the new state, and gives us the minimal set of regions to render, accounting for potential sub-pixel overlaps.

The regionsToClear would be an array of rectangles + background color. The textRegions would include the position of the token (x,y), the color, font info (bold/italic), and the token itself. The CanvasRenderer would take the result of that and render in two passes - first, applying the clear regions, and then second, rendering all the tokens.

The way this could solve the subpixel rendering problem is by accounting for cases where there is an overlap (where different background colors meet between text tokens), and including that in the output of that reconciliation function. This would be a nice refactoring, since it could make the CanvasRenderer a bit simpler and push some of the complexity to that reconciliation algorithm. We could probably improve a lot the way we gather our text 'regions', too.
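
To sketch the shape of that function in TypeScript (all names here are illustrative, not existing Oni types):

```ts
// Illustrative sketch of the reconciliation step described above.
// ScreenState stands in for whatever grid snapshot the renderer keeps.
type ScreenState = unknown;

interface ClearRegion {
    x: number;
    y: number;
    width: number;
    height: number;
    backgroundColor: string;
}

interface TextRegion {
    x: number;
    y: number;
    foregroundColor: string;
    bold: boolean;
    italic: boolean;
    token: string;
}

interface RenderDelta {
    regionsToClear: ClearRegion[];
    textRegions: TextRegion[];
}

// Pure diff: easy to unit-test without a canvas. It would be responsible for
// widening cleared regions to cover adjacent tokens whose sub-pixel edges
// overlap them, so the text pass can redraw any glyphs the clear pass clipped.
declare function reconcile(oldState: ScreenState, newState: ScreenState): RenderDelta;
```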

So I guess to summarize, there are 3 potential options to look at next:

- Hybrid rendering (DOM for non-default backgrounds + canvas for text)
- An improved reconciliation algorithm for the canvas renderer
- A WebGL-based renderer (like Xray's)

I think the first two are the most realistic to pursue next. To me, the DOM + canvas seems like it might be the easiest one to pursue.

(And the first two aren't necessarily mutually exclusive... you could imagine our 'reconciliation function' - (oldState, newState) => { regionsToClear[], textRegions[] } - actually being used by both the canvas and the 'DOM highlight' renderers.)

Hope that at least helps give some ideas... Thanks for all the thoughts here, @Cryza !

manu-unter commented 6 years ago

Thanks a lot for the input!

I am pretty sure that I fully understood our problem now, I'll try to summarize it:

- Because of sub-pixel rendering, characters have fractional widths, so glyph boundaries almost never fall on whole pixels, and the anti-aliased edges of a glyph bleed into the neighboring cells.
- We only redraw the cells that changed, and we clear them with rectangles. That clearing inevitably erases the overhanging anti-aliased pixels of unchanged neighboring glyphs, and nothing redraws those glyphs afterwards.
- This also explains why a clipped character looks fine under the cursor: the highlight forces that cell to be fully redrawn, restoring the glyph.

Now let's have a look at the approaches for solving this again:

Hybrid rendering - DOM + canvas

With the above explanation of the actual problem, I now think that this would in fact not solve our problem. We would only replace fillRect with clearRect, because we would still need to get rid of the previously rendered tokens, which leaves us with the same problem as before. That means the hybrid approach would only work in combination with the improved reconciliation algorithm. Bummer!

Improved reconciliation algorithm - Canvas

As stated above, I am convinced now that if we want to stick to only redrawing parts of the canvas, we need to improve our reconciliation algorithm so that tokens at the boundaries of updated regions are redrawn after the cells were cleared/filled.

WebGL

Xray definitely looks like a project we should keep an eye on! From a quick look at their renderer, it looks like they do a complete redraw every time, which means they don't have the problem we are currently facing. Apparently, the WebGL pipeline is fast enough that they can afford that. Maybe it would actually be worth looking into integrating a WebGL renderer in Oni as well, if we can get rid of the artifacts, possibly also improving performance along the way. Off the top of my head, I can't really think of a reason why this shouldn't also work for us.

I hope this helps us find the right way of solving this! Out of curiosity and because I think it could really make things easier for us, I will try to adopt the WebGL strategy for Oni now. If this ends up being at least as performant as our current approach, we could ditch the reconciliation and simply redraw all the time.

I'll keep you posted on my progress!

akinsho commented 6 years ago

@Cryza Thanks for the summaries and input. Tbh I feel like this was vaguely floating around in my head, but it's nice to have the details laid out in this form.

Re: strategies, I second avoiding the hybrid, as it actually seems like the most bulky solution proposed. Making our rendering process a hybrid might also make it harder to follow as time goes on, with some bits rendering in the DOM and some not 🤷‍♂️. It also seems to still depend on the second solution, so it's effectively two solutions in one.

V. in love with the idea of using WebGL (which is what primarily attracted me to Xray). I think it'd really be such a huge pro for Oni. Not sure how complex it would be to implement, but let me know if you need a hand at some point (I'm not familiar with WebGL, but it's v. high on my list of things I plan on delving into).

PS: at worst we can always fall back to the improved reconciliation if WebGL proves too complex or unfeasible for any reason.

bryphe commented 6 years ago

Yes, excellent summary @Cryza ! 👍

I hope this helps us find the right way of solving this! Out of curiosity and because I think it could really make things easier for us, I will try to adopt the WebGL strategy for Oni now. If this ends up being at least as performant as our current approach, we could ditch the reconciliation and simply redraw all the time.

This sounds great! Totally on-board - I think WebGL is the right long-term approach for Oni's editor surface 👍 It gives us complete control over the rendering behavior and performance characteristics.

From a quick look at their renderer, it looks like they do a complete redraw every time, which means they don't have the problem we are currently facing. Apparently, the WebGL pipeline is fast enough that they can afford that.

Definitely, this saves a ton of complexity - tracking the previous state is what makes the current CanvasRenderer really confusing. It lets us side-step the reconciliation issue entirely!

We could implement this side-by-side with the existing editor, sort of like what we had with the DOM renderer, where there was a configuration setting for editor.renderer (canvas | webgl). We could keep canvas as the default, but it would give people a chance to see progress by flipping over to the webgl strategy. (Under the hood, both the DOMRenderer and CanvasRenderer implemented INeovimRenderer - making it easy to flip between them.)
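
As a sketch of the wiring (the INeovimRenderer method names here are guesses on my part, not the actual interface):

```ts
// Sketch only - the real INeovimRenderer surface may differ.
interface INeovimRenderer {
    start(element: HTMLCanvasElement): void;
    redraw(screenInfo: unknown): void;
}

class CanvasRenderer implements INeovimRenderer {
    public start(element: HTMLCanvasElement): void { /* existing canvas path */ }
    public redraw(screenInfo: unknown): void { /* ... */ }
}

class WebGLRenderer implements INeovimRenderer {
    public start(element: HTMLCanvasElement): void { /* new WebGL path */ }
    public redraw(screenInfo: unknown): void { /* ... */ }
}

// Pick the renderer from a (hypothetical) "editor.renderer" setting,
// keeping canvas as the safe default while webgl is experimental.
function createRenderer(configured: string): INeovimRenderer {
    return configured === "webgl" ? new WebGLRenderer() : new CanvasRenderer();
}
```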

Another thing to think about is how we can split up the tasks, or have small milestones. Just off the top of my head, something like:

- Initial WebGL implementation of INeovimRenderer - draw a colored quad - validates the basic WebGL rendering pipeline.
- Render white text on a black background, basic font - proof of concept that it works / can start checking performance, etc!
- Implement colored text
- Implement colored background
- Implement bold
- Implement italic
- Implement ligatures (?)

Getting those first two bullet points going is the big hurdle - then it will be easier to plug in and incrementally improve it. Might even be worth splitting up into separate issues.

But I'm stoked about this, I would ❤️ to have a WebGL strategy for our editor surface, and have been wanting to learn more about it too. Thanks @Cryza ! 💯

dlee commented 6 years ago

I just wanted to point out that italic text also gets chopped off in certain situations:

[screenshot, 2018-04-03: italic text clipped at the edge]

but not in others:

[screenshot, 2018-04-03: italic text rendering intact]

manu-unter commented 6 years ago

Status report:

Initial WebGL implementation of INeovimRenderer - draw a colored quad - validates the basic WebGL rendering pipeline.

Can be considered done - I took most of the WebGL code from Xray, adapted it, and got some text to render :)

Render white text on a black background, basic font - proof of concept that it works / can start checking performance, etc!

I already have white text on a black background, but the renderer API is not yet connected to the INeovimRenderer, which is why I am only rendering dummy text so far. I will connect the APIs now, and after that we can start doing performance tests. I think I could use some advice from you, @bryphe, in terms of how exactly I should do this so that we get meaningful measurements.

Regarding the upcoming steps, some thoughts that I have so far: Rendering text using WebGL is usually done by taking the rendered representation of a character from an atlas (sprite) and putting it into the WebGL canvas at the right position. The atlas can also be generated dynamically, using a 2d canvas element and taking its result as a texture. That is exactly what we are doing now. This complicates our styling options, however:

Implement colored text

This could generally be done in two ways: creating one representation of the character in the atlas per color used, or rendering every letter in white and coloring it in the shader by multiplying with the color to be used. I would strongly favor the latter, and I have already found an implementation of this approach online (https://webglfundamentals.org/webgl/lessons/webgl-text-texture.html).
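
The tinting approach from that article boils down to a fragment shader along these lines (a sketch; the uniform names are mine):

```ts
// GLSL fragment shader (embedded as a string) for tinting white atlas glyphs.
const glyphFragmentShader = `
    precision mediump float;

    uniform sampler2D atlas;    // glyphs rasterized in white
    uniform vec4 foreground;    // desired text color
    varying vec2 vTexCoord;

    void main() {
        // White pixels become the foreground color; anti-aliased grey edge
        // pixels become proportionally dimmed foreground.
        gl_FragColor = texture2D(atlas, vTexCoord) * foreground;
    }
`;
```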

Implement colored background

I am pretty sure we can take most of the existing code for the selections from Xray for doing this, so this shouldn't be a problem :)

Implement bold
Implement italic

I think this will boil down to adding bold and italic versions of each character to the atlas when they are required.

Implement ligatures (?)

The Xray team has tried out a full-blown text shaping library called HarfBuzz for supporting ligatures, text direction changes, and Arabic and other special characters. It turned out to be too slow for their 8ms budget of rendering time. What is our current performance here? Maybe we would make a different decision? (Source: https://github.com/atom/xray/blob/master/docs/updates/2018_03_05.md)

What the Xray team is currently planning is to fall back to HTML-based rendering for complex characters and non-standard text directions, and to implement some custom ligature handling for regular font ligatures (e.g. FiraCode).

Since this is arguably the last priority in this issue, I think we can just wait a little and reconsider this topic as soon as we have gathered some performance data.

On a general note regarding the atlas: currently, we add each character that is not yet in the atlas on the fly. That means that at some point it will be full, and we'll run out of space for adding another character. I am not sure yet if we can expand the atlas on the fly or if we have to find a completely different strategy here. Just as a heads-up :)
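
To illustrate the current on-the-fly approach and where it runs out of space, a condensed sketch (not the actual renderer code):

```ts
// Condensed sketch of a dynamically filled glyph atlas (not the actual code).
class GlyphAtlas {
    private canvas = document.createElement("canvas");
    private ctx = this.canvas.getContext("2d")!;
    private positions = new Map<string, { x: number; y: number }>();
    private nextX = 0;

    constructor(private cellWidth: number, cellHeight: number, font: string) {
        this.canvas.width = 1024;
        this.canvas.height = cellHeight;
        this.ctx.font = font;
        this.ctx.textBaseline = "top";
        this.ctx.fillStyle = "white"; // white glyphs, tinted later in the shader
    }

    // Rasterize a character on first use and remember where it landed.
    public getGlyphPosition(char: string): { x: number; y: number } {
        let pos = this.positions.get(char);
        if (!pos) {
            if (this.nextX + this.cellWidth > this.canvas.width) {
                throw new Error("Atlas full!"); // the open problem noted above
            }
            pos = { x: this.nextX, y: 0 };
            this.ctx.fillText(char, pos.x, pos.y);
            this.positions.set(char, pos);
            this.nextX += this.cellWidth;
        }
        return pos;
    }

    // Upload the atlas canvas as a texture for the glyph quads to sample.
    public upload(gl: WebGLRenderingContext): WebGLTexture {
        const texture = gl.createTexture()!;
        gl.bindTexture(gl.TEXTURE_2D, texture);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, this.canvas);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
        return texture;
    }
}
```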

bryphe commented 6 years ago

Amazing progress, @Cryza !

I think I could use some advice from you, @bryphe, in terms of how exactly I should do this so that we get meaningful measurements.

I actually haven't done any WebGL profiling in the browser before. Back when I did DirectX stuff I focused a lot on the CPU work (that offloads stuff to the GPU), and fillrate (pixel shader) perf... but not sure if that even still applies. I think at least to start, we could see what the performance profiling tools give us in Chrome. I'm happy to help take a look at this when you have a prototype, too!

Rendering text using WebGL is usually done by taking the rendered representation of a character from an atlas (sprite) and putting it into the WebGL canvas at the right position. The atlas can also be generated dynamically, using a 2d canvas element and taking its result as a texture. That is exactly what we are doing now. This complicates our styling options, however:

Very cool, this makes sense.

This could generally be done in two ways: creating one representation of the character in the atlas per color used, or rendering every letter in white and coloring it in the shader by multiplying with the color to be used. I would strongly favor the latter, and I have already found an implementation of this approach online (https://webglfundamentals.org/webgl/lessons/webgl-text-texture.html).

I like the latter approach too! Seems much easier on memory 👍 Maybe even open the door for some interesting shader lighting effects...

The plan for bold / italic makes sense, too.

What the Xray team is currently planning is to fall back to HTML-based rendering for complex characters and non-standard text directions, and to implement some custom ligature handling for regular font ligatures (e.g. FiraCode).

Ah, this is cool! Seems like a good approach to me.

Since this is arguably the last priority in this issue, I think we can just wait a little and reconsider this topic as soon as we have gathered some performance data.

Agree 100%!

On a general note regarding the atlas: currently, we add each character that is not yet in the atlas on the fly. That means that at some point it will be full, and we'll run out of space for adding another character. I am not sure yet if we can expand the atlas on the fly or if we have to find a completely different strategy here. Just as a heads-up :)

Seems reasonable to me. Is there a limit on the size of our character atlas? Or can we create multiple character atlases? Like, would it be possible to send an index along with the character data saying "this character is on atlas 0, this character is on atlas 1", etc.?
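
Something like this, just to illustrate what I mean (illustrative only, not real code):

```ts
// Illustrative sketch of a multi-page atlas lookup.
interface GlyphLocation {
    atlasIndex: number; // which texture page the glyph lives on
    x: number;
    y: number;
}

class AtlasSet {
    private locations = new Map<string, GlyphLocation>();
    private currentPage = 0;
    private nextX = 0;

    constructor(private pageWidth: number, private cellWidth: number) {}

    public locate(char: string): GlyphLocation {
        let loc = this.locations.get(char);
        if (!loc) {
            if (this.nextX + this.cellWidth > this.pageWidth) {
                this.currentPage++; // page full: open a fresh texture page
                this.nextX = 0;     // (rasterizing into it is omitted here)
            }
            loc = { atlasIndex: this.currentPage, x: this.nextX, y: 0 };
            this.locations.set(char, loc);
            this.nextX += this.cellWidth;
        }
        return loc;
    }
}
```

The renderer would then either bind the matching texture per batch, or pass atlasIndex through as a vertex attribute.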

Awesome stuff @Cryza ! Can't wait to see a prototype! 💯 🎉

bfulop commented 6 years ago

Just wanted to mention here that it took Sublime Text about 5 years to add font ligature support. I wish I could find their post about it (it was quite detailed), but in short it required a change in their architecture to implement (because, I think, the fonts were no longer monospaced).

Also, some fonts don't support ligatures at all (again, I'll post here if I find the article) because the same sequence of characters can mean different things across programming languages and thus should be represented by different ligatures.

So, while most people "absolutely need" ligatures, I favour bragging about having WebGL powered text rendering in my code editor! 👍😎

manu-unter commented 6 years ago

Thanks for the insights, @bfulop ! I agree on the priorities here. It is much more desirable to have a performant editor than to support font ligatures. The fact that we should pay attention to the character widths if we ever want to support ligatures is a very good hint! I think we are somewhat in a different situation though, since our character layout will always be based on terminal-style cells provided by neovim (at least if I understood that correctly). That means we should make sure that we are able to render characters that span more than one cell in their width, but we won't need to be able to support full variable-width fonts, since that contradicts our underlying concept of a grid.

A quick update on my progress with the WebGL renderer:

- I also managed to get greyscale-based font anti-aliasing to work (the stuff that canvas does with { alpha: true }), but going to full subpixel-based anti-aliasing will require some more work on the background rendering, since transparency and subpixel-antialiasing can't both be done completely correctly with RGBA
- The way we render through the INeovimRenderer is disadvantageous to the WebGL pipeline and currently produces too many reinitializations (or maybe I just haven't found the right abstraction yet)
- The mapping between CSS-compatible color strings and WebGL-compatible RGBA values currently takes about 99% of the rendering time, I'll have to write a caching mechanism or something similar for that.

I have to mention here that @nathansobo was extremely helpful in explaining what they did in their WebGL pipeline for Xray; especially the subpixel-rendering-compatible shaders would have been impossible to understand without his help :)

I'll unfortunately not find the time to continue working on the renderer during this week, but I'm hoping to be able to open a PR for the first working version of a WebGL-renderer by the end of next week.

bryphe commented 6 years ago

Sounds like incredible progress, @Cryza ! I agree with @bfulop , I'm looking forward to bragging about using a WebGL powered text editor too 😄

I also managed to get greyscale-based font anti-aliasing to work (the stuff that canvas does with { alpha: true }), but going to full subpixel-based anti-aliasing will require some more work on the background rendering, since transparency and subpixel-antialiasing can't both be done completely correctly with RGBA

If we make the constraint that the background is always solid color, and eliminate transparency, does that simplify the subpixel antialiasing? I think that's fine for the initial implementation.

The way we render through the INeovimRenderer is disadvantageous to the WebGL pipeline and currently produces too many reinitializations (or maybe I just haven't found the right abstraction yet)

We can definitely tweak this if a better abstraction would fit.

The mapping between CSS-compatible color strings and WebGL-compatible RGBA values currently takes about 99% of the rendering time, I'll have to write a caching mechanism or something similar for that.

Sweet, this sounds like it will at least be straightforward to cache, and get a big perf boost! Then I guess it's onto the next bottleneck 😄
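
Something as simple as a Map in front of the conversion might do it (a sketch; this only handles #rrggbb, the real conversion would need the full CSS color grammar):

```ts
// Minimal sketch of a memoized CSS-color -> RGBA converter.
type RGBA = [number, number, number, number];

const colorCache = new Map<string, RGBA>();

function cssColorToRGBA(color: string): RGBA {
    let rgba = colorCache.get(color);
    if (!rgba) {
        const r = parseInt(color.slice(1, 3), 16) / 255;
        const g = parseInt(color.slice(3, 5), 16) / 255;
        const b = parseInt(color.slice(5, 7), 16) / 255;
        rgba = [r, g, b, 1.0];
        colorCache.set(color, rgba); // pay the parse cost once per color
    }
    return rgba;
}
```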

I'll unfortunately not find the time to continue working on the renderer during this week, but I'm hoping to be able to open a PR for the first working version of a WebGL-renderer by the end of next week.

Can't wait to see it 👍 Thanks for the update and all your work on this, @Cryza ! Excited to try it out.

manu-unter commented 6 years ago

Small update, since I managed to do a little work yesterday:

I will now put more focus on the background rendering, especially making it work correctly with subpixel antialiasing; I feel like this is a very important feature for people working on non-retina displays :)

If we make the constraint that the background is always solid color, and eliminate transparency, does that simplify the subpixel antialiasing? I think that's fine for the initial implementation.

That would in fact be necessary for subpixel antialiasing in any case, so I think we will in the long run need to offer two configurations (which is also what canvas does with the alpha option, and now I also understand why):

- a transparency-capable mode that uses greyscale anti-aliasing, and
- an opaque-background mode that allows full subpixel anti-aliasing
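
For reference, both context types expose that switch through their alpha context attribute:

```ts
// Opaque contexts let the browser use subpixel (LCD) anti-aliasing;
// transparent ones fall back to greyscale anti-aliasing.
const ctx2d = document.createElement("canvas").getContext("2d", { alpha: false });
const gl = document.createElement("canvas").getContext("webgl", { alpha: false });
```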

On the topic of performance: if anyone can give me some hints on how to use the performance devtools in Electron in a meaningful way, or on other methods of finding performance bottlenecks, I would be really grateful! :) Once I get the background stuff and possibly also bold and italic fonts to work, I will focus on cleaning up and optimizing the performance.

CrossR commented 6 years ago

I'll assume you've seen this page @Cryza but I'll link it just in case: https://github.com/onivim/oni/wiki/Performance-Profiling

Oh, and don't feel like everything needs to be done in one go; having a PR / merging a minimal experimental version could be useful to get a few other people testing it, whilst you make additional PRs to expand it. (It also lets us be nosey and see the fancy new renderer!)

manu-unter commented 6 years ago

@CrossR Thanks for the link, I hadn't seen that part of the wiki before in fact :)

You are right about doing everything in smaller steps. I'll try to find the time for some cleanup and refactoring of the current state and then open a PR during the weekend!

bryphe commented 6 years ago

You are right about doing everything in smaller steps. I'll try to find the time for some cleanup and refactoring of the current state and then open a PR during the weekend!

Sweet! That sounds great - I can't wait to try it out! +1 to having a minimal experimental version; even if it's just barebones right now, it'll be fun to see it progress and to start learning from the code. Thanks for the update @Cryza !