kitty does not support sub-pixel rendering. Supporting it would require both a performance hit (all cached glyph images would become three times larger, and alpha blending would require three separate per-channel calculations) and a fair bit of code complication. Given that on newer high-DPI displays there is not much point in sub-pixel rendering anyway, I am not interested in implementing it.
While I agree that it's not needed on newer hidpi displays, I would argue that most people still use their "older" monitors, which are probably not hidpi. Would you be opposed to sub-pixel rendering being an optional feature, chosen at either run time or compile time?
In order to do sub-pixel rendering one would have to render to an offscreen buffer first (in order to do correct alpha blending), which has significant performance and code-complexity implications. As such I'm not really in favor of it. But you are welcome to send a PR and I will review it, and might merge it if it has no negative performance implications when disabled and reasonable code complication overhead.
As the OP, I have to say I agree with @kovidgoyal. Sub-pixel anti-aliasing would substantially lower Kitty's rendering performance. Also, if you have a low-res screen, there's less reason to use an OpenGL-based terminal emulator. Traditional rendering techniques were and still are good enough.
In fact, on low-res screens I still prefer uRxvt with my favorite hand-drawn bitmap font, GohuFont 14px, no anti-aliasing at all. It looks so much sharper than any kind of anti-aliasing. (If you have a mid-res screen and 14px is too tiny, I recommend the fixed 9x18 font from the standard X11 font package.)
I would agree with you but I think there are a lot of people out there who have a hidpi laptop but connect the laptop to a non-hidpi monitor. I'd like to be able to use the same terminal seamlessly wherever I am without having to switch.
But of course, if the performance hit is as drastic as you say, I can see why someone wouldn't want it to be added.
Pity. This is a great terminal emulator, but fonts look really ugly on my 15-inch 1920x1080 display.
Yes, it's a pity. Looks like a very nice project, but the font rendering is awful compared to urxvt on my setup (Arch, dual Dell 27" 2560x1440, Deja Vu Sans Mono at 11pt). But it's hard to argue with the reasons outlined by @kovidgoyal. Will keep an eye on the project, keep up the great work!
Maybe it’s years of using a Mac, but I love the look of “thicker” fonts. I, too, am struggling to reconcile the awesomeness of Kitty with the ugliness of my fonts (Pragmata Pro). Every time I think I’ve come to grips with it, I accidentally open up iTerm and realize that I’m kidding myself - I just prefer the sub-pixel aliasing. I must be in the minority though, as I never understood why iTerm and other programs (VS Code) give options to disable it. Nonetheless, thanks for a fantastic terminal! Your level of support is outstanding!
I can totally appreciate that; once you get used to a certain look, it is very hard to adapt to a change. I have this issue all the time when I am forced to work temporarily on third party computers, I just can't get used to how the fonts look :)
As I said, I am willing to merge a patch that has no overhead when disabled and reasonable levels of code complication.
I attempted this by tweaking the render_glyphs function: https://github.com/kovidgoyal/kitty/blob/master/kitty/core_text.m#L364-L365
I tried various combinations of function calls:
CGContextSetShouldSmoothFonts
CGContextSetAllowsFontSmoothing
CGContextSetAllowsAntialiasing
CGContextSetAllowsFontSubpixelQuantization
CGContextSetAllowsFontSubpixelPositioning
CGContextSetShouldSubpixelQuantizeFonts
CGContextSetShouldAntialias
However, disabling and enabling a variety of these CoreGraphics APIs had no effect.
I did more research into how terminal emulators like iTerm2 and Alacritty did "thin stroke" rendering. Both of those emulators have options to enable and disable thin strokes. It turns out that there is a semi-private (?) smoothing style API:
extern void CGContextSetFontSmoothingStyle(CGContextRef, int);
extern int CGContextGetFontSmoothingStyle(CGContextRef);
This API is also used in WebKit, which is how people found out about it.
Passing 16 (2 << 3) into CGContextSetFontSmoothingStyle would enable thin strokes. But this isn't useful to us, as the strokes are already thin.
I tried some other numbers to pass into the function, but none of them appeared to revert to the thick strokes.
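For anyone who wants to poke at this themselves, here is roughly how those calls fit together. This is just a sketch built on the private declarations above; the wrapper name is mine and the actual drawing call is elided:

#include <CoreGraphics/CoreGraphics.h>

// Semi-private API, declared by hand (not in any public header).
extern void CGContextSetFontSmoothingStyle(CGContextRef, int);
extern int CGContextGetFontSmoothingStyle(CGContextRef);

// Hypothetical wrapper: draw with "thin strokes" (16 == 2 << 3), then
// restore whatever smoothing style the context had before.
static void draw_with_thin_strokes(CGContextRef ctx) {
    int saved = CGContextGetFontSmoothingStyle(ctx);
    CGContextSetFontSmoothingStyle(ctx, 16);
    // ... CTFontDrawGlyphs(font, glyphs, positions, num_glyphs, ctx) ...
    CGContextSetFontSmoothingStyle(ctx, saved);
}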
Kitty's font rendering might be forced to thin strokes because of how its views are backed, or because of how Kitty works at its core. It doesn't seem that tweaking the CoreGraphics context helps, but maybe I just didn't try hard enough...
I hope this helps!
Yeah, tweaking CoreGraphics parameters will have no effect. Unlike most (all?) other terminals, kitty achieves its performance (partly) by storing alpha masks of each character on the GPU. So every character is rendered only once. A render in kitty just means sending the indices for each alpha mask to the GPU. This alpha mask is then blended with the foreground color, background color, and any negative z-index graphics and drawn to screen, all on the GPU.
So to enable sub-pixel rendering, you have to:
1) Generate three-channel alpha masks from the font drawing library (CoreText or FreeType)
2) Render the background and any negative z-index graphics to an offscreen buffer (an FBO in OpenGL parlance)
3) Blend the three channels onto that buffer, with gamma correction
4) Render positive z-index graphics on top
5) Done
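To make step 3 concrete, here is a minimal CPU-side sketch of the gamma-corrected per-channel blend. The helper names are mine, and the plain 2.2 power curve is a simplification of the real sRGB transfer function; in kitty this math would live in a fragment shader sampling the FBO:

#include <math.h>
#include <stdint.h>

// Approximate sRGB <-> linear conversion with a 2.2 gamma curve.
static float to_linear(uint8_t c) { return powf(c / 255.0f, 2.2f); }
static uint8_t to_srgb(float c) { return (uint8_t)(powf(c, 1.0f / 2.2f) * 255.0f + 0.5f); }

// Blend one pixel: mask[3] is the per-channel coverage from the rasterizer,
// fg is the text color, bg is the already-composited background behind it.
static void blend_subpixel(const uint8_t mask[3], const uint8_t fg[3],
                           const uint8_t bg[3], uint8_t out[3]) {
    for (int i = 0; i < 3; i++) {
        float a = mask[i] / 255.0f;  // each channel has its own alpha
        float f = to_linear(fg[i]), b = to_linear(bg[i]);
        out[i] = to_srgb(f * a + b * (1.0f - a));  // blend in linear space
    }
}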
Just FYI, apparently Apple agrees with me: they are removing sub-pixel antialiasing completely in 10.14 https://infinitediaries.net/removed-in-macos-10-14-mojave/
@karambaq Shouldn't that comment be directed at Apple, not me? In any case, just to note, the procedure I described above for adding sub-pixel rendering to kitty would work regardless of any changes Apple makes. So if you love sub-pixel rendering so much, feel free to implement it.
Probably not the right place to ask, but I'm not sure where to ask. Do you guys have any suggestions what to use instead for external monitors?
I've been playing with kitty, but now that I'm looking at it on my 27" monitor, I just don't feel like I could get used to it. It looks so pretty on the retina display, but so ugly on the monitor :(
@darthdeus looks great on my high dpi monitor.
@darthdeus See my comment above (6th from the top)
I have a patch for Mac that renders glyphs using subpixel antialiasing on a black background in 24-bit color, then averages the color values to create an alpha mask. It looks pretty good, especially on dark colorschemes, but it's not real subpixel antialiasing, and I'm not sure if I know enough to implement three-channel blending on the GPU.
diff --git a/kitty/core_text.m b/kitty/core_text.m
index 6a369900..e752b9fc 100644
--- a/kitty/core_text.m
+++ b/kitty/core_text.m
@@ -356,9 +356,9 @@ render_color_glyph(CTFontRef font, uint8_t *buf, int glyph_id, unsigned int widt
static inline void
ensure_render_space(size_t width, size_t height) {
- if (render_buf_sz >= width * height) return;
+ if (render_buf_sz >= width * height * 4) return;
free(render_buf);
- render_buf_sz = width * height;
+ render_buf_sz = width * height * 4;
render_buf = malloc(render_buf_sz);
if (render_buf == NULL) fatal("Out of memory");
}
@@ -366,16 +366,23 @@ ensure_render_space(size_t width, size_t height) {
static inline void
render_glyphs(CTFontRef font, unsigned int width, unsigned int height, unsigned int baseline, unsigned int num_glyphs) {
memset(render_buf, 0, render_buf_sz);
- CGColorSpaceRef gray_color_space = CGColorSpaceCreateDeviceGray();
- CGContextRef render_ctx = CGBitmapContextCreate(render_buf, width, height, 8, width, gray_color_space, (kCGBitmapAlphaInfoMask & kCGImageAlphaNone));
- if (render_ctx == NULL || gray_color_space == NULL) fatal("Out of memory");
+ CGColorSpaceRef color_space = CGColorSpaceCreateDeviceRGB();
+ CGContextRef render_ctx = CGBitmapContextCreate(render_buf, width, height, 8, 4 * width, color_space, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
+ if (render_ctx == NULL || color_space == NULL) fatal("Out of memory");
+ // render against a black background (rather than a transparent background)
+ CGContextSetRGBFillColor(render_ctx, 0, 0, 0, 1);
+ CGContextFillRect(render_ctx, CGRectMake(0, 0, width, height));
CGContextSetShouldAntialias(render_ctx, true);
CGContextSetShouldSmoothFonts(render_ctx, true);
- CGContextSetGrayFillColor(render_ctx, 1, 1); // white glyphs
+ CGContextSetRGBFillColor(render_ctx, 1, 1, 1, 1); // white glyphs
CGContextSetTextDrawingMode(render_ctx, kCGTextFill);
CGContextSetTextMatrix(render_ctx, CGAffineTransformIdentity);
CGContextSetTextPosition(render_ctx, 0, height - baseline);
CTFontDrawGlyphs(font, glyphs, positions, num_glyphs, render_ctx);
+ // convert to greyscale
+ for (size_t i = 0, j = 0; i < width*height; i++, j += 4) {
+ render_buf[i] = (render_buf[j] + render_buf[j+1] + render_buf[j+2]) / 3;
+ }
}
Investigating further, it seems almost impossible to have CoreText render a glyph with three alpha channels. Has anyone found a way to do this? @kovidgoyal
(interestingly, the top result when googling "coretext" three alpha channels is this github issue)
No idea, sorry. I'm not a CoreText expert. I suspect that it is deliberate, since to do proper sub-pixel rendering you have to know the background color. I assume Apple didn't bother with making such an API since, if you know the background color already, it's easier to just render both in one pass.
@kovidgoyal Would you merge this patch under an option called something like fake_subpixel_antialiasing?
Doesn't seem workable to me.
1) It is not sub-pixel antialiasing, since subpixels are not actually being individually colored
2) I suspect it won't look good if your background color is not similar to the hard-coded one. Think of full-screen applications that change background colors, or use menus/text boxes that are often light while the main background is dark. Given the architecture of kitty, there is no way to adjust characters for different background colors on the CPU, only on the GPU, and that requires using alpha masks
3) It would probably make text look worse on retina displays
What you can do is use FreeType instead of CoreText to render. I know FreeType supports generating sub-pixel alpha masks. Of course, chances are you (by that I mean the metaphorical you) won't like the way text rendered with FreeType looks.
Hmm, maybe I could render the glyph in black on a red, green, and blue background, then extract and invert the red/green/blue channel to use as the alpha channel? Not 100% sure that would give the correct result.
Or maybe render the glyph in white over cyan/magenta/yellow, then just extract red/green/blue without inverting
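To illustrate the second variant (white glyphs over cyan/magenta/yellow), the extraction could look something like this. This is untested guesswork on my part, assuming three renders per glyph and a plain RGBA byte layout:

#include <stddef.h>
#include <stdint.h>

// Hypothetical: build a three-channel mask from three separate renders of
// the same glyph, each white over a different opaque background. A cyan
// (0,255,255) background zeroes the red channel, so whatever red appears
// is coverage from the white glyph; magenta gives green, yellow gives blue.
static void extract_subpixel_mask(size_t npix, const uint8_t *on_cyan,
                                  const uint8_t *on_magenta,
                                  const uint8_t *on_yellow,
                                  uint8_t *mask /* npix * 3 bytes */) {
    for (size_t i = 0; i < npix; i++) {
        mask[i * 3 + 0] = on_cyan[i * 4 + 0];     // red coverage
        mask[i * 3 + 1] = on_magenta[i * 4 + 1];  // green coverage
        mask[i * 3 + 2] = on_yellow[i * 4 + 2];   // blue coverage
    }
}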
Dunno, there are a lot of moving parts here: You'd have to extract the alpha masks using some sort of hack, then you'd have to change the OpenGL code to render to a framebuffer and change the shaders to do alpha blending of the framebuffer color and the foreground color using the alpha masks, with proper gamma correction. It will be very hard to say at which point in this pipeline things are going wrong. There's a reason Apple is dropping sub-pixel AA :)
That said, feel free to try; as I said before, I am willing to merge a patch that does all that, as long as it does not impact the no-subpixel-AA use case.
It's also possible to do what the other GPU-accelerated terminals do, and key the glyph cache with the foreground and background color in addition to the glyph. That's probably a whole lot easier than implementing the whole subpixel antialiasing pipeline.
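Concretely, the cache key would have to grow to something like this (a hypothetical struct for illustration, not code from any of those terminals):

#include <stdint.h>

// Every distinct (glyph, fg, bg) combination becomes a separate cache
// entry, instead of one alpha mask per glyph as kitty stores today.
typedef struct {
    uint32_t glyph_id;  // glyph index in the font
    uint32_t fg_rgb;    // foreground color baked into the raster
    uint32_t bg_rgb;    // background color it was blended against
} GlyphCacheKey;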
The main reason I want subpixel AA is because it effectively increases the weight of the font by about 1 pixel, and that makes a huge difference in legibility on the really small font sizes I like to use. The hack with rendering on a black background at least accomplishes this, so I'll likely end up using it as my daily driver.
That would be a huge increase in memory consumption for the glyph cache if you use colored text. It would probably increase cache size by an order of magnitude for anyone using applications such as vim, or colored ls, etc. There is a reason kitty is so much less resource intensive than other GPU-accelerated terminals. Not to mention that it won't work with the graphics support in kitty at all (remember, text in kitty can be overlaid over graphics).
If you like thicker fonts, why not just use a font with a thicker stem size? It seems backwards to me to get the font rendering subsystem to jump through hoops just to thicken your fonts. Different typefaces have different design characteristics and IMO it should be the job of the rendering system to render them as accurately as possible. Otherwise font designers will never be able to rely on rendering systems to get rendering right.
> If you like thicker fonts, why not just use a font with a thicker stem size?
Ligatures :)
Ah well, if you can't find a font that is both thicker and has ligatures, feel free to hack away at the rendering system for your personal use, but I don't think such hacks are suitable for inclusion into kitty.
@kovidgoyal I totally understand your position on the matter, but could you please keep this issue open so someone else can implement this as an optional feature (off by default)?
I tested kitty and it is such a great project, and I really would like to use it, but the lack of sub-pixel anti-aliasing is a deal breaker for me on 1080p, and I believe for many people too, because the font rendering quality is just too low.
Perhaps another option is for you to consider adding a section to the readme explaining your position (which is totally understandable) but encouraging anyone to code it as an optional flag, in a way that doesn't change anything (performance/resource-wise) when it's off? Surely higher DPI is here, but I think many people will still be stuck with 1080p for quite some time.
Sorry, I don't keep issues open that have no well-defined path to implementation and that I don't think are a good idea. As I have already stated, I am willing to merge a patch that has no ill effects for the non-subpixel use case, but that does not mean I actually think such a patch is a good idea.
I understand, thank you for your consideration.
The insane hack used to get subpixel antialiasing in the Metal renderer in iTerm 2 is documented here: https://docs.google.com/document/d/1vfBq6vg409Zky-IQ7ne-Yy7olPtVCl0dq3PG20E8KDs/edit#!
An interesting read; it just goes to show how much cost is involved in sub-pixel rendering. Personally, I would have done it differently, using an FBO to render into so that the GPU can sample the background colors and do the alpha blending itself (the GPU can map the FBO as a texture in subsequent passes). It's a lot simpler to implement (and I suspect, though I cannot prove, it would be pretty performant as well on modern GPUs, since there is no back and forth between CPU and GPU to render). But that assumes you can get per-channel alpha masks like FreeType gives you. And there may well be unanticipated problems that I would have run into along the way.
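For what it's worth, the FBO setup I have in mind would look roughly like this (a sketch assuming a current OpenGL 3.3 context obtained through a loader such as glad; the helper name and the pass structure in the comments are mine):

#include <glad/glad.h>

// Create an offscreen color buffer that the glyph pass can later sample.
static GLuint create_background_fbo(int width, int height, GLuint *out_tex) {
    GLuint fbo, tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    // Pass 1: draw cell backgrounds and negative z-index graphics into this
    // FBO. Pass 2: bind tex as a sampler and draw the glyph quads; the
    // fragment shader reads the true background color per pixel and does
    // the three-channel blend. Pass 3: draw positive z-index graphics to
    // the default framebuffer.
    *out_tex = tex;
    return fbo;
}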
@gnachman just pinging you in case you are interested.
@kovidgoyal I don't quite follow your proposal, but in my algorithm there is no "back and forth between CPU and GPU" after the glyph texture and color lookup tables are produced. Apple does not give you per-channel alpha masks, which is why this was such a silly mess.
I was not suggesting there was back and forth in your technique, I was just pointing out that there wasn't any in mine, either. Yeah, CoreText not creating per-channel masks is a big problem on macOS.
Another voice... I love kitty on my MacBook Pro laptop monitor, but when I move a window onto my external 27" display, the fonts look ragged and hard to read.
I'll follow the project in the meantime and hope for a solution.
Good luck!
I've figured out how to use kCGTextFillStroke to unconditionally add a bit of weight to the font, which is a good enough workaround for me. Here's the patch, in case anyone wants to use it. (Also includes a fix for leaked graphics context and color space.)
diff --git a/kitty/core_text.m b/kitty/core_text.m
index 6a369900..ea119560 100644
--- a/kitty/core_text.m
+++ b/kitty/core_text.m
@@ -372,10 +372,14 @@ render_glyphs(CTFontRef font, unsigned int width, unsigned int height, unsigned
CGContextSetShouldAntialias(render_ctx, true);
CGContextSetShouldSmoothFonts(render_ctx, true);
CGContextSetGrayFillColor(render_ctx, 1, 1); // white glyphs
- CGContextSetTextDrawingMode(render_ctx, kCGTextFill);
+ CGContextSetGrayStrokeColor(render_ctx, 1, 1);
+ CGContextSetLineWidth(render_ctx, 0.75);
+ CGContextSetTextDrawingMode(render_ctx, kCGTextFillStroke);
CGContextSetTextMatrix(render_ctx, CGAffineTransformIdentity);
CGContextSetTextPosition(render_ctx, 0, height - baseline);
CTFontDrawGlyphs(font, glyphs, positions, num_glyphs, render_ctx);
+ CGContextRelease(render_ctx);
+ CGColorSpaceRelease(gray_color_space);
}
@tbodt that patch I'd be willing to merge under an option named macos_thicken_fonts or similar.
As for me, Kitty is the best terminal. There are a lot of nice features which I need, all in one place. But it seems the only way to not hurt my eyes with Kitty is to use a 4K display? I'm using 2× 27'' 2K monitors and tried some bitmap fonts too, but my eyes still aren't pleased with it.
I spent the day comparing Kitty to Gnome Terminal, and Kitty wins in every way apart from the single most important thing, the readability of the text. Without sub-pixel rendering on a 1080p display the fonts look blurry in comparison. I would buy a 4k monitor, but my graphics card doesn't support 4k, so back to Gnome Terminal until this is fixed.
Use macos_thicken_font 0.75; now it's more readable on a 24" 1080p display.
Ref: https://sw.kovidgoyal.net/kitty/conf/#opt-kitty.macos_thicken_font
> Without sub-pixel rendering on a 1080p display the fonts look blurry in comparison.
wezterm does sub-pixel rendering: https://wezfurlong.org/wezterm/config/lua/config/freetype_load_target.html ... and I think alacritty does, too.
I just tried out Kitty again on Linux after already using it for a while on Mac, where it looks great on the HiDPI display. Unfortunately, on Linux on a 1080p display, I can't help but notice the blurry fonts.
If someone were to make a patch to support subpixel rendering, does it have a chance of getting merged? Imho subpixel rendering is not as complicated as it is made out to be. For example, I had a brief look at the iTerm2 "insane hack" linked in this thread and I have no idea why they did it in such an overcomplicated way. I've done subpixel rendering with alpha blending before and my technique was much simpler and should work well in a GPU-based renderer. It's not really that different from grayscale antialiasing.
All you need is a mask with three separate channels. Then you multiply that mask with the text color and you're basically done. You can easily obtain this mask by rendering a white glyph on a black background. There is only one caveat: Subpixel antialiasing has to be done in a linear colorspace. But the glyphs you get from the font renderer are typically in sRGB colorspace. What this means is that if you use the mask to render text of a different color (i.e. not white), then it's not going to look right, e.g. dark text will look too bold. But you can fix this by converting the mask to a linear colorspace first, doing the blending, and then converting the result back to sRGB at the end. That's probably not a big performance hit if you do it on the GPU.
Another option that is well suited for when you can't do the expensive non-linear colorspace conversion (e.g. when rendering on the CPU) is to use two masks instead of one and interpolate between them: First, you render a white on black glyph (= mask 1). Then you render a black on white glyph and invert the colors (= mask 2), so you get again something that's white on black, but different. You only have to render those masks once for each character and can cache them afterwards. When it comes time to render the actual text, you compute the luminosity of the text color and linearly interpolate between the two masks based on that value. So, for example, if the text color was white, you would use only mask 1. If the color was black, you'd only use mask 2. If the text color is 50% gray, then you use 50% of mask 1 and 50% of mask 2 and so on. This is not mathematically accurate, but in my experience it works extremely well. I have tried this with various foreground and background colors and cannot tell the difference between the approximation and the reference rendering.
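A per-pixel sketch of that interpolation, under my reading of the description (the names and the Rec. 601 luma weights are my choices):

#include <stdint.h>

// mask_wb: white-on-black render; mask_bw: black-on-white render, inverted.
// Interpolate between the two masks by the luminance of the text color.
static uint8_t approx_mask(uint8_t mask_wb, uint8_t mask_bw,
                           uint8_t fg_r, uint8_t fg_g, uint8_t fg_b) {
    float lum = (0.299f * fg_r + 0.587f * fg_g + 0.114f * fg_b) / 255.0f;
    return (uint8_t)(mask_wb * lum + mask_bw * (1.0f - lum) + 0.5f);
}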
Would it be difficult to implement one of those approaches in Kitty?
Except, as noted above, CoreText does not actually give you a mask with three channels. And that's not even going into the issues you will have doing sub-pixel alpha blending on non-transparent backgrounds. See #1604
Apple abandoned subpixel antialiasing in modern versions of their OS, there is probably nothing that can be done about that and I wouldn't bother. But for other operating systems it would still be useful. Drawing onto transparent backgrounds is always a problem with subpixel rendering. Most implementations just give up at that point and fall back to grayscale antialiasing. Is that the only problem with that MR?
No idea, the OP never responded, so I never actually reviewed it. And that will need refactoring after #5423, so I suggest you wait till that is merged.
Kitty is unusable on a 1080p screen. I use Foot; it does subpixel rendering and the performance is perfect, just like Kitty. Anyone who has a better screen than me would also have a GPU way better than mine (Intel UHD 620), so it's better to just use Foot and see beautiful text again.
For me personally, this is not limited to 1080p. I'm working at 1440p (at 27"), and it still makes a pretty huge difference in terms of font legibility. Considering that human eyesight varies a lot across individuals, I guess it is expected that some users would happily trade off some rendering performance for better readability.
I'm not sure if there is a problem like "slow terminal rendering". At least I've never encountered that. On the other side, good font readability (keeping font size relatively small) is crucial. Less scrolling is always better.
I'm having a strange bug. I have configured my new system (Arch Linux) to use sub-pixel anti-aliasing in my ~/.config/fontconfig/fonts.conf. Every app I have tried so far follows those rules (Firefox, Chromium, Leafpad...) except Kitty, which uses a default grey anti-aliasing, no matter what I specify.
Using strace I can see that it opens the same FontConfig library as the other programs (/usr/lib/libfontconfig.so.1) and that it sources my local config file correctly. But then it seems to just ignore it. Or maybe it turns the glyphs monochromatic after the font has been rendered? I dunno. I'm using the Sway window manager, which is Wayland-based. Could that be it?