utterances-bot opened 6 days ago
We'd love to have you over on the r/FuckTAA subreddit.
I'm on GNU/Linux; I tried Floorp & Chromium, but I'm unable to select higher than MSAA 8x. And no, I'm not on mobile.
I'm using AMD though?
This is a really great article that obviously had an amazing amount of work put into it. I do need to point out, however, that the part about font rendering at the end is, well, hilariously outdated. The "big boys" use the Slug Library to render fonts these days (and you can see exactly who the "big boys" are on the website sluglibrary.com). Slug calculates analytic antialiasing directly from the Bézier curves in a pixel shader, and it does not need to check all of the curves for a glyph at each pixel. Performance is thus extremely good (or else all those big companies wouldn't be using it), and the output quality is unbeatable.
You also have a broken link to: https://github.com/FrostKiwi/treasurechest/blob/main/posts/analytical-anti-aliasing/FXAA-3.11.glsl
@melroy89
You also have a broken link to:
Thanks, fixed 👍
but I'm unable to select higher than MSAA 8x. [...] I'm using AMD though?
Guess that's how that particular driver handles it. ¯\_(ツ)_/¯ On my systems with Nvidia's proprietary driver it's MSAA 32x on Linux and MSAA 8x on Windows. With the integrated Intel iGPUs it's MSAA 16x everywhere.
@EricLengyel That's true! Should definitely mention it. Included it in the post as a little note. I got introduced to Slug this year through a Hacker News post and got really interested, as I'm also doing text interfaces in a Mixed Reality project. Merely mentioning text rendering derailed the blog post quite a bit, and the post was already long enough, I thought...
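For anyone curious what the general idea looks like in code, here is a rough sketch assuming quadratic Bézier outlines in a plain uniform array (all names made up). It's the naive textbook version that loops over every curve per pixel, which is exactly the per-pixel work Slug's acceleration structure avoids, so take it as an illustration of analytic coverage from the curves, not as Slug's algorithm:

```glsl
#version 300 es
// Toy sketch, NOT Slug's algorithm: per-pixel analytic coverage from quadratic
// Bézier outlines. A ray is cast from the pixel towards +x; every curve
// crossing adds or subtracts a winding contribution whose weight is the
// saturated distance of the crossing from the pixel centre. That fractional
// weight is what produces the anti-aliasing.
precision highp float;

uniform int   uCurveCount;      // number of quadratic curves in the outline
uniform vec2  uCurves[96];      // control points p0,p1,p2 per curve, in em space
uniform float uPixelsPerEm;     // scale from em units to pixels
in  vec2 vGlyphPos;             // fragment position in em space
out vec4 outColor;

// Coverage contribution of one curve, control points relative to the pixel.
float curveCoverage(vec2 p0, vec2 p1, vec2 p2)
{
    // B(t) = a*t^2 - 2*b*t + c
    vec2 a = p0 - 2.0 * p1 + p2;
    vec2 b = p0 - p1;
    vec2 c = p0;

    float t0 = -1.0, t1 = -1.0;               // "no crossing" sentinel
    if (abs(a.y) > 1e-6) {
        float d = b.y * b.y - a.y * c.y;      // quarter discriminant
        if (d < 0.0) return 0.0;              // curve never reaches the ray's line
        float s = sqrt(d);
        t0 = (b.y - s) / a.y;
        t1 = (b.y + s) / a.y;
    } else if (abs(b.y) > 1e-6) {
        t0 = c.y / (2.0 * b.y);               // degenerates to a straight segment
    }

    float cov = 0.0;
    for (int i = 0; i < 2; ++i) {
        float t = (i == 0) ? t0 : t1;
        if (t < 0.0 || t > 1.0) continue;              // crossing outside the curve
        float x  = (a.x * t - 2.0 * b.x) * t + c.x;    // where the curve crosses the ray
        float dy = a.y * t - b.y;                      // crossing direction
        cov += sign(dy) * clamp(x * uPixelsPerEm + 0.5, 0.0, 1.0);
    }
    return cov;
}

void main()
{
    float coverage = 0.0;
    for (int i = 0; i < uCurveCount; ++i) {
        // Shift the control points so the current pixel sits at the origin
        vec2 p0 = uCurves[i * 3 + 0] - vGlyphPos;
        vec2 p1 = uCurves[i * 3 + 1] - vGlyphPos;
        vec2 p2 = uCurves[i * 3 + 2] - vGlyphPos;
        coverage += curveCoverage(p0, p1, p2);
    }
    outColor = vec4(vec3(1.0), clamp(abs(coverage), 0.0, 1.0));
}
```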
Not a bad article, though the TAA bashing isn't really needed. TAA makes a reasonable tradeoff for performance by accumulating samples temporally rather than attempting to collect them all in one frame like supersampling does (as doing that is just not practical these days with how costly shading is). Nothing about TAA is inherently "blurrier", really; in fact, a perfect TAA implementation is essentially indistinguishable from supersampling (easy to prove by just leaving a static camera on a scene).
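The accumulation itself is tiny; a minimal resolve-pass sketch (not any particular engine's code, names made up) looks roughly like this:

```glsl
#version 300 es
// Minimal TAA resolve sketch: blend the current, sub-pixel-jittered frame
// with the reprojected history buffer.
precision highp float;

uniform sampler2D uCurrentFrame;   // this frame, rendered with a jittered projection
uniform sampler2D uHistory;        // accumulated result of previous frames
uniform sampler2D uMotionVectors;  // per-pixel screen-space motion in UV units
in vec2 vUv;
out vec4 outColor;

void main()
{
    vec3 current = texture(uCurrentFrame, vUv).rgb;

    // Reproject: look up where this pixel was in the previous frame
    vec2 prevUv  = vUv - texture(uMotionVectors, vUv).xy;
    vec3 history = texture(uHistory, prevUv).rgb;

    // Exponential accumulation; with a static camera and a jitter sequence that
    // walks over the pixel footprint this converges towards the supersampled result.
    float blendWeight = 0.1;   // weight given to the new frame
    outColor = vec4(mix(history, current, blendWeight), 1.0);
}
```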
Of course, in practice the tricky part of TAA is knowing when to discard historical samples, and many a bad implementation out there results in significant ghosting, which gives it a bad reputation. Such problems are actually not that hard to fix with more advanced heuristics, which is exactly what technology like DLSS does. In motion you'd be hard-pressed to actually see much ghosting with a proper implementation of such technology. Worth noting: just because you can see an artifact in a still image does not mean you can see it temporally. The human eye has limited spatial/temporal acuity, and detail beyond that threshold is literally impossible to see (hard to quantify for more complex cases like this, but freeze frames which try to point out such things are deceptive nonetheless).
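One common heuristic of this kind (just a sketch of neighbourhood clamping, not what DLSS actually does internally) pulls the reprojected history back towards the range of colours the current frame contains around the pixel, and would slot into the resolve pass above before the mix():

```glsl
// History rejection via neighbourhood clamping (sketch): clamp the reprojected
// history colour to the min/max of the current frame's 3x3 neighbourhood, so a
// stale sample that no longer matches the local image content gets pulled back
// towards it instead of ghosting.
vec3 clampHistory(sampler2D currentFrame, vec2 uv, vec2 texelSize, vec3 history)
{
    vec3 cmin = vec3( 1e9);
    vec3 cmax = vec3(-1e9);
    for (int y = -1; y <= 1; ++y)
    for (int x = -1; x <= 1; ++x)
    {
        vec3 c = texture(currentFrame, uv + vec2(x, y) * texelSize).rgb;
        cmin = min(cmin, c);
        cmax = max(cmax, c);
    }
    return clamp(history, cmin, cmax);
}
```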
Finally, using the term "blur" as a disparagement isn't really fair either. Mathematically perfect image reconstruction methods require a blur; that is just how signal processing works. Call it a kernel or filter if it makes you feel better, but a blur is always required to turn discrete samples into a continuous signal.
At the end of the day, analytical methods like this may work fine in simple cases as demonstrated (so perhaps good for UI/fonts/etc.), but they do not handle things like textures, which due to their raster nature have aliasing challenges regardless of how the geometry is rendered, or the reality of a finite number of lighting samples per pixel that need to be resolved (no amount of magic math is going to describe a complex scene's lighting solution analytically). TAA is pretty much the only modern viable approach to solving all these problems at once with minimal overhead, at the cost of some artifacts, which is why it's used.
@presentfactory
TAA bashing isn't really needed.
Yeah, it turned out a bit that way. Initially I wanted a whole section on it, but it grew too long. There are a lot of technical stages to cover, from jitter to reprojection.
do not handle things like textures
You can texture SDFs, and mip-mapping will take care of the worst shimmering, but it's true that you can't get more than one sample per pixel that way. High-frequency normal maps will indeed alias, which is something only TAA can solve adequately.
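Roughly what I mean, as a tiny sketch (names made up): the silhouette gets analytic coverage from an interpolated signed distance, while the texture on the interior relies on ordinary mip-mapping, one shaded sample per pixel either way:

```glsl
#version 300 es
// Textured SDF shape sketch: analytic edge coverage from the distance value,
// mip-mapped texture lookup for the interior.
precision highp float;

uniform sampler2D uAlbedo;   // regular mip-mapped texture applied to the shape
in vec2 vUv;                 // texture coordinate
in float vDist;              // signed distance to the shape's edge, in pixels
out vec4 outColor;

void main()
{
    vec3 color  = texture(uAlbedo, vUv).rgb;      // mip-mapping tames texture shimmer
    float alpha = clamp(0.5 - vDist, 0.0, 1.0);   // analytic edge coverage
    outColor = vec4(color, alpha);
}
```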
TAA is pretty much just the only modern viable approach
For general-purpose 3D scenes, it really is. No amount of samples will save you from dense foliage shimmering wildly, and TAA really is king here. It's just not what this post was about and I should have communicated that more clearly 👍
@presentfactory Added an addendum to address this.
GPUs have recently started exposing the barycentric coordinates of the triangles right in the fragment shader with GL_EXT_fragment_shader_barycentric! So you can get cheap and pretty anti-aliased wireframes using SDF tricks now too.
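Something along these lines, as a sketch assuming the extension is available:

```glsl
#version 450
#extension GL_EXT_fragment_shader_barycentric : require
// Wireframe sketch: gl_BaryCoordEXT gives the fragment's barycentric
// coordinates, so the smallest component is a proxy for the distance to the
// nearest triangle edge and can be fed into the usual screen-space smoothstep.
layout(location = 0) out vec4 outColor;

void main()
{
    vec3  bary     = gl_BaryCoordEXT;
    float edgeDist = min(min(bary.x, bary.y), bary.z);
    float px       = fwidth(edgeDist);                         // ~one pixel, in barycentric units
    float wire     = 1.0 - smoothstep(0.0, px * 1.5, edgeDist); // ~1.5 px wide line
    outColor = vec4(vec3(wire), 1.0);
}
```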
AAA - Analytical Anti-Aliasing
How to fix jaggies the analytical way with some juicy secrets
https://blog.frost.kiwi/analytical-anti-aliasing/