PaintLab / PixelFarm

From Vectors to (sub) Pixels, C# 2D Rendering Library

[INFO] Excerpt from Agg #15

Closed · prepare closed this issue 6 years ago

prepare commented 6 years ago

Maxim's note on C++ version:

see https://pdfium.googlesource.com/pdfium/+/master/third_party/agg23/agg_rasterizer_scanline_aa.cpp#35

... The author gratefully acknowledges the support of David Turner, Robert Wilhelm, and Werner Lemberg - the authors of the FreeType library - in producing this work. See http://www.freetype.org for details.

Initially the rendering algorithm was designed by David Turner and the other authors of the FreeType library - see the above notice. I nearly created a similar renderer, but still I was far from David's work. I completely redesigned the original code and adapted it for Anti-Grain ideas. Two functions - render_line and render_hline are the core of the algorithm - they calculate the exact coverage of each pixel cell of the polygon. I left these functions almost as is, because there's no way to improve the perfection - hats off to David and his group!

All other code is very different from the original.


Agg

from http://www.antigrain.com/doc/introduction/introduction.agdoc.html

(Copyright © 2002-2006 Maxim Shemanarev)

... Yet Another Invention of the Wheel

Anti-Grain Geometry is not a solid graphic library and it's not very easy to use. I consider AGG as a “tool to create other tools”. It means that there's no “Graphics” object or something like that; instead, AGG consists of a number of loosely coupled algorithms that can be used together or separately. All of them have well defined interfaces and an absolute minimum of implicit or explicit dependencies.


Anti-Aliasing and Subpixel Accuracy

Anti-Aliasing is a very well known technique used to improve the visual quality of images when displaying them on low resolution devices. It's based on the properties of human vision. Look at the following picture and try to guess what it means. ...

Anti-Aliasing doesn't make you see better; it basically makes your brain work better and reconstruct the missing details. The result is great. It allows us to draw much more detailed maps, for example.

...

prepare commented 6 years ago

But the point is not only in Anti-Aliasing itself. The point is that we can draw primitives with Subpixel Accuracy. It's especially important for the visual thickness of lines. First, let us see that even with a simple Bresenham line interpolator we can achieve a better result if we use Subpixel Accuracy. The following picture shows enlarged results of the simple Bresenham interpolator.

(figure: A Bresenham Line Rendered with Subpixel Accuracy)

Consider cases (2) and (3). The thin black lines are what we need to interpolate. If we use Subpixel Accuracy we will really have two different sets of pixels displayed, despite the fact that the beginnings and ends of both lines fall into the same pixels. And the lines have really different tangents, which is very important. If we use a classical Bresenham without Subpixel Accuracy, we will see result (1) in all cases. That's especially important when approximating curves with short line segments. But if we use Anti-Aliasing plus Subpixel Accuracy we can do much better. Look at the difference.

prepare commented 6 years ago

(pic 1: aliased, pixel accuracy)

(pic 2: aliased, subpixel accuracy)

(pic 3: anti-aliased)

Here all three spirals are approximated with short straight line segments. The left one is drawn using a regular integer Bresenham, where the coordinates are rounded off to pixels (you will have a similar result if you use Windows GDI MoveTo/LineTo, for example). The one in the middle uses a modified integer Bresenham with a precision of 1/256 of a pixel. And the right one uses the same 1/256 accuracy, but with Anti-Aliasing. Note that it's very important to have the possibility of real subpixel positioning of the line segments. If we use regular pixel coordinates with Anti-Aliasing, the spiral will look smooth, but still as ugly as the one on the left.

--

prepare commented 6 years ago

(pic 1: Lines Rendered with Anti-Aliasing and Subpixel Accuracy)

(pic 2: Circles Rendered with Anti-Aliasing and Subpixel Accuracy)

(pic 3: Cute Lions)

Note that the appearance of the small ones remains consistent despite the loss of details.

prepare commented 6 years ago

Below is a typical brief scheme of the AGG rendering pipeline.

(figure: typical scheme of the AGG rendering pipeline)

Please note that any component between the “Vertex Source” and “Screen Output” is not mandatory. It all depends on your particular needs. For example, you can use your own rasterizer, based on Windows API. In this case you won't need the AGG rasterizer and renderers. Or, if you need to draw only lines, you can use the AGG outline rasterizer that has certain restrictions but works faster. The number of possibilities is endless.

  • Vertex Source is some object that produces polygons or polylines as a set of consecutive 2D vertices with commands like “MoveTo”, “LineTo”. It can be a container or some other object that generates vertices on demand.

  • Coordinate conversion pipeline consists of a number of coordinate converters. It always works with vectorial data (X,Y) represented as floating point numbers (double). For example, it can contain an affine transformer, outline (stroke) generator, some marker generator (like arrowheads/arrowtails), dashed lines generator, and so on. The pipeline can have branches, and you can have any number of different pipelines. You can also write your own converter and include it in the pipeline.

  • Scanline Rasterizer converts vectorial data into a number of horizontal scanlines. The scanlines usually (but not necessarily) carry information about Anti-Aliasing as “coverage” values.

  • Renderers render scanlines, sorry for the tautology. The simplest example is solid filling. The renderer just adds a color to the scanline and writes the result into the rendering buffer. More complex renderers can produce a multi-colored result, like gradients, Gouraud shading, image transformations, patterns, and so on.

  • Rendering Buffer is a buffer in memory that will be displayed afterwards. Usually, but not necessarily, it contains pixels in a format that fits your video system. For example, 24-bit B-G-R, 32-bit B-G-R-A, or 15-bit R-G-B-555 for Windows. But in general, there are no restrictions on pixel formats or color spaces if you write your own low-level class that supports that format.

prepare commented 6 years ago

Colors, Color Spaces, and Pixel Formats

Colors in AGG appear only in renderers, that is, when you actually put some data into the rendering buffer. There's no general purpose structure or class like “color”; instead, AGG always operates with a concrete color space. There are plenty of color spaces in the world, like RGB, HSV, CMYK, etc., and all of them have certain restrictions. For example, the RGB color space is just a poor subset of the colors that a human eye can recognize. If you look at the full CIE Chromaticity Diagram, you will see that the RGB triangle is just a little part of it.

(figure: CIE Chromaticity Diagram and the RGB Gamut)

In other words, there are plenty of colors in the real world that cannot be reproduced with RGB, CMYK, HSV, etc. Any color space except the one existing in Nature is restrictive. Thus, it was decided not to introduce an object like “color” in order not to restrict the possibilities in advance. Instead, there are objects that operate with concrete color spaces. Currently there are agg::rgba and agg::rgba8, which operate with the most popular RGB color space (strictly speaking, RGB plus Alpha). The RGB color space is used with different pixel formats, like 24-bit RGB or 32-bit RGBA with different orders of color components. But the common property of all of them is that they are essentially RGB. Although AGG doesn't explicitly support any other color spaces, there is at least a potential possibility of adding them. It means that all class and function templates that depend on the “color” type are parameterized with a “ColorT” argument.

prepare commented 6 years ago

Coordinate Units

Basically, AGG operates with the coordinates of the output device. On your screen there are pixels. But unlike many other libraries and APIs, AGG supports Subpixel Accuracy from the start. It means that the coordinates are represented as doubles, where fractional values actually take effect. AGG doesn't have an embedded conversion mechanism from world to screen coordinates, in order not to restrict your freedom. It's very important where and when you do that conversion, so different applications can require different approaches. AGG just provides you a transformer of that kind, namely one that can convert your own viewport to the device one. And it's your responsibility to include it in the proper place of the pipeline. You can also write your own very simple class that will allow you to operate with millimeters, inches, or any other physical units.

--

Internally, the rasterizers use integer coordinates in the 24.8 format, that is, 24 bits for the integer part and 8 bits for the fractional one. In other words, all internal coordinates are multiplied by 256. If you intend to use AGG in some embedded system with inefficient floating point processing, you can still use the rasterizers through their integer interfaces. However, you won't be able to use the floating point coordinate pipelines in this case.

prepare commented 6 years ago

As was said before, AGG provides many different levels of functionality, so you can use it in many different ways. For example, you may want to use the AGG rasterizers without the scanline renderers. But for the sake of consistency and gradual progression we will start from the very beginning and describe all the functionality with examples. This approach might be slower than some “Quick Start”, but it will allow you to understand the concepts of the design. It is really useful because you will know how to replace certain classes and algorithms with your own, or how to extend the library. In particular, the scanline renderers are platform independent, but not the fastest. You may want to write your own optimized ones, oriented to some hardware architecture or instruction set, like SSE2.

prepare commented 6 years ago

Gamma Correction: Using Gamma Correction in Anti-Aliasing

_(from http://antigrain.com/research/gamma_correction/index.html)_

... using a simple linear dependence Pixel Coverage → Brightness is not the best and should be corrected. In color management it's called Gamma Correction. For gamma correction I use a simple array of 256 values that give the desired value of brightness depending on the pixel coverage. If all the values in the array are equal to their index, i.e., 0,1,2,3,4,… it means that there's no gamma correction. The array can be calculated using any approach, but the simplest method is to use a B-Spline curve with two reference points and four coefficients (kx1, ky1, kx2, ky2) that determine its shape. ...

... We actually can obtain a much better result of certain thickness and brightness, but it cannot be used in the general case. ...

... Besides, the gamma correction strongly depends on the content of the image. The values that are good enough for rendering ellipses like shown above may give a very bad result when rendering small text glyphs ...

prepare commented 5 years ago

Agg's Interpolation with Bezier Curves

A very simple method of smoothing polygons by Maxim Shemanarev

_(http://www.antigrain.com/research/bezier_interpolation/index.html#PAGE_BEZIER_INTERPOLATION)_

... Finally, I found a very simple method that does not require any complicated math. First, we take the polygon and calculate the middle points Ai of its edges.

(figure: bezier_interpolation_s1)


Here we have line segments Ci that connect two points Ai of the adjacent segments. Then, we should calculate points Bi as shown in this picture.

(figure: bezier_interpolation_s2)


The third step is the final one. We simply move the line segments Ci in such a way that their points Bi coincide with the respective vertices. That's it: we have calculated the control points for our Bezier curve, and the result looks good.

(figure: bezier_interpolation_s3)

One little improvement: since we have a straight line that determines the position of our control points, we can move them along it as we want, changing the shape of the resulting curve. I used a simple coefficient K that moves the points along the line relative to the initial distance between the vertices and the control points. The closer the control points are to the vertices, the sharper the resulting figure will be.

... ...

Below is our implementation ...

(screenshot: 2018-12-31_15-28-50)


(this implementation https://github.com/PaintLab/PixelFarm/commit/2eba32f52fa7d1fa39ba7c8f7a5d735b80a32152 is not optimized yet)

(screenshot: 2018-12-31_20-04-09)