Introduction
I wanted a graphics library that was faster and better than what I had found for various IoT devices. The most popular is probably Adafruit_GFX, but it's not optimal. It's very minimalistic, not very optimized, and doesn't have a fully decoupled driver interface. Also, being tied to the Arduino framework, it can't take advantage of platform specific features that increase performance, like being able to switch between interrupt and polling based SPI transactions on the ESP32.
GFX on the other hand, isn't tied to anything. It can draw anywhere, on any platform. It's basically standard C++, and things like line drawing and font drawing algorithms. Without a driver, it can only draw to in memory bitmaps, but once you add a driver to the mix, you can draw directly onto displays the same way you do to bitmaps.
Update: Some minor bugfixes, SPI drivers refactored to use a common base, more drivers added, and one-click configuration for generic ESP32 boards is now available.
Update 2: Included support for the LilyGo TTGO board, as well as the green tab 128x128 1.44" ST7735 display (other green tab models may work too, but they have not been tested).
Update 3: Added a transparent_color argument to draw::bitmap<>() so you can do sprites.
Update 4: GFX draw::rectangle<>() bugfix, and added an experimental MAX7219 driver.
Update 5: Bug fixes with bitmap drawing
Update 6: Fixed some of the demos that were failing the build
Update 7: Added alpha blending support!
Update 8: Some API changes, added large_bitmap<> support, and edged a little closer to adaptive indexed color support. I also used the large bitmap for the frame buffer in the demos, and changed that code to be easier to understand at the cost of raw pixel throughput.
Update 9: Added polygon and path support, and cleaned up the API here and there a bit
Update 10: Fixed several build errors with the last update. Mea culpa.
Update 11: Added bilinear and bicubic resampling as draw::bitmap<>() resizing options, and fixed the nearest neighbor drawing. Also improved the performance of drawing during a resize (though bicubic could stand to be optimized to use ints instead of floats). Added image dimensions to jpeg_image::load()'s callback.
Update 12: Ease of use update. I've compiled all of the includes into a single includable header, and I've added draw::image<>(), which deals with the progressive loading so you don't need to do it yourself.
Update 13: Service release. Certain draw operations between certain draw targets would fail to compile.
Update 14: Added palette/CLUT support! (still a baby, not quite complete but I'll mature it as I go)
Update 15: Service release. Fixed a large_bitmap<> out-of-bounds crashing issue.
Update 16: Added Arduino framework support and several drivers. I have not tested the Arduino framework support with anything other than an ESP32, and it will not necessarily compile on all Arduinos.
Update 17: Added support for two e-ink/e-paper displays: the DEP0290B (and the associated LilyGo T5 2.2 board) as well as the GDEH0154Z90 (WaveShare 1.54 inch 3-color black/white/red display). The former only works on the Arduino Framework for now, until I hunt down the issue under the ESP-IDF that prevents it from working.
Update 18: Added dithering support for e-paper displays
Update 19: Added TrueType font support!
Update 20: Service release - fixed a stream.hpp bug and updated platformio.ini to build for the newer ESP-IDF
Update 21: A bugfix, and the addition of the viewport<> template class that allows for rotation and offsetting of drawing operations.
Update 22: Added performance improvements to viewport<> and changed its API slightly. Updated the RA8875 driver with performance improvements, hardware scrolling, and touch support.
Update 23: Restructure of source files, plus some fixes with compile time errors with certain combinations of draw targets and draw operations. This is no longer a header only library because I needed to expand its targeting past IoT devices, and be used in more build situations/environments. The header only library only worked without conflicts under certain build environments. Furthermore I've added TFT_eSPI bindings/drivers for superior performance and additional device support.
Update 24: Added Windows DirectX support for rapid prototyping. Fixed compile errors under certain environments.
Update 25: Added a new driver structure for Arduino drivers with a decoupled bus framework, better SPI performance, and 8-bit parallel support. It is currently used by the ili9341 and st7789 drivers until I can test the others, which I have to buy again because I gave them away.
Update 26: All LCD/TFT/OLED drivers except the RA8875 use the new driver framework. For information on these see here.
You'll need Visual Studio Code with the PlatformIO extension installed. You'll also need an ESP32 with a connected ILI9341 LCD display, an SSD1306 display, or another display depending on the demo.
I recommend the Espressif ESP-WROVER-KIT development board which has an integrated ILI9341 display and several other pre-wired peripherals, plus an integrated debugger and a superior USB to serial bridge with faster upload speeds. They can be harder to find than a standard ESP32 devboard, but I found them at JAMECO and Mouser for about $40 USD. They're well worth the investment if you do ESP32 development. The integrated debugger, though very slow compared to a PC, is faster than you can get with an external JTAG probe attached to a standard WROVER devboard.
Most of you, however, will be using one or more of the generic ESP32 configurations. First decide which framework you want to use - ESP-IDF or Arduino. At the bottom of the VS Code window, in the blue bar, there is a configuration switcher. It starts out set to Default, but you can change it by clicking on it. A list of configurations will drop down from the top of the screen. From there, choose the generic ESP32 setup that matches the display attached to your board, and with it, the framework you want to use.
In order to wire all this up, refer to wiring_guide.txt, which has display wirings for SPI and I2C displays. Keep in mind some display vendors name their pins with non-standard names. For example, on some displays MOSI might be labeled as DIN or A0. You may have to do some googling to find out the particulars for your device.
Before you can run it, you must Upload Filesystem Image under the Platform IO sidebar - Tasks.
Note: The PlatformIO IDE is kind of cantankerous sometimes. The first time you open the project, you'll probably need to go to the PlatformIO icon on the left side - it looks like an alien. Click it to open the sidebar and look under Quick Access|Miscellaneous for PlatformIO Core CLI. Click it, and then when you get a prompt, type pio run to force it to download necessary components and build. You shouldn't need to do this again, unless you start getting errors while trying to build. Also, for some reason, whenever you switch configurations you have to refresh (the little circle-arrow next to "PROJECT TASKS") before it will take.
The demos are not all the same, despite GFX's capabilities being pretty much the same for each display. The reason is that there are no asynchronous operations for I2C devices, and some of the displays simply aren't big enough to show the JPEGs, or are monochrome like the SSD1306, so the JPEG would look a mess.
The following configurations are currently supported. Select the one that you want to use before building:
The MAX7219 CS pin is 15, not 5. The driver is experimental; multiple rows of segments do not work, but multiple segments in a single row do.
TFT_eSPI by Bodmer is probably the fastest TFT driver for the Arduino framework. It really is a great offering, but it works a lot differently than GFX, and has different features. With the gfx_tft_espi driver you can use GFX with TFT_eSPI and enjoy higher framerates and even more supported displays, while taking advantage of GFX features like TrueType and flexible pixel formats.
I do not provide Bodmer's library with this distribution, for several reasons, and I don't recommend using it via lib_deps or fetching it using PlatformIO's automated installer. Download it from Bodmer's GitHub repository here and put it under the libs folder in the gfx_demo project. Switch to the arduino-TFT_eSPI configuration. As long as this library is in that folder, the other projects won't build. Unfortunately, I haven't found a way to work around this with PlatformIO without modifying TFT_eSPI itself, which is a non-starter.
Here's the process:
First go to PlatformIO/Quick Access/Miscellaneous/PlatformIO Core CLI, open a new console and type:
pio platform install "windows_x86"
Go to your project and open /src/windows/directx_demo.hpp. Once there, change #define FONT_PATH "Maziro.ttf" so that the path matches the full path to that font file, which is under the /fonts folder.
Next, this is a little tricky, due to an inconsistency between Microsoft's C++ compiler and GCC. You must patch a system header, or your programs will crash. The patch won't hurt other programs; in fact, other programs will still crash on the patched header if compiled with GCC. This is because you have to define WIDL_EXPLICIT_AGGREGATE_RETURNS in order for the patched code to take effect.
Now you need to build the Windows configuration to ensure PlatformIO has all of the files it needs.
Next, you need to locate PlatformIO's copy of the system header d2d1.h. You can do so by opening windows/drivers/dxd2d.hpp, finding #include <d2d1.h>, right clicking on the filename, and selecting "Go to Definition." Once there, right click on its tab to copy the full path.
Next use the Windows search bar to search for Notepad, right click on the result, and click "Run as Administrator."
Once it's open, go to open a file, and paste the path into the dialog box, and hit enter.
Go to VS Code, and copy the contents of d2d1_patched_minigw.h and paste them into your Notepad instance. Finally, save.
Now go to PlatformIO and Clean your project. This must be done.
Now you can build a working application. Keep in mind that a Windows application works dramatically differently in terms of flow than an IoT application. You must render each frame on the WM_PAINT message, so the structure of the demo is far different than the others, employing a state machine to make a coroutine out of the lines demo.
In order to run the application, open a console again. Then you need to type the following command:
.pio\build\windows-DirectX\program.exe
The driver is not very fast, due to the "polarity mismatch" between DirectX and GFX. DirectX works far differently than GFX and bridging the gap is not efficient. This driver is meant for prototyping screens, and for that it is effective, since it shortens build times and eliminates upload times.
Black and white e-paper display drivers can virtualize an expanded bit depth to emulate grayscales through dithering. The final template parameter of the driver indicates the bit depth and defaults to 1, which disables dithering.
Color e-paper displays either dither or match to the nearest color in their palette. If you need better dithering performance, you can predither your JPEGs in a paint program, targeting the e-paper display's palette, and then disable virtualization. The final template parameter indicates the pixel type of the color pixel you wish to virtualize.
Keep in mind the higher the virtualized bit depth, the more memory the driver will use.
Many of the Arduino drivers support the new driver framework and as such can operate in I2C, 8-bit parallel or SPI. The SPI is also DMA capable and performs better than the older SPI based framework, and can read from devices that support it.
Using it requires some additional steps since the driver is independent of the bus.
First, include the I/O header:
#include "drivers/common/tft_io.hpp"
You must initialize your bus. Keep in mind there are restrictions on the data and DR pins for parallel. To be safe, they should be in the GPIO range of 0-31 in order to work. I've included code to make them work otherwise but I have not tested it.
To initialize the SPI bus:
using bus_type = tft_spi<VSPI, PIN_NUM_CS, PIN_NUM_MOSI, PIN_NUM_MISO, PIN_NUM_CLK,
                         SPI_MODE0,
                         40*1000*1000,
                         20*1000*1000,
                         40*1000*1000,
                         true,
                         320*240*2+8>;
The above initializes the bus on VSPI to do 40MHz writes and 20MHz reads, and uses DMA.
Initializing the parallel bus is similar, except all you need are the pin numbers.
Once you have the bus type instantiated, you pass it to your driver template:
using lcd_type = ili9341<PIN_NUM_DC,PIN_NUM_RST,PIN_NUM_BCKL,bus_type,3>;
For information on this see here.
Below is a high level summary. For more detailed information, I've been producing a series which begins at the link.
GFX introduces the idea of draw sources and destinations. These are vaguely or loosely defined types that expose a set of members that allow GFX to bind to them to perform drawing operations. The set of members they expose varies based on their capabilities, and GFX adapts to call them in the most efficient manner that the target supports for the given operation. The drawing functions in GFX take destinations, and some operations such as copying bitmapped pixel data or otherwise reading pixel information take sources.
The code size is related to how many different types of sources and destinations you have, and what kind of drawing or copying you're doing between them. The more diverse your sources and destinations, the larger the code.
Draw sources and destinations include in memory bitmaps and display drivers, but you can potentially craft your own. We'll be getting into how later.
Bitmaps are sort of all-purpose draw sources/destinations. They basically allow you to hold drawn data in memory, which you can then draw somewhere else. Bitmaps do not hold their own memory. This is because not all memory is created equal. On some platforms, for example, in order for a bitmap to be sent to the driver, the bitmap data must be stored in DMA capable RAM. The other reason is so you can recycle buffers. When you're done with a bitmap, you can reuse the memory for another bitmap (as long as it's big enough) without deallocating or reallocating. The disadvantage is a small amount of increased code complexity, wherein you must determine the size you need for the bitmap, allocate the memory for it yourself, and possibly free it yourself later.
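A typical sequence looks something like the following sketch. It assumes GFX's bitmap<> template and its sizeof_buffer() helper as used in the demos; the pixel format and dimensions here are just illustrative.

```cpp
// Sketch only: declare a 16-bit RGB bitmap type, compute the buffer
// size it needs, allocate the memory yourself, then construct the
// bitmap over that buffer. You free (or recycle) the buffer yourself.
using bmp_type = gfx::bitmap<gfx::rgb_pixel<16>>;
constexpr static const gfx::size16 bmp_size(320, 16);
uint8_t* bmp_buf = (uint8_t*)malloc(bmp_type::sizeof_buffer(bmp_size));
bmp_type bmp(bmp_size, bmp_buf);
// ... draw to bmp, draw bmp somewhere else, then reuse or free bmp_buf ...
```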
Drivers are typically a more limited draw target, but may have performance features, which GFX will use when available. You can check what features draw sources and destinations have available using the caps member, so that you can call the right methods for the job and for your device. You don't need to worry about that if you use the draw class, which does all the work for you. Drivers may have certain limitations in terms of the size of bitmap they can take in one operation, or performance characteristics that vary from device to device. As much as possible, I've tried to make using them consistent across disparate sources/destinations, but some devices, like e-paper displays, are so different that care must be taken to employ them in the way that is most effective.
You can implement your own draw sources and destinations by simply writing a class with the appropriate members. We'll cover that toward the end.
Pixels in GFX may take any form you can imagine, up to your machine's maximum word size. They can have as many as one channel per bit, and will take on any binary footprint you specify. If you want a 3 bit RGB pixel, you can easily make one, and then create bitmaps in that format - where there are 3 pixels every 9 bits. You'll never have to worry whether or not this library can support your display driver or file format's pixel and color model. With GFX, you can define the binary footprint and channels of your pixels, and then extend GFX to support additional color models other than the 4 it supports simply by telling it how to convert to and from RGB*.
*Indexed pixel formats are accounted for, but there are certain restrictions in place and care must be taken when using them because they need an associated palette in order to resolve to a color.
When you declare a pixel format, it becomes part of the type signature for anything that uses it. For example, a bitmap that uses a 24-bit pixel format is a different type than one that uses a 16-bit pixel format.
Due to this, the more pixel formats you have, the greater your code size will be.
If you create pixels with an alpha channel, it will be respected on supported destinations. Not all devices support the necessary features to enable this. It's also a performance killer, with no way to make it faster without hardware support, which isn't currently available for any supported device. Typically, it's best to alpha blend on a bitmap, and then draw the bitmap to a driver due to performance issues and the fact that many drivers currently do not act as draw sources, and so cannot alpha blend.
Some draw destinations use an indexed color scheme, wherein they might have 16 active colors for example picked from a palette of 262,144 possible colors. Or they may have a fixed set of 8 active colors and that's all. The active colors are all that can be displayed at any given point. This was common on older systems with limited frame buffer sizes. It may also be the case with some IoT display hardware, especially color e-paper displays. e-paper displays range from 2 color (monochrome) to 7 color that I've seen.
Draw targets that use an indexed pixel for their pixel_type - that is, devices with pixels that have a channel with a channel_name of index - are expected to expose a palette_type type alias that indicates the type of the palette they are using, as well as a palette_type* palette() const method that returns a pointer to the current palette.
When you draw to a target that has indexed pixels, a best effort attempt is made to match the requested color with one of the active colors in the palette. It will match the closest color it finds. This isn't free. It's pretty CPU intensive so buyer beware, especially when loading JPEGs or something into an indexed target. It will have to run a nearest match on every single pixel, scanning through the palette once for every pixel!
You have to be careful with indexed colors. They can't be used in isolation, because without a palette you don't have enough information to get a color from one. Draw targets can have palettes, so the combination of an indexed pixel and a matching draw target yields a valid color. Because of this, you can't use the color<> template with indexed colors, for example, because without a palette there's no way to know what index value best corresponds to, say, old_lace, or even white. You'll hopefully get compile errors when you try to use indexed pixels in places where they can't be used, but worst case, you get errors at run time when trying to draw. This is usually not something you have to worry about much when drawing.
If you must translate indexed pixels to other kinds of pixels yourself, you can use convert_palette_from<>(), convert_palette_to<>() and convert_palette<>(). These convert to and from indexed colors, optionally alpha blending in the process. They take draw targets in order to get access to the palette information.
Drawing elements are simply things that can be drawn, like lines and circles.
Drawing primitives include lines, points, rectangles, arcs, ellipses, and polygons - each, except the first two, in filled and unfilled varieties. Most take bounding rectangles to define their extents, and for some, such as arcs and lines, the orientation of the rectangle will alter where or how the element is drawn.
Draw sources again, are things like bitmaps, or display drivers that support read operations. These can be drawn to draw destinations, again like other bitmaps or display drivers that support write operations. The more different types of source and destination combinations that are used in draw operations, the larger the code size. When drawing these, the orientation of the destination rectangle indicates whether to flip the drawing along the horizontal or vertical axes. The draws can also be resized, cropped, or pixel formats converted.
GFX supports two types of fonts: a fast raster font, and TrueType or OpenType fonts, depending on your needs. If you need quick and dirty, with the emphasis on quick, use the font class. For pretty, scalable, and potentially anti-aliased fonts at the expense of performance, use the open_font class.
The behavior and design of each are slightly different due to different capabilities and different performance considerations. For example, raster fonts are always allocated in either RAM or PROGMEM space, because they are small and this lets them operate at maximum speed. TrueType fonts, on the other hand, are larger and much more complicated, so GFX will stream them directly from a file as needed, trading speed for minimal RAM use. Unlike raster fonts, TrueType fonts are essentially not loaded into memory, and will only cause temporary memory allocations as needed to render text.
Both font and open_font can be loaded from a readable, seekable stream such as a file_stream. This, if anything, will make font slightly quicker, and open_font much slower, than when you embed them. The raster fonts are old Windows 3.1 FON files, while the TrueType font files are platform agnostic TTF and OTF files.
Alternatively, you can use the fontgen tool to create a C++ header file from a font file. This header can then be included in order to embed the font data directly into your binary. The font becomes a static resource rather than being loaded onto the heap. This is the recommended way of loading fonts when you can, especially with open_font.
When fonts are drawn, very basic terminal control characters like tab, carriage return, and newline are supported. Raster fonts can be drawn with or without a background, though it's almost always much faster to draw them with one, at least when drawing to a display.
TrueType fonts must usually be downscaled from their native size before being displayed. You can use the scale() method, passing the desired font height in pixels.
Note that sizes and positions with TrueType are somewhat approximate, in that they don't always reflect what you think they might. Part of that is the nature of digital typesetting, and part of that is because non-commercial font files often have bad font metrics in them. It usually takes some trial and error to get them pixel perfect.
Also note that unlike raster fonts, TrueType font glyphs aren't limited to a bounding box. They can overhang part of a letter outside of the specified draw area, which can lead to the left and top edges of letters in your destination area being clipped. Fortunately, you can draw text with an offset parameter to offset the text within the drawing area to avoid this, and/or to adjust the precise position of the text.
In addition to loading and providing basic information about fonts, the font and open_font classes also allow you to measure how much space a particular region of text will require in that font.
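Putting the pieces together, usage looks something like this sketch. It assumes GFX's open_font and file_stream classes with the open() and scale() members described above; the font path and sizes are illustrative.

```cpp
// Sketch only: load a TrueType font from a stream and compute a draw scale.
gfx::file_stream fs("/spiffs/Maziro.ttf");      // path is illustrative
gfx::open_font fnt;
gfx::gfx_result r = gfx::open_font::open(&fs, &fnt);
if (r != gfx::gfx_result::success) {
    // handle the error - bad path, unreadable stream, etc.
}
float text_scale = fnt.scale(30);               // scale for a 30 pixel line height
// pass fnt and text_scale to the draw class's text drawing/measuring methods
```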
Images include things like JPEGs - currently the only format this library supports, although PNG will be included soon-ish.
Images are not drawing elements because it's not practical to either load an image into memory all at once, nor get random access to the pixel data therein, due to compression or things like progressively stored data.
In order to work with images, you can use the draw class, which will draw all or part of the image to a destination, or you can handle a callback that reports a portion of an image at a time, along with a location that indicates where the portion is within the image. For example, for JPEGs, a small bitmap (usually 8x8 or so) is returned for each 8x8 region, from left to right, top to bottom. Each time you receive a portion, you can draw it to the display or some other target, or you can postprocess it, or whatever.
Currently unlike with fonts, there is no tool to create headers that embed images directly into your binary. This will probably be added in a future version.
For the most part, GFX makes a best effort attempt to reduce the number of times it has to call the driver (with the exception of batch writes), even if that means working the CPU a little harder. For example, instead of drawing a diagonal line as a series of points, GFX draws a line as a series of horizontal or vertical line segments. Instead of rendering a font dot by dot, GFX will batch if possible, or otherwise use run lengths to draw fonts as a series of horizontal lines instead of individual points. Trading CPU for reduced bus traffic is a win, because usually you have a lot more of the former than the latter to spare - not that you have much of either one. GFX is relatively efficient, eschewing things like virtual function calls, but most of the gain is through being less chatty at the bus level.
That said, there are ways to significantly increase performance by using features of GFX which are designed for exactly that.
One way to increase performance by reducing bus traffic is by batching. Normally, in order to do a drawing operation, a device must typically set the target rectangle for the draw beforehand, including when setting a single pixel. On an ILI9341 display, this resolves to 6 spi transactions in order to write a pixel, due to DC line control being required which splits each command into two transactions.
One way to cut down on that overhead dramatically is to set this address window, and then fill it by writing out pixels left to right, top to bottom without specifying any coordinates. This is known as batching and it can cut bus traffic and increase performance by orders of magnitude. GFX will use it for you when it can, and if it's an available feature on the draw destination.
The primary disadvantage to this is simply the limited opportunities to use it. It's great for things like drawing filled rects, or drawing bitmaps when blitting and DMA transfers are unavailable, but you have to be willing to fill an entire rectangle with pixels in order to use it. If you're drawing a font with a transparent background, or a 45 degree diagonal line for example, GFX effectively won't be able to batch.
Batching can occur whenever there is an entire rectangle of pixels which must be drawn and a direct transfer of bitmap data isn't possible, either because the source or target doesn't support it, or because some sort of operation like bit depth conversion or resizing needs to happen. Batching makes for a great way to speed up these types of operations. You can't use it directly unless you talk right to the driver, because at the driver level, using it incorrectly can create problems in terms of what is being displayed. GFX will use it wherever possible when you use draw.
Some devices will support double buffering. This can reduce bus traffic when the buffer is in local RAM such as it is with the SSD1306 driver, but even if it's stored on the display device using suspend and resume can make your drawing not flicker so much. What happens is once suspended, any drawing operations are stored instead of sent to the device. Once resumed, the drawn contents get updated all at once. In some situations, this can generate more bus traffic than otherwise, because resuming typically has to send a rectangle bounding the entire modified section to the device. Therefore, this is more meant for smooth drawing than raw throughput. GFX will use it automatically when drawing, but you can use it yourself to extend the scope of the buffered draw, since GFX wouldn't know for example, that you intended to draw 10 lines before updating the display. It will however, buffer on a line by line basis even if you don't, again if the target supports it.
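In sketch form, extending the scope of a buffered draw looks something like this. It assumes GFX's draw::suspend/draw::resume and a draw target that supports buffering; lcd and the loop body are placeholders.

```cpp
// Sketch only: buffer ten line draws and push them to the display at once.
gfx::draw::suspend(lcd);                  // start buffering draw operations
for (int i = 0; i < 10; ++i) {
    // ... draw line i here using gfx::draw ...
}
gfx::draw::resume(lcd);                   // send the whole update in one shot
```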
Asynchronous operations, when used appropriately, are a powerful way to increase the throughput of your graphics intensive application. The main downside is that asynchronous operations typically incur processing overhead compared to their synchronous counterparts, and you have to use them carefully - transferring lots of data at a time in order to get a benefit out of them, which means lots of RAM use.
However, when you need to transfer a lot of a data at a time, using asynchronous operations can be a huge win, allowing you to fire off a big transfer in the background, and then almost immediately begin drawing your next frame, well before the transfer completes.
Typically in order to facilitate this, you'll create two largeish bitmaps (say, 320x16) and then what you do is you draw to one while sending the other asynchronously, and then once the send is done, you flip them so now you're drawing on the latter one while sending the first one that you just drew.
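In sketch form, the flip looks something like the following. It assumes an asynchronous bitmap draw and a wait call along the lines of GFX's draw class; render_band(), band_count, and the geometry are placeholders for your own code.

```cpp
// Sketch only: ping-pong between two 320x16 bitmaps, bmp_a and bmp_b.
bmp_type* drawing = &bmp_a;
bmp_type* sending = &bmp_b;
for (int band = 0; band < band_count; ++band) {
    render_band(*drawing, band);           // CPU work overlaps the previous DMA send
    gfx::draw::wait_all_async(lcd);        // make sure the other bitmap is free again
    std::swap(drawing, sending);           // what we just drew is now "sending"
    gfx::draw::bitmap_async(lcd,
        gfx::rect16(0, band * 16, 319, band * 16 + 15),
        *sending, sending->bounds());
}
gfx::draw::wait_all_async(lcd);            // let the final band finish
```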
Drawing bitmaps asynchronously is really the only time you're going to see throughput improvements. The reason there are other asynchronous API calls as well is that, in order to switch from asynchronous to synchronous operations, all the pending asynchronous operations in the target's queue usually have to complete. So after you queue your bitmap to draw, you can continue to queue asynchronous line draws and such, in order to avoid waiting for the pending operations to complete. However, when you're using the flipping bitmaps method above to do your asynchronous processing, these other asynchronous methods won't be necessary, since drawing to bitmaps is always synchronous, and has nothing to do with bus traffic or queuing asynchronous bus transactions. Drawing synchronously to bitmaps does not affect the asynchronous queue of the draw target. Each asynchronous queue is specific to the draw source or destination in question.
The ESP-IDF is capable of doing asynchronous DMA transfers over SPI, but right now the overall SPI throughput is less than the Arduino framework. I'm investigating why this is. I have related issues that are preventing me from supporting certain devices under the ESP-IDF, like the RA8875. The ESP-IDF drivers currently do not support 8-bit parallel. This will be added in the future.
The Arduino framework's SPI interop is tightly timed and fast, but it doesn't support asynchronous DMA transfers natively (though some ESP32 drivers will), and it doesn't seem to have a facility for returning errors during SPI read and write operations. Therefore, I think it's less likely for wiring problems to be reported with the Arduino versions of the drivers, but I'm not exactly sure, since I haven't tried to create such a scenario to test with. The Arduino framework is more likely to support a device than the ESP-IDF, due to differences in the SPI communication API characteristics and behavior. The Arduino drivers include 8-bit parallel support.
Include gfx.hpp (C++17) or gfx_cpp14.hpp (C++14) to access GFX. Pick the one you need depending on the C++ standard you are targeting - if you include both, whichever comes first wins, so don't include both.
For the ESP-IDF toolchain under PlatformIO, I've only been able to get it to target up to C++14. The GCC compiler they use under Windows isn't new enough to support the newer standard. The C++17 version handles the predefined colors slightly more efficiently, due to more constexpr resolution. Still, there's no difference in actually using them.
Use namespace gfx to access GFX.
It is not necessary to explicitly include any of these, though if you're writing driver code you can include a subset to reduce compile times a little.
- gfx_result
- The pixel<> type template
- The bitmap<> type template
- draw, the main facilities of GFX
- The color<> template, for C++17 or better compilers
- The color<> template, for C++14 compilers
- jpeg_image, used for loading JPEG images
GFX was designed using generic programming, which isn't common for code that targets little MCUs, but here it provides a number of benefits without many downsides, due to the way it's orchestrated.
For starters, nothing is binary coupled. You don't inherit from anything. If you tell GFX you support a method, all you need to do is implement that method. If you do not, a compile error will occur when GFX tries to use it.
The advantage of this is that methods can be inlined, templatized, and otherwise massaged in ways that simply cannot be done with a binary interface, and no indirection is required to call them. The disadvantage is that if a method never gets used, the compiler never checks its code beyond parsing it; but that only happens if you implement methods you then don't tell GFX you implemented, like asynchronous operations.
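The "tell GFX what you support" idea can be sketched with compile-time capability flags. This is an illustrative pattern only; the names here (caps, async, fill_async) are invented and are not GFX's real identifiers. A generic caller branches on the advertised capability with if constexpr, so the unadvertised method is never instantiated:

```cpp
#include <cassert>

// Hypothetical driver advertising its capabilities as compile-time
// constants. A generic library only instantiates calls for features
// the driver says it has.
struct my_driver {
    struct caps {
        static constexpr bool async = false;  // we do NOT advertise async
    };
    int fill_calls = 0;
    void fill() { ++fill_calls; }
    // fill_async() exists but is never advertised, so generic code
    // compiled against caps::async == false never touches it - and the
    // compiler never checks its body beyond parsing.
    void fill_async() {}
};

// generic code: picks the path based on the driver's advertised caps
template<typename Driver> int do_fill(Driver& d) {
    if constexpr(Driver::caps::async) {
        d.fill_async();
        return 1;   // took the async path
    } else {
        d.fill();
        return 0;   // took the sync path
    }
}
```

Because the dispatch happens at compile time, the call can be inlined outright, with none of the vtable indirection a binary interface would impose.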
In a typical use of GFX, you will begin by declaring your types. Since basically everything is a template, you need to instantiate concrete types out of them before you can start to use them. The using keyword is great for this, and I recommend it over typedef since it's templatizable and, at least in my opinion, more readable.
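To illustrate why using beats typedef here, consider a sketch with a stand-in template (pixel_store is hypothetical, not a GFX type). Only using can be templatized into an alias template; typedef can express the final concrete type but not the parameterized alias:

```cpp
#include <cassert>
#include <cstdint>
#include <type_traits>

// Hypothetical fixed-size pixel holder standing in for a GFX template.
template<typename T, int N> struct pixel_store { T channels[N]; };

// alias template: impossible to express with typedef
template<typename T> using rgb_store = pixel_store<T, 3>;

// concrete aliases; `using` reads left-to-right, like an assignment
using rgb888 = rgb_store<uint8_t>;
typedef pixel_store<uint8_t, 3> rgb888_td;  // typedef equivalent, less readable
```

Both names refer to the same type, but the alias template lets you stamp out rgb_store<float>, rgb_store<uint16_t>, and so on without repeating yourself.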
You'll often need one for the driver, and one for each type of bitmap you wish to declare (bitmaps with different color models or bit depths, like RGB versus monochrome, need different types). Once you've done that, you'll want one for the color template for each pixel type. At the very least, you'll want one that matches the pixel_type of your display device, such as using lcd_color = gfx::color<typename lcd_type::pixel_type>; which will let you refer to things like lcd_color::antique_white.
Once you've done that, almost everything else is handled using the gfx::draw class. Despite each function on the class declaring one or more template parameters, the corresponding arguments are inferred from the arguments passed to the function itself, so you should never need to use <> explicitly with draw. Using draw, you can draw text, bitmaps, lines and simple shapes.
Beyond that, you can also declare fonts and bitmaps. These hold resources while they're alive, and can be passed as arguments to draw::text<>() and draw::bitmap<>() respectively.
Images do not use resources directly except for some bookkeeping during loading. They are not loaded into memory and held around, but rather the caller is called back with small bitmaps that contain portions of the image which can then be drawn to any draw destination, like a display or another bitmap. This progressive loading is necessary since realistically, most machines GFX is designed for do not have the RAM to load a real world image all at once.
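The progressive loading described above can be sketched generically. This is a conceptual model of tiled delivery, not the jpeg_image API itself; load_progressive and tile are invented names. The decoder hands the caller one small region at a time, so only one tile's worth of pixels is ever resident rather than the whole image:

```cpp
#include <cassert>
#include <functional>

// A small region of a larger image, delivered via callback.
struct tile { int x, y, w, h; };

// Conceptual sketch of progressive loading: invoke `on_tile` once for
// every tile covering a w*h image, clipping tiles at the right and
// bottom edges. Returns the number of tiles delivered.
inline int load_progressive(int w, int h, int tile_w, int tile_h,
                            const std::function<void(const tile&)>& on_tile) {
    int count = 0;
    for(int y = 0; y < h; y += tile_h) {
        for(int x = 0; x < w; x += tile_w) {
            tile t { x, y,
                     (x + tile_w <= w) ? tile_w : w - x,   // clip right edge
                     (y + tile_h <= h) ? tile_h : h - y }; // clip bottom edge
            on_tile(t);   // caller draws this piece, then it's discarded
            ++count;
        }
    }
    return count;
}
```

With, say, 16x16 tiles, a 320x240 image needs only 256 pixels of working RAM at a time instead of 76,800, which is the whole point on RAM-starved MCUs.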
Let's dive into some code. The following draws a classic effect around the four edges of the screen in four different colors, with "ESP32 GFX Demo" in the center of the screen:
draw::filled_rectangle(lcd,(srect16)lcd.bounds(),lcd_color::white);
const font& f = Bm437_ATI_9x16_FON;
const char* text = "ESP32 GFX Demo";
srect16 text_rect = f.measure_text((ssize16)lcd.dimensions(),
text).bounds();
draw::text(lcd,text_rect.center((srect16)lcd.bounds()),text,f,lcd_color::dark_blue);
for(int i = 1;i<100;++i) {
// calculate our extents
srect16 r(i*(lcd_type::width/100.0),
i*(lcd_type::height/100.0),
lcd_type::width-i*(lcd_type::width/100.0)-1,
lcd_type::height-i*(lcd_type::height/100.0)-1);
// draw the four lines
draw::line(lcd,srect16(0,r.y1,r.x1,lcd_type::height-1),lcd_color::light_blue);
draw::line(lcd,srect16(r.x2,0,lcd_type::width-1,r.y2),lcd_color::hot_pink);
draw::line(lcd,srect16(0,r.y2,r.x1,0),lcd_color::pale_green);
draw::line(lcd,srect16(lcd_type::width-1,r.y1,r.x2,lcd_type::height-1),lcd_color::yellow);
// yield so the ESP32 watchdog
// doesn't trip:
vTaskDelay(1);
}
First, the screen is filled with white by drawing a white rectangle over the entire screen. Note that draw sources and targets report their bounds as unsigned rectangles, but draw typically takes signed rectangles. That's nothing an explicit cast can't solve, and we do so above as needed.
After that, we declare a reference to a font we included as a header file. The header was generated from an old Windows 3.1 .FON file using the fontgen tool that ships with GFX. GFX can also load fonts into memory from a stream, like a file, rather than embedding them in the binary as a header. Each approach has advantages and disadvantages: the header is less flexible, but it allows you to store the font in program memory rather than keeping it on the heap.
Now we declare a string literal to display, which isn't exciting, followed by something a little more interesting: we measure the text we're about to display so that we can center it. Keep in mind that measuring text requires an initial ssize16 that indicates the total area the font has to work with, which allows for things like wrapping text that gets too long. Essentially, measure_text() takes this size and returns one shrunk down to the minimum required to hold the text in the given font. We then get the bounds() of the returned size to give us a bounding rectangle. Note that we call center() on this rectangle when we go to draw::text<>().
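The centering arithmetic itself is simple. GFX's rect type does this for you via center(); the sketch below (center_in and this minimal rect are illustrative, not GFX's source) just shows what the call works out to: offset the rectangle so its midpoint coincides with the midpoint of the bounding rectangle:

```cpp
#include <cassert>

// Minimal inclusive-coordinate rectangle, like GFX's rect types use.
struct rect {
    int x1, y1, x2, y2;
    int width()  const { return x2 - x1 + 1; }
    int height() const { return y2 - y1 + 1; }
};

// Sketch of the centering arithmetic behind center(): move r so its
// midpoint matches the midpoint of `bounds`, preserving its size.
inline rect center_in(const rect& r, const rect& bounds) {
    int x = bounds.x1 + (bounds.width()  - r.width())  / 2;
    int y = bounds.y1 + (bounds.height() - r.height()) / 2;
    return rect { x, y, x + r.width() - 1, y + r.height() - 1 };
}
```

For example, a 126x16 text rectangle (14 characters of a 9x16 font) centered on a 320x240 screen lands at (97,112).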
After that, we draw 396 lines in total, creating a moire effect around the edges of the screen. Each set of lines is anchored to its own corner and drawn in its own color.
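The per-iteration extents from the loop above can be pulled into a standalone sketch (demo_extents is an invented helper for illustration). Each i shrinks the rectangle toward the center by i percent of each dimension, and since the loop runs for i = 1..99 drawing four lines per pass, that's 99 * 4 = 396 lines:

```cpp
#include <cassert>

// Corner coordinates for one iteration of the moire demo loop.
struct extents { int x1, y1, x2, y2; };

// Sketch of the extents math from the demo: inset each edge by
// i percent of the corresponding screen dimension.
inline extents demo_extents(int i, int width, int height) {
    return extents {
        (int)(i * (width  / 100.0)),
        (int)(i * (height / 100.0)),
        (int)(width  - i * (width  / 100.0) - 1),
        (int)(height - i * (height / 100.0) - 1)
    };
}
```

For a 320x240 screen at i = 25, the rectangle is inset 80 pixels horizontally and 60 vertically, giving (80,60)-(239,179).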
Compare the performance of line drawing with GFX to other libraries. You'll be pleasantly surprised. The further from 45 degrees (or otherwise perfectly diagonal) a line is, the faster it draws - at least on most devices - with horizontal and vertical lines being the fastest.
Let's try it again - or at least something similar - this time using
double buffering on a supporting target, like an SSD1306 display. Note
that suspend<>()
and resume<>()
can be called regardless of the draw
destination, but they will report gfx::gfx_result::not_supported
on
targets that are not double buffered. You don't have to care that much
about that, because the draws will still work, unbuffered. Anyway,
here's the code:
draw::filled_rectangle(lcd,(srect16)lcd.bounds(),lcd_color::black);
const font& f = Bm437_Acer_VGA_8x8_FON;
const char* text = "ESP32 GFX";
srect16 text_rect = srect16(spoint16(0,0),
f.measure_text((ssize16)lcd.dimensions(),
text));
draw::text(lcd,text_rect.center((srect16)lcd.bounds()),text,f,lcd_color::white);
for(int i = 1;i<100;i+=10) {
draw::suspend(lcd);
// calculate our extents
srect16 r(i*(lcd_type::width/100.0),
i*(lcd_type::height/100.0),
lcd_type::width-i*(lcd_type::width/100.0)-1,
lcd_type::height-i*(lcd_type::height/100.0)-1);
draw::line(lcd,srect16(0,r.y1,r.x1,lcd_type::height-1),lcd_color::white);
draw::line(lcd,srect16(r.x2,0,lcd_type::width-1,r.y2),lcd_color::white);
draw::line(lcd,srect16(0,r.y2,r.x1,0),lcd_color::white);
draw::line(lcd,srect16(lcd_type::width-1,r.y1,r.x2,lcd_type::height-1),lcd_color::white);
draw::resume(lcd);
vTaskDelay(1);
}
Other than some minor differences, mostly because we're working with a much smaller, monochrome display, it's the same code as before with one major difference: the presence of suspend<>() and resume<>() calls. Once suspend is called, further draws aren't displayed until resume is called. The calls must balance: to resume a display, you must call resume the same number of times you called suspend. This allows you to have subroutines which suspend and resume their own draws without messing up your code. GFX, in fact, uses suspend and resume on supporting devices as it draws individual elements. The main reason you have them is so you can extend the scope across several drawing operations.
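The balanced-call semantics amount to a simple counter. The sketch below is illustrative, not GFX's source (buffered_display and flushes are invented names): only when the suspend count returns to zero does anything reach the screen, which is what lets nested subroutines suspend and resume freely:

```cpp
#include <cassert>

// Hypothetical double-buffered target with balanced suspend/resume.
class buffered_display {
    int m_suspend_count = 0;
public:
    int flushes = 0;                 // times the screen actually updated
    void suspend() { ++m_suspend_count; }
    void resume() {
        // only the outermost resume pushes the buffer to the screen
        if(m_suspend_count > 0 && --m_suspend_count == 0)
            ++flushes;
    }
    bool suspended() const { return m_suspend_count != 0; }
};
```

A subroutine that wraps its own draws in suspend/resume nests harmlessly inside a caller that has already suspended; the display updates exactly once, at the outermost resume.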
The refresh rate of this class of displays is extremely slow, yet GFX does not distinguish between e-paper displays and traditional TFT/LCD/OLED displays in terms of how it uses them. Therefore, to achieve reasonable performance, it's important to suspend and resume entire frames at a time. Animation is out of the question on these displays. Some of them support partial updating, which in theory would improve their refresh rates; however, the displays are not well documented, and I haven't been successful in getting that to work yet.
Since I've added polygon support, an example of it will probably be helpful. Here it is in practice: