Closed: fredizzimo closed this issue 4 years ago
I realize that the above is a little too long and abstract, but I just want to let you know that I'm still working on this and thinking about it. I have also decided to change things a little: rather than having one big effect system, it will consist of three parts.
I have changed my mind and will write more about the synchronization part. It's a complex problem, and I'm moving back and forth between different ideas in my head, which is why I have trouble focusing on the simpler problem of building the animation system. So it would be good if someone else had some comments or other ideas.
When I'm talking about the synchronization, I'm actually talking about two different things.
The problems are very similar so I think they could be handled in the same way.
You might wonder why we need number 1, so let me explain that first. We need to be able to run the normal keyboard scan loop at a very fast rate, preferably more than 1000 times per second. The problem is that when you are doing rich visualization that's no longer possible; for example, drawing to an LCD screen takes at least an order of magnitude longer.
Simple visualization, like updating LEDs, is probably fine though, and could be handled by drawing directly from the keyframe animation system. The goal is still for the base keyframe animation system to run on the smaller AVR-based keyboards, so its memory and CPU requirements should be similar to those of the existing RGB LED system. But for the rich visualization I think we can assume a fairly powerful processor, at least something like a Teensy 3.0, though the additional memory of a Teensy 3.1 could definitely help.
Because the actual rendering takes so long, we need to do it from another thread that has lower priority than the main scan loop and runs whenever the main loop is waiting for something. But at the same time we need to be able to control what is drawn from the main loop, since it makes things much easier to write if you can enable or disable something directly in the keymap when something happens.
Something like uGUI could help quite a bit with this, since the main loop could control which widgets and windows should be visible, as well as their contents. The renderer would just render that state. The problem is the synchronization, and I don't see many alternatives to adding some sort of double buffering to the system, which would mean modifying the uGUI code quite heavily.
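To make the double-buffering idea concrete, here is a minimal sketch of how the scan loop and the render thread could share UI state. All the names and fields here are hypothetical, and a real port would protect the swap with the RTOS (a ChibiOS mutex, or briefly disabling interrupts) rather than relying on a bare flag:

```c
#include <stdint.h>
#include <stdbool.h>

/* The scan loop writes into a "front" copy of the UI state; the
 * low-priority renderer snapshots it into a "back" copy before drawing,
 * so it never sees half-written state. Illustrative sketch only. */
typedef struct {
    bool    window_visible;
    uint8_t fade_value;   /* current fade parameter, 0-255 */
} ui_state_t;

static ui_state_t front;           /* written by the main scan loop   */
static ui_state_t back;            /* read-only copy for the renderer */
static volatile bool dirty;

/* Called from the scan loop / keymap code. */
void ui_set_fade(uint8_t fade) {
    front.fade_value = fade;
    dirty = true;
}

/* Called by the render thread; returns true if the snapshot changed
 * and a redraw is needed. A real implementation would make the copy
 * atomic with respect to the scan loop. */
bool renderer_take_snapshot(void) {
    if (!dirty) return false;
    back  = front;
    dirty = false;
    return true;
}
```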
This would work OK if the renderer is completely stateless by itself and just reads values from the windows and widgets. Animation can also be supported this way, but you would need special widgets describing what kind of animation should be rendered, along with all its current parameters, like the current fade value, the current color and so on.
Certain animations could be hard to implement that way, though. Take a cross-fade, which works like this: render the end of the previous frame with an alpha of 1-t, followed by the next frame with an alpha of t, where t is a value that goes from 0 to 1 over time (note that by scaling the values this can be done using integer math). But this means that we need three sets of parameters: one for the previous frame, one for the current frame and one for the next frame. So things could quickly get complex, at least compared to the normal case, where it would perhaps be enough to have some kind of union that combines the parameters for the different animation types, plus an enum telling which kind of frame to render.
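The integer-math trick mentioned in the parenthesis can be sketched like this: instead of a floating-point t in [0, 1], scale it to an 8-bit value in [0, 255] so the whole blend stays in integer arithmetic (the function name is illustrative):

```c
#include <stdint.h>

/* Cross-fade one pixel: prev * (1 - t) + next * t, with t scaled to
 * 0-255 so no floating point is needed. The +127 rounds instead of
 * truncating. */
static inline uint8_t crossfade_pixel(uint8_t prev, uint8_t next, uint8_t t) {
    return (uint8_t)((prev * (255 - t) + next * t + 127) / 255);
}
```

The intermediate product fits comfortably in a 32-bit (or even 16-bit-promoted) int, so this is cheap even on an AVR.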
One workaround for the cross-fade could be to use three different widgets, but then there needs to be a way to reference the other widgets, and that doesn't feel like a good idea.
The alternative would be to let the renderer have its own state that it can write to. But then there's another problem: if it wants to animate the same properties that the main thread controls, it's hard to determine which version it should use, so I haven't found a clean way of doing that.
There's also a completely different way of doing the synchronization, and that's by using events. All changes are sent as events with parameters, which the renderer then processes in the order they were sent. For a single application this might work, but when we send things over some sort of physical link things get more complex, especially if the link can be disconnected and reconnected at any time. Then we still need a way to send the whole state, so I haven't looked that much into that option.
So that was the shortest overview of the synchronization that I could manage. I would really appreciate any other ideas for the problems presented, but I understand if you don't have any, especially since my description of the problems probably isn't clear enough.
I just realized that a cross-fade can be implemented simply by duplicating the original window/widget and then creating a new one (or just showing a hidden one) on top of it containing the following frame. Then it's just a matter of having the right alpha blending mode and adjusting their respective alphas. So it seems even this could be controlled just by adjusting the windows and their parameters.
uGUI doesn't currently support alpha blending, though, but it would be fairly simple to add. We might also need more control over the window z-order.
What's the status on this? It would be awesome to have this for the Ergodox Infinity LCD displays.
I'm sorry, I haven't been able to make much progress with this for a while. There are several different causes.
Now I hope I can get back to QMK, but I won't make any promises about when this will be done. The good thing is that a few people have got the old visualizer system at least partly working, and I would be very happy if someone could spend some time making that a bit more official until we have the new system.
Edit: As described in #1122, I will enable the old visualizer support first.
We'd still love to see progress on this, but I know @fredizzimo has been pretty busy with stuff - I'm going to close this for now, but if anyone is interested in discussing/working on this more, we can reopen it.
What is the Effect System?
The "Effect System" is a new library that I'm proposing. It's an improved version of the "Visualizer", which is a library for visualising stuff on the keyboard. It can control the backlight, the standard keyboard LEDs, per-key LEDs, LCD screens, and basically anything that can be attached to and controlled from a keyboard.
I have previously called it the Visualizer, because everything it controlled on the Infinity Ergodox was visual. However, I don't see why the same system could not also control the audio system, for example.
It could of course also be used to control things that are not currently available on keyboards. Maybe some future keyboards will have some kind of vibration motor, which you could feel with your fingertips. It might not be as crazy as it sounds; one use case I can think of would be indicating spelling errors.
Therefore, instead of calling it the "Visualizer", I will call it the "Effect System". I don't particularly like the name, but at least it describes what it does, so feel free to propose a better one.
How the current Visualizer works
The best way to get a feel for the Visualizer is probably to look at the example included in my TMK fork for the Infinity Ergodox. I have tried to comment it quite generously, because it's also meant to act as the main documentation.
But in short, it's a system for defining keyframe animations, which can be started and stopped based on what's going on with the keyboard. The frames can have any duration, including a duration of zero, which makes them instant. Frames with a duration also have a function that can be called at regular intervals, so you can do fading, for example, but also much more advanced effects.
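A keyframe as described above can be sketched roughly like this. The names and the fade example are illustrative guesses, not the actual Visualizer API:

```c
#include <stdint.h>
#include <stdbool.h>

/* A keyframe has a duration (0 = instant) and an optional update
 * function that is called at regular intervals while the frame plays,
 * receiving the elapsed time so it can fade, scroll, etc. */
typedef struct keyframe {
    uint16_t duration_ms;                 /* 0 means instant            */
    bool (*update)(uint16_t elapsed_ms);  /* NULL for static frames     */
} keyframe_t;

/* Example update function: fade a backlight level in over 500 ms. */
static uint8_t backlight_level;
static bool fade_in_update(uint16_t elapsed_ms) {
    uint16_t clamped = elapsed_ms >= 500 ? 500 : elapsed_ms;
    backlight_level  = (uint8_t)(clamped * 255 / 500);
    return true;  /* keep getting called until the frame ends */
}

static const keyframe_t fade_in_frame = { .duration_ms = 500, .update = fade_in_update };
```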
On the Infinity Ergodox, each half runs the exact same code, and the idea is that by acting on the same input, the halves will stay in sync, which is mostly the case in practice.
The input is called `visualizer_state_t` in the code, and contains things like the active layer, the keyboard suspend status and the states of the standard keyboard LEDs (caps lock, num lock and so on). The input is synchronized to other physical devices using the "Serial Link" library. On the Infinity Ergodox, the visualizer runs in its own thread, so things that take a relatively long time, like drawing to the LCD, don't have any effect on the normal keyboard loop. However, for keyboards with slower processors and less advanced visualization, it would be quite easy to do the same thing every scan loop instead.
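For readers who haven't seen the code, the input state roughly contains fields like the following. The names here are guesses based on the description above, not the actual struct definition:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical sketch of the synchronized input state. */
typedef struct {
    uint8_t active_layer;  /* currently active keymap layer        */
    bool    suspended;     /* keyboard suspend status              */
    uint8_t led_flags;     /* caps lock, num lock, scroll lock ... */
} visualizer_state_sketch_t;
```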
The problems with the current Visualizer
While the Visualizer perhaps does its job for simple things, I don't particularly like the solution. There are several problems:
1. The `visualizer_state_t` struct is defined on the library side and contains way too few things. For example, at the moment it lacks information about which keys have been pressed. Standard things could quite easily be added, but then we would need to figure out all the possible things a user might need. Furthermore, since the definition is on the library side, keymaps can't really add their own custom fields. It would be possible for keymaps to replicate the functionality of the library and define their own remote objects for the extra data, but that's not a nice way of doing it.
2. Effects can only be started in response to changes in the `visualizer_state_t` struct. This makes some things much harder than they should be; for example, just starting an animation from a keyboard macro is hard. Even if we assume that number 1 is fixed and you can add custom fields, you need to add a variable representing what you want to do and check for it in the visualizer code. Then that variable somehow has to be reset, which isn't really possible without huge hacks.

How the Effect System would work and solve those problems
Note: the numbers here do not correspond to the numbers in the problem list above. Instead I'm just listing the key technical design decisions. It also mostly covers the differences from the current Visualizer, so things not listed here will most likely stay almost the same.
Instead of one shared `visualizer_state_t`, each effect has its own parameters. These parameters are given when starting the effect, but it should also be possible to change them while the effect is playing. The effect parameters are regularly synchronized over the serial link, along with the list of which effects are playing and the starting time of each effect. If we additionally synchronize the clocks on all the devices, then all effects should be synchronized on all devices.
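A per-effect record like the one synchronized over the serial link might look roughly like this. The field names are illustrative; the key idea is that with a shared time base, every device can derive the same elapsed time for an effect and stay in sync:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical record sent over the serial link for each effect. */
typedef struct {
    uint8_t  effect_id;      /* which effect this record describes    */
    bool     playing;
    uint32_t start_time_ms;  /* in the synchronized clock domain      */
    uint8_t  params[4];      /* effect-specific parameters            */
} effect_sync_record_t;

/* With synchronized clocks, each half computes the same elapsed time
 * and therefore renders the same animation frame. */
static uint32_t effect_elapsed(const effect_sync_record_t *r, uint32_t now_ms) {
    return now_ms - r->start_time_ms;
}
```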
In the case of the Ergodox Infinity, the targets are the slave devices, and if we are using the same system for the host commands described in #692, then the host would be the PC and the target would be the attached keyboard.
You start and stop the effects and update their parameters using regular functions, which can be called at any time. These functions take all the parameters needed, so the usage would be pretty much like the current RGB light effects, for example `rgblight_effect_breathing(55)`, but these will probably be named `start`, `stop`, and `set` instead. In fact, at some point we should probably convert those functions to use the effect system. This should make it very easy to integrate into existing keymaps. It's also easy to implement empty versions of these functions when support for the hardware is disabled, or, in the case of the Ergodox, where the Infinity would support more kinds of effects than the EZ, for example. In both cases the keymap can freely call the functions. I think the linker will be smart enough to see that these functions do nothing and optimize the actual calls away, but that has to be tested.
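The calling convention sketched above might look like this. All the names are hypothetical, modeled on the existing rgblight style; the point is that a keymap can call the functions unconditionally, and on hardware without support they compile to empty inline stubs the optimizer can remove:

```c
#include <stdint.h>
#include <stdbool.h>

/* State tracked by the real implementation (illustration only). */
static bool    breathing_running;
static uint8_t breathing_speed;

#ifdef NO_LCD_EFFECTS   /* e.g. building for a board without the hardware */
/* Empty stubs: calls to these are optimized away entirely. */
static inline void lcd_effect_breathing_start(uint8_t speed) { (void)speed; }
static inline void lcd_effect_breathing_stop(void) {}
#else
void lcd_effect_breathing_start(uint8_t speed) {
    breathing_running = true;
    breathing_speed   = speed;
}
void lcd_effect_breathing_stop(void) {
    breathing_running = false;
}
#endif
```

A keymap would then simply call `lcd_effect_breathing_start(55)` from a macro or layer change, regardless of which board it is compiled for.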
Reducing the number of possible mistakes is something I don't have any solution for in C. We might be able to use some clever macros to work around things, but it would be pretty type-unsafe, with horrible casts everywhere, and many errors would most likely only be detected at runtime. Keep in mind that the rest of the system designed here puts more requirements on defining the effects than the current Visualizer does.
But I'm personally very confident in C++ and its template metaprogramming capabilities, so I'm pretty sure that I can make something that is easy to use even for people who don't know C++. In any case, C++ would only be required for declaring the full effects/animations, and maybe for the combiner described in 5. Keymaps would still call regular C functions for starting, stopping and updating the effects, as described in 3. Finally, the actual keyframe functions would also be written as regular C functions.
There should also not be any runtime overhead, as this system won't need the C++ runtime library. In fact, the memory requirements would probably be lower, since using C++ would allow me to reserve exactly the amount of memory needed, for both ROM and RAM, statically at compile time.
Once the C++ implementation is done, we could have another look and see if it could be translated to C, perhaps with fewer features, so people have a choice to use that if needed.
The drawbacks of uGFX
The current visualizer uses the uGFX library for drawing to the LCD screen. It also exposes the LEDs as a virtual screen that lets you access individual pixels, but also call higher-level things like circle-drawing functions, or even text drawing.
I have found problems with uGFX, however: it's quite bloated and slow. Many of the drawing functions repeatedly call things through function pointers for each pixel, and those low-level pixel-drawing functions also require branching and calculation for every pixel that is drawn.
Another problem is that it doesn't handle different colour spaces very well. For example, on the Ergodox Infinity the LCD screen is black and white, the LEDs are represented as a grayscale image, the backlight is RGB, and so on.
I also think we really should use a framebuffer model, at least logically, since many effects need to read from the screen, and doing this in memory rather than going to the hardware would make things considerably faster. Yes, it will use more memory, but with modern microcontrollers like the Teensies I don't think that's a problem. And for keyboards with smaller processors, using just LEDs shouldn't add much memory: 100 LEDs with 8-bit grayscale is just 100 bytes. Finally, you don't have to use the image abstraction layer; you could also access the hardware directly from the keyframe effects.
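A minimal version of that logical framebuffer could look like this. The dimensions and names are illustrative; the point is that effects can both read and write pixels in RAM, with a separate flush to the hardware, and a 20x5 grid of 8-bit grayscale LEDs costs exactly 100 bytes:

```c
#include <stdint.h>

/* Hypothetical in-RAM grayscale framebuffer for a 20x5 LED grid. */
#define FB_WIDTH  20
#define FB_HEIGHT 5

static uint8_t framebuffer[FB_HEIGHT][FB_WIDTH];  /* 100 bytes */

static inline void    fb_set(uint8_t x, uint8_t y, uint8_t v) { framebuffer[y][x] = v; }
static inline uint8_t fb_get(uint8_t x, uint8_t y)            { return framebuffer[y][x]; }

/* Example of an effect that needs read access: halve every pixel's
 * brightness. This is trivial in RAM but slow or impossible if the
 * hardware can't be read back. */
void fb_dim_half(void) {
    for (uint8_t y = 0; y < FB_HEIGHT; y++)
        for (uint8_t x = 0; x < FB_WIDTH; x++)
            framebuffer[y][x] >>= 1;
}
```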
Unfortunately it's very hard to find alternatives; I have spent quite a lot of time searching and have only really found one. You would think it would be easy to find a software rendering library that outputs to RAM, but it isn't. There are completely bloated ones like Cairo, but it wouldn't be easy to integrate, would use too much memory, and uses floating-point math, which is a very bad idea on the microcontrollers we are using.
So the only alternative I have found is uGUI. It's not perfect either: it can't use different colour formats per display, its draw-pixel function is not inlined, which makes it slower than it needs to be, and I don't see any functions for reading pixels.
But since uGUI is so simple, those problems would be quite easy to solve by modifying the code, perhaps by compiling the same code multiple times with different macro definitions for different pixel formats. C++ and templates could also be used for that, but then we would push the C++ requirement down to the keyframe functions as well.
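The macro approach can be sketched like this: the same drawing routine is stamped out once per pixel format, each instantiation getting its own element type and function name. In a real build you would include the shared drawing source twice with different definitions; the names here are illustrative:

```c
#include <stdint.h>

/* Stamp out a fill routine for a given pixel type. */
#define DEFINE_FILL(NAME, PIXEL_T)                                  \
    static void NAME(PIXEL_T *buf, uint16_t count, PIXEL_T value) { \
        for (uint16_t i = 0; i < count; i++) buf[i] = value;        \
    }

DEFINE_FILL(fill_gray8,  uint8_t)   /* 8-bit grayscale LED buffer   */
DEFINE_FILL(fill_rgb565, uint16_t)  /* 16-bit RGB565 LCD buffer     */
```

This keeps the inner loop free of per-pixel function-pointer calls and branching, which is exactly the overhead complained about in uGFX above.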
I'm very much open to suggestions for other libraries as well.
What do you think?
I realise that this is quite a long proposal, but I still hope that some of you have time to read and comment on it. Note that I have left out a lot of details, for two reasons: to keep this reasonably short, and to avoid tying down the implementation too much. I like to let the final technical design take shape as I build the system through unit tests.
I think I will start working on this as soon as tomorrow, but there's no need to hurry with the comments; things can always change until we have the final implementation.