hiiamboris opened 10 months ago
Great evaluation and thoughts. Performance-wise, we should consider core/thread assignment and look to leverage it when possible. This seems like a nice fit for a state machine, tracking what events are grouped or discarded, and when, alongside their consumption. I remember, long ago, reading about how QNX's real-time Photon system worked, but the details are long gone from my brain. With a real-time/interactive view of things, it is expected that some things will have to be given up in order to meet time-slicing requirements.
Coming from https://github.com/red/red/issues/4881 , https://github.com/red/red/issues/4206 and my experimental scheduler to work around these.
Q: Why do we need a custom event scheduler?
A: Because it's the only way to make our GUI responsive in a cross-platform way (see https://github.com/red/red/issues/4881 for some platform quirks).

What I'm proposing: we don't process each incoming event right away, but run a loop: [fetch the remaining application event queue, process one event, repeat...]. Then we will have consistent control over event prioritization.
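As a sketch of the proposed loop (Python pseudocode; `Scheduler`, `fetch_os_events` and `dispatch` are invented names, not an existing Red API):

```python
from collections import deque

class Scheduler:
    def __init__(self, fetch_os_events):
        self.fetch = fetch_os_events   # callable returning currently pending OS events
        self.queue = deque()           # our own queue: full knowledge of what's ahead

    def step(self, dispatch):
        """Drain everything the OS has pending, then process ONE event.

        Because the whole queue is fetched before dispatching, a prioritizer
        could decide here to group, drop or reorder events."""
        self.queue.extend(self.fetch())
        if self.queue:
            dispatch(self.queue.popleft())
            return True
        return False
```

Here events are still processed FIFO; the rest of this proposal is about what smarter policy `step` should apply instead of a plain `popleft`.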
Types of events:
1. User input
   1.1. Key presses
   1.2. Pointer: `over`, `wheel`, `drag`, clicks
   1.3. Menu access
   1.4. Window resizing/moving
2. Timer
3. Drawing
4. Synthetic: `focus`/`unfocus`, `select`, `change`, `close`, maybe `click` variants

Event density varies by platform, but we can expect high rates for `on-key` and `on-key-down` events, and even higher rates for `over`, `wheel`, `drag`, `resizing` and `moving` events (while wheel itself does not trigger often, touch scrolling may simulate a lot of wheel events); probably the same for touch events (`pan`, `rotate`, `zoom`) when they become supported.

With such event rates, and with OSes having very simplistic scheduling logic, it's easy for an interpreted program to block itself, since it's very common for computations to take a 10-100ms time slice. Drawing a single complex layout (or a high-ppi image) may even take over a second in worst cases.
Considerations:
Can't be built on top of the current `do-events`, simply because there is currently no way in Red to keep an event and process it later: we can only process or drop it. At the time of processing there's no knowledge about the queue ahead.
It's also unclear how the OS does its part: is it possible to stop the OS from e.g. activating a clicked button or entering a char into a field, and later let it do that when we're ready? If so, is it possible in a cross-platform way?
Also see https://github.com/red/red/issues/5377
External vs synthesized
External: user input(1), timer(2) and drawing(3) events arrive from outside. Synthetic(4) events are by their nature synthesized, whether by us or by the OS.
We may want the ability to synthesize any of these events on our own, simply by putting them into the queue. If so, such events must be put directly after the currently processed event, not at the end of the queue. For example, if a `down` event synthesizes a new `click` or `dbl-click` event, we want it next, not after some other `key` or `over` events. Or if the Tab key generates a `focus` event, the next queued key event must go into the newly focused face, thus after `focus` gets processed.

Grouping vs dropping, and event order
To keep up when swamped with events, we have to skip some.
By 'dropping' I mean deciding to skip an event without having info about further pending events. It is only correct to 'drop' timer events, because we know there will be more, as long as the timer is periodic. Each timer's frequency must be considered separately: we are likely to drop fast timers (e.g. those at 100fps) but not slow ones (e.g. once per minute). This means we must know each event timer's rate.
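A minimal sketch of that dropping rule (hypothetical heuristic in Python, not Red code):

```python
def may_drop_timer(period_ms, acceptable_delay_ms=500):
    """Dropping one tick of a periodic timer delays its handler by at most
    one period, so it is only safe for timers faster than the acceptable delay."""
    return period_ms <= acceptable_delay_ms

# a 100fps timer (10ms period) may be dropped when we're behind;
# a once-per-minute timer must not be, or its handler runs a minute late
```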
By 'grouping' I mean looking ahead in the queue to see if an event of the same type is pending; then the current event may be skipped. Grouping requires an event queue, while dropping requires only event history.
Some pointer(1.2) (`over`, `wheel`, `drag`), sizing(1.4) and drawing(3) events may be grouped, but not dropped, because we always want to know the final point in the group of such events: e.g. to not miss an `away?` condition, to not have visible misalignments on a static screen (when no more such events come in for a second or more), and to not forget to draw the latest GUI state while possibly skipping intermediates.

When grouping, we cannot disturb the event order: if we have an `over key over` queue, we cannot skip the first `over` event, because then during `key` processing it will have a wrong pointer location (which it may want to use). We can only group an ordered event with the next ordered event across unordered ones, e.g. in `over time over`, because `time` is unordered. Time(2) and drawing(3) are the only unordered event types. All other events cannot be looked past while grouping.
Synchronous vs asynchronous
When we generate an event, do we just place it in the queue or expect immediate processing?
Take `set-focus` as an example. Do we expect `on-focus` and `on-unfocus` events to be evaluated before `set-focus` returns (sync) or not (async)?

In sync mode: evaluation is reentrant and may loop (e.g. `on-focus` moves focus into another face, which moves it back).
In async mode: when `set-focus` returns and we check the focus, it will still be on the old face (until the `on-focus` event is evaluated), which may limit and/or complicate program logic.

When focusing is external (e.g. by clicks), the OS may(?) already draw the decoration frame while our event processing has not yet reached that state, so in both cases we will have a slight discrepancy between what the user momentarily sees as the focused face and which face really gets the keys.
Drawing is another related tricky area:
Other synthesized events may be: `on-click` (coming from `on-down`), `on-enter` (coming from `on-key`), `on-change` (when triggered by Red code).

Another tricky consideration: some OS functions, when called, may immediately call the window function with e.g. a drawing event, and expect drawing to be finished, not postponed. They may also rely on the results of such drawing (what is invalidated and what isn't, and so on).
Calls to APIs like `GetKeyState` will return keyboard state based on what the window function has processed so far, so if we're queuing and not processing events immediately, we can't rely on such functions outside the queuing subroutine and have to maintain our own keyboard state array (if it's needed).

Accumulation

A `wheel` event carries not the state itself but a change in the state, so when the current event is grouped, its offset must be added to the next event.

Prioritizing
How do we decide whether we should group (or drop) the current event or process it?
As a realistic foundation for event prioritization we can define acceptable delay norms for each event class: timer, drawing, pointer-related (e.g. 500ms, 200ms, 100ms respectively), so that the predicted UX harm from delaying an event depends on its class. Such norms must be configurable so each app can tune them for its specific needs. Another option is to measure the time it takes to process each event in each class and take an exponentially-weighted average time as the delay norm, thus automatically processing more fast events and fewer slow events (such measurement is complicated a bit if some events are processed within other events).
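The measurement-based option might look like this (a sketch; `alpha` and the initial value are arbitrary assumptions):

```python
class ClassStats:
    """Exponentially-weighted average of per-class event processing time,
    usable as an adaptive delay norm."""
    def __init__(self, alpha=0.2, initial_ms=50.0):
        self.alpha = alpha        # weight of the newest measurement
        self.avg_ms = initial_ms

    def record(self, elapsed_ms):
        self.avg_ms += self.alpha * (elapsed_ms - self.avg_ms)
```

Classes whose handlers keep getting slower then automatically earn a larger delay norm, i.e. get processed less often.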
We want to look ahead in the queue to see if there's another event to group with, and behind to see when an event of this type (or of this class?) was last processed.
Since we can only group ordered events across unordered ones, the only possible groupable event queues are:
1. `ordered ordered(same type) ...`
2. `unordered unordered(same type) ...`
3. `ordered unordered+ ordered(same type) ...`
4. `unordered ordered+ unordered(same type) ...`
5. `unordered unordered+(other type) unordered(same type) ...`

Queues (1) and (2) are unconditionally groupable, because the next processed event type is determined and we know we're late to process both events, so we process only one.
Queues (3)-(5) have competing event classes: the current class `class1` and the next other class in the queue, `class2`. We may skip an event that "can wait" if there's an "urgent" event ahead, but we don't skip an "urgent" event if an event that can wait is ahead.

The simplest prioritization model would track the time elapsed since the last event of each class, and compare delay-to-norm ratios for `class1` and `class2`: if `ratio1 >= ratio2` the event is processed, otherwise it is grouped. This should work 99% of the time in practice.

But the most complex case in our model is a queue like `over time drawing over time drawing ...` where all 3 classes are interleaved (unlikely but possible, esp. if we synthesize events). With e.g. delay norms `over=100 time=1000 drawing=50` and an equal event processing time of `100ms`, we can do 10 events per second, which we would like to allocate as `0-1` for `time`, `3` for `over`, `6-7` for `drawing`. The simple model will likely give an equal `4-5` to both `over` and `drawing`, because `over` doesn't "see" the `drawing` class ahead. For this case we may want to extend the model to compare delay-to-norm ratios in all three classes each time.

This still works only for the asynchronous model, where we finish the previous event before processing the next. I've no idea how to properly prioritize events that are reentrant. Maybe we could just turn a blind eye to inner events if we have them.
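The extended model could be sketched as comparing delay-to-norm ratios across all pending classes at once (the norm values are the examples from the text; the function itself is an invented illustration):

```python
NORMS_MS = {'time': 500, 'drawing': 200, 'pointer': 100}  # acceptable delay per class

def pick_class(last_processed_ms, now_ms, pending_classes, norms=NORMS_MS):
    """Return the pending class most overdue relative to its delay norm."""
    def ratio(cls):
        return (now_ms - last_processed_ms.get(cls, 0)) / norms[cls]
    return max(pending_classes, key=ratio)

# a timer not served for a full second is more urgent than a pointer
# event served 50ms ago, despite the timer's larger norm:
# pick_class({'time': 0, 'pointer': 950}, 1000, ['time', 'pointer'])  -> 'time'
```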
Per-app, per-window, per-face, per-space queues
Though we have a central event receiver (window function), we don't want to skip events in one window in favor of events in another window, otherwise their performance may become skewed. We want fair CPU time distribution between windows, faces, spaces. If only one has high load - fine, let it consume most of the time, but if there are more significant time consumers, we want them to be equal. This gets trickier with spaces support, as it's not a native component View knows about, but a logical one.
So ideally we want to be able to create more event queues and post synthesized events there for automatic prioritization. E.g. face gets own queue out of the box, then produces new events for the queues it created for each space in it. Then we group events only inside each single queue, but choose which queue to process right now either on a round-robin basis or by comparing which queue has the most urgent event next.
This also relates to possible window-less timers we may want in Red.
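The queue-of-queues idea above might be sketched like this (owners, deadlines and all names are assumptions, not an existing API):

```python
from collections import deque

class MultiQueue:
    def __init__(self):
        self.queues = {}   # owner (window/face/space) -> deque of (deadline_ms, event)

    def post(self, owner, event, deadline_ms):
        self.queues.setdefault(owner, deque()).append((deadline_ms, event))

    def next_event(self):
        """Pop from the non-empty queue whose head has the earliest deadline
        (round-robin between queues would be the simpler alternative)."""
        live = {o: q for o, q in self.queues.items() if q}
        if not live:
            return None
        owner = min(live, key=lambda o: live[o][0][0])
        return owner, live[owner].popleft()[1]
```

Grouping would then happen only within each single deque, while fairness between windows/faces/spaces is handled by the selection step.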
Capturing, bubbling
Another angle to consider: if an event during its lifetime visits multiple logical or physical widgets, each having its own event queue, how does this affect our grouping logic?
To complicate further, not all events belong strictly to a single widget. For example, when we click to select some text or list/grid items, and we move the pointer out of the viewport of that text/list/grid, we want viewport to start scrolling in pointer direction, then outer viewport, and so on. So both child and some or all of its parents here react to the same dragging event. If we process such event for one widget we must also process it for other widgets (and asap), or it will become a mess to track.
And if an event is blocked on capturing stage, should it be accounted for by the priority algorithm?
Esoteric event pipelines
If I understand Windows design correctly, sizing(1.4) and menu(1.3) events go through a very tricky pipeline: we process an initial event and then call DefWindowProc which may block normal event processing for a very long time. DefWindowProc seems to use some undocumented kernel internals to draw non-client frame for us, and calls our event function all the time with only specific (sizing/moving/menu) event and timer (maybe also drawing?), ignoring the others (e.g. keys seem to be ignored, unless it's a View issue).
Problems with this: if the app logic is built around `do-events/no-wait`, this single `do-events/no-wait` call may stop the loop for many seconds (stopping any background task processing), while actors and global handlers will still be evaluated. We want, if possible, to return immediately and continue the loop, letting it process incoming sizing/moving/menu events normally.

I'm not familiar with the quirks of other OSes; there may be more special cases.