red / REP — Red Enhancement Process
BSD 3-Clause "New" or "Revised" License

Thought dump on events scheduling #157

Open hiiamboris opened 10 months ago

hiiamboris commented 10 months ago

Coming from https://github.com/red/red/issues/4881 , https://github.com/red/red/issues/4206 and my experimental scheduler to work around these.

Q: Why do we need a custom event scheduler? A: Because it's the only way to make our GUI responsive in a cross-platform way (see https://github.com/red/red/issues/4881 for some platform quirks). What I'm proposing: we don't process each incoming event right away, but in a loop: [fetch the remaining application event queue, process one event, repeat...]. Then we have consistent control over event prioritization.
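A minimal Python sketch of that loop (all names here — `fetch_os_events`, `process` — are made up for illustration, not Red/View API):

```python
# Hypothetical sketch of the proposed loop: drain the OS queue into our own
# deque first, then process one event per iteration, so the prioritization
# logic always sees the pending events ahead of the current one.
from collections import deque

queue = deque()

def scheduler_step(fetch_os_events, process):
    """One iteration: pull everything the OS has pending, then handle one event."""
    queue.extend(fetch_os_events())   # fetch the remaining application event queue
    if queue:
        process(queue.popleft())      # process a single event, then repeat

# usage: while running: scheduler_step(fetch, dispatch)
```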

Types of events:

  1. Input
     1.1. Keyboard
     1.2. Pointer (all events with pointing device offsets) - over, wheel, drag, clicks
     1.3. Menu access
     1.4. Window resizing/moving
  2. Timer
  3. Drawing
  4. Synthetic - focus/unfocus, select, change, close, maybe click variants
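For illustration, the taxonomy above could be modeled roughly like this (a Python sketch; the type names are assumptions, not the actual View event names):

```python
# Sketch of the event taxonomy: four top-level classes, with pointer events
# (1.2) singled out because they carry pointing-device offsets.
from enum import Enum

class EventClass(Enum):
    INPUT = 1       # keyboard, pointer, menu access, sizing/moving
    TIMER = 2
    DRAWING = 3
    SYNTHETIC = 4   # focus/unfocus, select, change, close, ...

POINTER_TYPES = {"over", "wheel", "drag", "down", "up"}  # 1.2

def classify(event_type: str) -> EventClass:
    if event_type == "time":
        return EventClass.TIMER
    if event_type == "drawing":
        return EventClass.DRAWING
    if event_type in {"focus", "unfocus", "select", "change", "close"}:
        return EventClass.SYNTHETIC
    return EventClass.INPUT
```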

Event density varies by platform, but we can expect roughly these rates:

With such event rates, and with OSes having very simplistic scheduling logic, it's easy for an interpreted program to block itself, since it's very common for computations to take a 10-100ms time slice. Drawing a single complex layout (or a high-ppi image) may even take over a second in the worst cases.

Considerations:

Can't be built on top of the current do-events

Simply because there's currently no way in Red to just keep an event and process it later: we can only process or drop it. At the time of processing there's no knowledge of the queue ahead.

It's also unclear how the OS does its part: is it possible to stop the OS from e.g. activating a clicked button or entering a char into a field, and later let it do that when we're ready? If so, is it possible in a cross-platform way?

Also see https://github.com/red/red/issues/5377

External vs synthesized

External:

Synthetic(4) events are by their nature synthesized, whether by us or by the OS.

We may want the ability to synthesize any of these events on our own, simply by putting them into the queue. If so, such events must be put directly after the currently processed event, not at the end of the queue. For example, if a down event synthesizes a new click or dbl-click event, we want it next, not after some other key or over events. Or if the Tab key generates a focus event, the next queued key event must go to the newly focused face, thus after focus gets processed.
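In other words, a synthesized event is inserted right after the cursor, not appended. A tiny sketch (plain Python list, illustrative names):

```python
# Sketch: synthesized events go directly after the event being processed,
# not to the tail of the queue. `cursor` points just past the current event.
def synthesize(queue, cursor, new_event):
    """Insert new_event so it is processed next, before older queued events."""
    queue.insert(cursor, new_event)
    return queue

# e.g. processing 'down' at index 0 (cursor = 1) synthesizes 'click':
# synthesize(["down", "key", "over"], 1, "click")
# -> ["down", "click", "key", "over"]
```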

Grouping vs dropping, and event order

To keep up when swamped with events, we have to skip some.

By 'dropping' I mean deciding to skip an event without having any info about further pending events. It is only correct to 'drop' timer events, because we know there will be more, as long as the timer is periodic. Each timer's frequency must be considered separately: we are likely to drop fast timers (e.g. those at 100fps) but not slow ones (e.g. once per minute). Which means we must know each timer's rate.

By 'grouping' I mean looking ahead in the queue to see if an event of the same type is pending; if so, the current event may be skipped. Grouping requires an event queue, while dropping requires only event history.

Some pointer(1.2) events (over, wheel, drag), sizing(1.4) and drawing(3) events may be grouped but not dropped, because we always want to know the final event in such a group: e.g. not to miss the away? condition, not to have visible misalignments on a static screen (when no more such events arrive for a second or more), and not to forget to draw the latest GUI state while possibly skipping intermediates.

When grouping, we cannot disturb the event order: if we have over key over in the queue, we cannot skip the first over event, because then during key processing the pointer location would be wrong (and the handler may want to use it). We can only group an ordered event with the next ordered event of the same type, e.g. over time over, because time is unordered.

Timer(2) and drawing(3) are the only unordered event types. All other events cannot be looked past while grouping.

Synchronous vs asynchronous

When we generate an event, do we just place it in the queue or expect immediate processing?

Take set-focus as an example. Do we expect on-focus and on-unfocus events to be evaluated before set-focus returns (sync) or not (async)?

In sync mode:

In async mode:

When focusing is external (e.g. by clicks), the OS may(?) already draw the focus decoration frame while our event processing has not yet reached that state, so in both cases we will have a slight discrepancy between what the user momentarily sees as the focused face and which face really gets the keys.

Drawing is another related tricky area:

Other synthesized events may be: on-click (coming from on-down), on-enter (coming from on-key), on-change (when triggered by Red code).

Another tricky consideration: some OS functions, when called, may immediately call the window function with e.g. a drawing event and expect drawing to be finished, not postponed. They may also rely on the results of such drawing (what is invalidated and what isn't, and so on).

Calls to APIs like GetKeyState return keyboard state based on what the window function has processed so far, so if we're queuing events and not processing them immediately, we can't rely on such functions outside the queuing subroutine and have to maintain our own keyboard state array (if it's needed).
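So the scheduler would have to mirror key state itself, updating it as key events are dequeued rather than asking the OS. A hypothetical sketch (event type names are made up):

```python
# Sketch: a mirrored keyboard state, updated as key events are *processed*
# by our scheduler, since GetKeyState-style APIs only reflect what the
# window procedure has already seen.
key_down = {}

def track_key(event_type, key):
    """Update the mirrored state from a dequeued key event."""
    if event_type == "key-down":
        key_down[key] = True
    elif event_type == "key-up":
        key_down[key] = False

def is_down(key):
    """Our replacement for querying the OS keyboard state."""
    return key_down.get(key, False)
```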

Accumulation

The wheel event carries not the state itself but a change in state, so when the current event is grouped, its offset must be added to the next event.
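A sketch of that accumulation (the event shape — a dict with `type` and `delta` — is an assumption for illustration):

```python
# Sketch: when a wheel event is grouped into the next one, its delta must be
# carried over so no scroll distance is lost.
def group_wheel(current, nxt):
    """Fold the current wheel event's delta into the next same-type event."""
    assert current["type"] == nxt["type"] == "wheel"
    nxt["delta"] += current["delta"]   # wheel carries a change, not a state
    return nxt

# group_wheel({"type": "wheel", "delta": 2}, {"type": "wheel", "delta": -1})
# leaves a single pending wheel event with delta 1
```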

Prioritizing

How do we decide whether we should group (or drop) the current event or process it?

As a realistic foundation for event prioritization we can define an acceptable delay norm for each event class: timer, drawing, pointer-related (e.g. 500ms, 200ms, 100ms respectively), so that the predicted UX harm from delaying an event depends on its class. Such norms must be configurable so each app can tune them to its specific needs. Another option is to measure the time it takes to process each event in each class and take an exponentially weighted average as the delay norm, thus automatically processing more fast events and fewer slow ones (such measurement is complicated a bit if some events are processed within other events).
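The self-tuning option could look roughly like this (a Python sketch; the 0.1 smoothing factor is an arbitrary assumption):

```python
# Sketch: exponential moving average of per-class processing time, used as
# that class's delay norm. First observation seeds the average.
ALPHA = 0.1   # arbitrary smoothing factor

def update_norm(norms, event_class, elapsed_ms):
    """Fold one measured processing time into the class's delay norm (ms)."""
    prev = norms.get(event_class, elapsed_ms)
    norms[event_class] = (1 - ALPHA) * prev + ALPHA * elapsed_ms
    return norms[event_class]
```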

We want to look ahead in the queue to see if there's another event to group with, and behind to see when an event of this type (or of this class?) was last processed.

Since we can only group ordered events across unordered, the only possible groupable event queues are:

  1. ordered ordered(same type) ...
  2. unordered unordered(same type) ...
  3. ordered unordered+ ordered(same type) ...
  4. unordered ordered+ unordered(same type) ...
  5. unordered unordered+(other type) unordered(same type) ...
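My reading of the five shapes above boils down to a small look-ahead predicate: an unordered event may look past anything to find its own type, while an ordered one may only look past unordered events. A sketch, with event types as plain strings and `time`/`drawing` standing for the unordered classes:

```python
# Sketch of the grouping rule over the five queue shapes: an event may be
# grouped (skipped) only if an event of the same type appears later, with
# nothing but unordered events between two ordered events.
UNORDERED = {"time", "drawing"}

def can_group(queue, index):
    """True if queue[index] can be skipped in favor of a later same-type event."""
    current = queue[index]
    for later in queue[index + 1:]:
        if later == current:
            return True          # same type ahead: current may be grouped
        if current not in UNORDERED and later not in UNORDERED:
            return False         # an ordered event can't look past another ordered one
    return False
```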

Queues (1) and (2) are unconditionally groupable, because the next processed event's type is determined, and we know we're late to process both events, so we process only one.

Queues (3)-(5) have competing event classes: the current class class1 and the next other class in the queue, class2. We may skip an event that "can wait" if there's an "urgent" event ahead, but we don't skip an "urgent" event if the one ahead can wait.

The simplest prioritization model would track the time elapsed since the last event of each class, and compare the delay-to-norm ratios for class1 and class2: if ratio1 >= ratio2 the event is processed, otherwise it is grouped. This should work 99% of the time in practice.
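That simple model, sketched (the `now`/`last_seen`/`norms` bookkeeping is illustrative; all times in ms):

```python
# Sketch of the simple model: compare delay-to-norm ratios of the current
# event's class (class1) and the next competing class in the queue (class2).
def should_process(now, last_seen, norms, class1, class2):
    """Process the current event iff its class is at least as overdue as the competitor's."""
    ratio1 = (now - last_seen[class1]) / norms[class1]
    ratio2 = (now - last_seen[class2]) / norms[class2]
    return ratio1 >= ratio2   # otherwise group the current event
```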

But the most complex case in our model is a queue like over time drawing over time drawing ... where all 3 classes are interleaved (unlikely but possible, esp. if we synthesize events). With e.g. delay norms over=100 time=1000 drawing=50 and an equal event processing time of 100ms, we can do 10 events per second, which we would like to allocate as 0-1 for time, 3 for over, and 6-7 for drawing. But the simple model will likely give us an equal 4-5 for both over and drawing, because over doesn't "see" the drawing class ahead of it. For this case we may want to extend the model to compare delay-to-norm ratios across all three classes each time.

This still works only for the asynchronous model, where we finish the previous event before processing the next. I've no idea how to properly prioritize events that are reentrant. Maybe we could just turn a blind eye to inner events when we have them.

Per-app, per-window, per-face, per-space queues

Though we have a central event receiver (the window function), we don't want to skip events in one window in favor of events in another window, otherwise their performance may become skewed. We want fair CPU time distribution between windows, faces, spaces. If only one has high load - fine, let it consume most of the time; but if there are more significant time consumers, we want them to be equal. This gets trickier with spaces support, as a space is not a native component View knows about, but a logical one.

So ideally we want to be able to create more event queues and post synthesized events there for automatic prioritization. E.g. a face gets its own queue out of the box, then produces new events for the queues it creates for each space inside it. Then we group events only inside each single queue, but choose which queue to process next either on a round-robin basis or by comparing which queue's next event is the most urgent.
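Choosing the next queue by urgency could be sketched like this (queue and event shapes are assumptions; urgency reuses the delay-to-norm idea from above):

```python
# Sketch: among several per-face/per-space queues, pick the one whose head
# event is the most overdue relative to its class's delay norm.
def pick_queue(queues, now, norms):
    """Return the index of the non-empty queue with the most urgent head event."""
    best, best_urgency = None, -1.0
    for i, q in enumerate(queues):
        if not q:
            continue
        head = q[0]                               # (event_class, enqueued_at_ms)
        urgency = (now - head[1]) / norms[head[0]]
        if urgency > best_urgency:
            best, best_urgency = i, urgency
    return best
```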

This also relates to possible window-less timers we may want in Red.

Capturing, bubbling

Another angle to consider: if an event during its lifetime visits multiple logical or physical widgets, each having its own event queue, how does this affect our grouping logic?

To complicate things further, not all events belong strictly to a single widget. For example, when we click to select some text or list/grid items and we move the pointer out of the viewport of that text/list/grid, we want the viewport to start scrolling in the pointer's direction, then the outer viewport, and so on. So both the child and some or all of its parents here react to the same dragging event. If we process such an event for one widget, we must also process it for the other widgets (and asap), or it will become a mess to track.

And if an event is blocked on capturing stage, should it be accounted for by the priority algorithm?

Esoteric event pipelines

If I understand the Windows design correctly, sizing(1.4) and menu(1.3) events go through a very tricky pipeline: we process an initial event and then call DefWindowProc, which may block normal event processing for a very long time. DefWindowProc seems to use some undocumented kernel internals to draw the non-client frame for us, and calls our event function all the time with only specific (sizing/moving/menu) events and timers (maybe also drawing?), ignoring the others (e.g. keys seem to be ignored, unless it's a View issue).

Problems with this:

I'm not familiar with quirks of other OSes, there may be more special cases.

greggirwin commented 10 months ago

Great evaluation and thoughts. Performance-wise, we should consider core/thread assignment and see how to leverage that when possible. This seems like a nice fit for a state machine, tracking which events are grouped or discarded, and when, alongside their consumption. I remember, long ago, reading about how QNX's real-time Photon system worked, but the details are long gone from my brain. With a real-time/interactive view of things, it is expected that some things will have to be given up in order to meet time-slicing requirements.