Closed by dhalbert 4 years ago
For a different and interesting approach to asynchronous processing, see @bboser's https://github.com/bboser/eventio for a highly constrained way of using async / await, especially the README and https://github.com/bboser/eventio/tree/master/doc. Perhaps some combination of these makes sense.
I'm unqualified at this point to talk about implementation, but from an end user perspective I like the idea of this abstraction quite a bit. It feels both like a way to shortcut some ad hoc polling loop logic that I suspect people duplicate a lot (and often badly), and also something that could be relatively friendly to people who came up on high-level languages in other contexts.
People aren't going to stop wanting interrupts / parallelism, but this answers a lot of practical use cases.
I like event queues and I agree they are quite easy to understand and use, however, I'd like to point out a couple of down sides for them, so that we have more to discuss.
That's all I can think of at the moment.
Hm, I don't fully understand why such a MessageQueue model should be easier to understand than callbacks. Maybe that's because I am used to callbacks ;-) What is so special about your target customers that you think they do not understand callbacks?
I think you have to invest much more brainpower in managing a couple of MessageQueues for different types of events (ethernet, i2c, timer, exceptions, spi, .....), or one MessageQueue where you have to distinguish between different types of events, than in implementing one callback for each type of event and passing it to a built-in callback handler.
def byte_reader(byte):
    deal_with_the(byte)

uart = busio.UART(board.TX, board.RX, baudrate=115200, byte_reader=byte_reader)
I like the idea of message queues but I'm not convinced that they're any easier to understand than interrupt handlers. Rather I think that conceptually interrupt handlers/callbacks are relatively easy to understand, but understanding how to work within their constraints is where it gets a bit more challenging. Message queues are a good way of implementing the "get the operable data out of the handler and work on it in the main loop" solution to the constraints of interrupt handlers, but as @deshipu pointed out, there are still good reasons to need to put some logic in the handler. Maybe both?
Similarly I like how eventio works but I think it's even more confusing than understanding and learning to work with the constraints of interrupt handlers. With that in mind, it's tackling concurrency in a way that I think might be more relatable to someone who came to concurrency from the "why can't I blink two leds at once" angle.
One thing I was wondering about is what a bouncy button would do to a message queue. Ironically I think overflow might actually be somewhat useful in this case, as if the queue was short enough you'd possibly lose the events for a number of bounces (but not all of them unless your queue was len=1; I'll have to ponder this one further). With a longer queue you could easily write a debouncer by looking for a time delta between events above a threshold.
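For instance, a queue-based debouncer along those lines might look like this sketch (assuming events arrive as (timestamp, value) pairs; the `debounce` function and the threshold value are invented for illustration):

```python
from collections import deque

def debounce(events, threshold=0.02):
    """Filter a queue of (timestamp, value) button events, keeping only
    edges separated from the previously accepted edge by at least
    `threshold` seconds. Bounces arrive as rapid clusters, so every edge
    inside a cluster after its first one is dropped."""
    accepted = []
    last_t = None
    while events:
        t, value = events.popleft()
        if last_t is None or t - last_t >= threshold:
            accepted.append((t, value))
            last_t = t
    return accepted

# A press with contact bounce: a burst of edges within ~5 ms,
# then a clean release 100 ms later.
q = deque([(0.000, 1), (0.001, 0), (0.002, 1), (0.004, 0), (0.005, 1),
           (0.105, 0)])
print(debounce(q))  # → [(0.0, 1), (0.105, 0)]
```

With a len=1 queue only the first edge of each burst would ever be seen, which is exactly the "useful overflow" behavior described above.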
No matter how you slice it, concurrency is a step beyond the basics of programming and I don't think any particular approach is going to allow us to avoid that. It seems to me that we're being a bit focused on choosing a solution to a set of requirements that we don't have a firm grasp on yet. I think it's worth taking the time to understand who the users of this solution are and what their requirements are.
See #1415 for an async/await example.
What is so special with your target customers, that you think, they do not understand callbacks?
The problem is not with the callback mechanism itself, but in the constraint that MicroPython has that you can't allocate memory inside a callback. This is made much more complex than necessary by the fact that Python is a high level language with automatic memory management, that lets you forget about memory allocation most of the time, so it's not really obvious what operations can be used in a callback, and how to work around the ones that can't.
One solution would be to enable MICROPY_ENABLE_SCHEDULER and only allow soft IRQs, running the callback inline with the VM. This would prevent people from shooting themselves in the foot.
Refs:
In my implementation of interrupts I’ve added a boolean “fast” that defaults to false and controls running the handler via the scheduler (no constraints on allocation) or directly, in cases where latency is critical.
I am also considering running the gc automatically in the eventio loop but have not yet considered all potential side effects. Likewise, permitting interrupt handlers is straightforward in eventio, for cases when they are needed.
Bernhard
On Sat, Dec 22, 2018 at 02:27 Noralf Trønnes notifications@github.com wrote:
One solution would be to enable MICROPY_ENABLE_SCHEDULER and only allow soft IRQ's, running the callback inline with the VM. This would prevent people from shooting themselves in the foot.
Refs:
- stm32:Handle_EXTI_Irq http://elixir.tronnes.org/circuitpython/4.0.0-alpha.4/source/ports/stm32/extint.c#L520
- esp32:machine_timer_isr http://elixir.tronnes.org/circuitpython/4.0.0-alpha.4/source/ports/esp32/machine_timer.c#L99
- py/vm.c http://elixir.tronnes.org/circuitpython/4.0.0-alpha.4/source/py/vm.c#L1297
Thank you all for your thoughts and trials on this. I'll follow up in the near future but am deep in Bluetooth at the moment. The soft interrupts idea and the simplified event loop / async / await stuff is very interesting. I think we can make some progress on this.
From my experience with what I think is the CircuitPython audience (I teach technology to designers), I don't think message queues are easier to understand than other approaches, and are probably harder in many cases. As @siddacious says, concurrency takes a while for newcomers to wrap their heads around no matter what the method.
I also think it's important to distinguish between event driven needs and parallelism. In my experience, the most common need amongst my students is doing multiple things at once, e.g. fading two LEDs at different rates, and perhaps doing this while polling a distance sensor. This requirement is different from the straw man example above.
Some possible directions:
I have done some more thinking and studying on this topic. In particular, I've read (well, read the front part; skimmed a lot more) Using Asyncio in Python 3, and I've read about curio and trio (github), which is even simpler than curio, started by @njsmith. Trio emphasizes "structured concurrency". I also re-reviewed @bboser's https://github.com/bboser/eventio, and @notro's example of a simple async/await system in #1415. This is also reminiscent of MakeCode's multiple top-level loops.
Also I had some thoughts about some very simple syntax for an event-loop system I thought might be called when. Here are some strawman doodlings, which are a lot like eventio or @notro's proposal. I am not thinking about doing asynchronous stream or network I/O here, but about handling timed events in a clean way, and about handling interrupts or other async events. Not shown below could be some kind of event queue handler, which would be similar to the interrupt handler.
Maybe the functions below need to be async? Not sure; I need to understand things further. I'm more interested in the style than the details right now.
Note that when.interval() subsumes even doing await time.sleep(1.0) or similar: it's built in to the kind of when.
I am pretty excited about this when/eventio/notro-event-loop/trio/MakeCode model, as opposed to asyncio, which is very complex. asyncio started from a goal of handling tons of network I/O, and is also partly a toolkit for writing concurrency packages, as Caleb Hattingh (author of the asyncio book above) points out.
# A cute name
import when
import board, digitalio
d1 = digitalio.DigitalInOut(board.D1)
d1.switch_to_output()
d2 = digitalio.DigitalInOut(board.D2)
d2.switch_to_output()
d3_interrupt = digitalio.Interrupt(board.D3, change=digitalio.Interrupt.RISING)
#################################
# Decorator style of using `when`

# Starts at 0.0 seconds, runs every 1.0 seconds
@when.interval(d1, interval=1.0)
def blink1(pin):
    pin.value = not pin.value

# Starts at 0.5 seconds, runs every 1.0 seconds
@when.interval(d2, interval=1.0, start_at=0.5)
def blink2(pin):
    pin.value = not pin.value

# This is a soft interrupt. The actual interrupt will set a flag or queue an event.
@when.interrupt(d3_interrupt)
def d3_interrupt_handler(interrupt):
    print("interrupted")

# Start an event loop with all the decorated functions above.
when.run()
####################################
# Programmatic style of using `when`

def toggle_d1():
    d1.value = not d1.value

def toggle_d2():
    d2.value = not d2.value

when.interval(toggle_d1, interval=1.0)
when.interval(toggle_d2, interval=1.0, start_at=0.5)

def d3_interrupt_handler():
    print("interrupted")

when.interrupt(d3_interrupt_handler, d3_interrupt)
when.run()
I started a thread in the trio forum: https://trio.discourse.group/t/python-for-microcontrollers-and-structured-concurrency/154
For comparison, here is how you would do it with some kind of async framework (let's call it "suddenly"):
import board
import digitalio
import suddenly

async def blink1(pin):
    pin.switch_to_output()
    while True:
        pin.value = not pin.value
        await suddenly.sleep(1)

async def blink2(pin):
    await suddenly.sleep(0.5)
    await blink1(pin)

async def interrupt(pin):
    while True:
        await pin.change(digitalio.Interrupt.RISING)
        print("interrupted")

suddenly.start(blink1(digitalio.DigitalInOut(board.D1)))
suddenly.start(blink2(digitalio.DigitalInOut(board.D2)))
suddenly.start(interrupt(digitalio.DigitalInOut(board.D3)))
suddenly.run()
or, shorter:
suddenly.run(
    blink1(digitalio.DigitalInOut(board.D1)),
    blink2(digitalio.DigitalInOut(board.D2)),
    interrupt(digitalio.DigitalInOut(board.D3)),
)
@deshipu Right, right, yes, we're talking about the same thing! I left out the asyncs and awaits, or propose they might be hidden by the when mechanism. I'm trying to come up with strawman pseudocode before getting into the details. I freely admit there may be mistakes in my thinking here, but I'm trying not to get sucked into the details of an existing paradigm yet.
trio and curio use async with as a style. I'll doodle with the same style:
import when

# similar defs as in previous comment ...
# ...

# I am deliberately leaving out the `async`s because I want to understand when we
# actually need them and when we don't. How much can we hide in the library?

with when.loop() as loop:
    loop.interval(blink1, 1.0)
    loop.interval(blink2, 1.0, start_at=0.5)
    loop.interrupt(some_function)
    loop.event(event_handler, queue=some_event_queue)
    loop.done_after(60.0)  # stop loop after 60 seconds
    loop.done(some_predicate_function)
# ^^ Runs the loop until done.

# in general:
# loop.something(function_name_or_lambda, args=(), more args if necessary)
It's not possible to "hide" the async keyword in the library, because then you create a function that is invoked immediately when you "call" it. With async, a "call" will simply produce an iterator object, which the library can then exhaust in its main loop, handling any Futures it gets from it along the way.
I think that syntax makes a very big difference for beginners, and that the "callback" style that you propose is very difficult to grasp for people not used to it. With the async style syntax, you basically write each function as if it was the only function in your program (you can test it as the only function), and then add the async to it and await to all parts that block, and it just works.
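That workflow can be illustrated with a small sketch (using CPython's standard asyncio here purely for illustration; nothing in it is CircuitPython API): write each function as if it were the only one in the program, then add async to the def and await to the blocking call.

```python
import asyncio

async def blink(log, name, n):
    # Written as if it were the only function in the program, then made
    # cooperative by adding `async` to the def and `await` to the one
    # blocking call (the sleep).
    for i in range(n):
        log.append((name, i))
        await asyncio.sleep(0.01)

async def main():
    log = []
    # Run two blinkers concurrently; each yields control while it sleeps.
    await asyncio.gather(blink(log, "a", 2), blink(log, "b", 2))
    return log

log = asyncio.run(main())
print(log)
```

Each blinker is testable on its own, and running two of them "at once" requires no change to their bodies.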
@deshipu Thank you for the enlightenment. I'll rework the examples with async. I think we still might be able to avoid explicit awaits in some cases. I like the interval() style, which pushes the timing into when instead of making it part of the function. But maybe that is too much of a toy example.
@deshipu But I am seeing trio use what you call the "callback" style:
https://trio.readthedocs.io/en/latest/tutorial.html#okay-let-s-see-something-cool-already
Notice child1, not child1(), below, etc. There are other examples where the args are separated from the function, e.g. start(fn, arg).
Do you have an example of an await/async library that uses your style?
Trimmed example from link above:
import trio

async def child1():
    ...  # trimmed

async def child2():
    ...  # trimmed

async def parent():
    print("parent: started!")
    async with trio.open_nursery() as nursery:
        nursery.start_soon(child1)
        nursery.start_soon(child2)

trio.run(parent)
Here is a very simple implementation of such an async framework, that can only await on a sleep function:
import time

TASKS = []

class Task:
    def __init__(self, when, coro):
        self.coro = coro
        self.when = when

def sleep(seconds):
    return [seconds]

def start(*awaitables, delay=0):
    now = time.monotonic()
    for awaitable in awaitables:
        TASKS.append(Task(now + delay, awaitable))

def run(*awaitables):
    start(*awaitables)
    while TASKS:
        now = time.monotonic()
        # Iterate over a copy, since finished tasks are removed mid-loop.
        for task in list(TASKS):
            if now >= task.when:
                try:
                    seconds = next(task.coro)
                except StopIteration:
                    TASKS.remove(task)
                else:
                    task.when = now + seconds

# async def test():
def test1():
    for i in range(10):
        print(i)
        # await sleep(1)
        yield from sleep(1)

def test2():
    yield from sleep(0.5)
    yield from test1()

run(test1(), test2())
This presentation explains the trampoline trick that it uses: https://www.youtube.com/watch?v=MCs5OvhV9S4
As for examples, asyncio uses that style: https://docs.python.org/3/library/asyncio.html
Of course in a proper implementation you would use a priority queue for tasks that are delayed, and a select() (with a timeout equal to the time for the next item in the priority queue) for tasks that are blocked on input/output, such as interrupts, reading, or writing.
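A minimal sketch of that design, using heapq as the priority queue (the select() part is reduced here to a plain time.sleep() with the computed timeout, since there is no I/O in the sketch; all names are illustrative):

```python
import heapq
import time

def sleep(seconds):
    yield seconds  # the trampoline trick: yield how long to wait

def run(*tasks):
    now = time.monotonic()
    # Min-heap keyed on wake-up time; the counter breaks ties so the
    # generator objects themselves never need to be comparable.
    heap = [(now, i, t) for i, t in enumerate(tasks)]
    heapq.heapify(heap)
    counter = len(tasks)
    while heap:
        when, _, task = heapq.heappop(heap)
        delay = when - time.monotonic()
        if delay > 0:
            # A real loop would call select() here with this timeout,
            # so I/O events could also wake it up.
            time.sleep(delay)
        try:
            seconds = next(task)
        except StopIteration:
            continue
        heapq.heappush(heap, (time.monotonic() + seconds, counter, task))
        counter += 1

def beep(name, interval, repeats):
    for _ in range(repeats):
        print(name)
        yield from sleep(interval)

run(beep("tick", 0.02, 3), beep("tock", 0.03, 2))
# prints: tick, tock, tick, tock, tick
```

Unlike the linear scan in the earlier sketch, the loop always knows exactly when the next task is due, so it never busy-waits.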
I mocked up a simple framework that lets you sleep and wait for a pin change (uses polling internally): https://github.com/deshipu/meanwhile
@dhalbert I think your "when" proposal has promise. I like the simplicity of it. And compared to other approaches, I think moving the interval outside the def is better because it makes the method more reusable and cleaner. From my perspective, the use of "callbacks" is not that hard for people to understand, whereas the use of await/yield requires explaining cooperative multitasking etc.
I'm interested to hear how your approach would handle terminating an interval. And did you consider giving an interval an optional number of repeats? E.g.
when.interval(toggle_d2, interval=1.0, start_at=0.5, repeats=20)
Since with async the interval is just a loop with a delay, you can easily control the number of repeats:
async def blink1(pin, interval, start_at, repeats):
    await meanwhile.sleep(start_at)
    for repeat in range(repeats):
        pin.value = not pin.value
        await meanwhile.sleep(interval)

meanwhile.run(blink1(pin_d2, 1.0, 0.5, 20))
I added a mock of a hypothetical implementation of an async framework if we had the select call available (just for file operations for now, but for sercoms or interrupts it would be similar): https://github.com/deshipu/meanwhile/blob/master/meanwhile_select.py
@deshipu
Since with async the interval is just a loop with a delay, you can easily control the number of repeats
Sure, but for beginning coders, I think keeping the common aspects of these intervals out of the function definitions makes them simpler to write and understand. E.g. this is boiled down to the essence of the task:
def toggle_d1():
    d1.value = not d1.value
and leaves all the bookkeeping for delay, start time, and repeats to the library. For my students, this simplification would be very helpful in their gaining confidence and trying out new things like cooperative multitasking. Over time, they'll then develop a greater understanding and be able to add more complex features.
I would like to have arguments, and I wonder if @dhalbert's when approach will permit it? E.g.:
def toggle(pin):
    pin.value = not pin.value

when.interval(toggle(pin), interval=1.0, start_at=0.5, repeats=20)
@pvanallen in my limited experience, it's really difficult to teach people this style of programming (where the inside of the loop is in a separate function from the rest of the code), and it results in code that is difficult to follow. You can of course do it with async if you really hate yourself:
def blink_inner(pin):
    pin.value = not pin.value

async def blink1(pin, interval, start_at, repeats, func=blink_inner):
    await meanwhile.sleep(start_at)
    for repeat in range(repeats):
        func(pin)
        await meanwhile.sleep(interval)

meanwhile.run(blink1(pin_d2, 1.0, 0.5, 20, blink_inner))
Come to think of it, it should be trivial to make a decorator function that would add what blink1 does in the above example to any function.
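Such a decorator might look like this sketch, built on the same yield-from-sleep trampoline as the earlier toy framework (every name here is illustrative, not a real API):

```python
import time

def sleep(seconds):
    yield seconds  # the scheduler interprets the yielded value as a delay

def every(interval, start_at=0.0, repeats=1):
    """Wrap a plain function into a generator task for a trampoline
    scheduler: wait start_at, then call the function `repeats` times,
    `interval` seconds apart. The wrapped function stays free of any
    timing bookkeeping."""
    def decorator(func):
        def task(*args, **kwargs):
            yield from sleep(start_at)
            for _ in range(repeats):
                func(*args, **kwargs)
                yield from sleep(interval)
        return task
    return decorator

def run(task):
    # Minimal single-task trampoline: sleep as long as the task asks.
    for seconds in task:
        time.sleep(seconds)

ticks = []

@every(interval=0.01, repeats=3)
def tick():
    ticks.append("tick")

run(tick())
print(ticks)  # → ['tick', 'tick', 'tick']
```

This keeps the "essence of the task" in the function body, as @pvanallen wants, while the decorator carries the delay/start/repeat bookkeeping.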
Sorry to come to this late. As context, I am a long time embedded control developer who has written a couple of low level preemptive taskers/schedulers (easier than you might guess) and I've at least read several versions of UNIX/Linux schedulers. I used to be up on current theory, including studying how an OS scheduler problem (priority inversion) killed the Pathfinder Mars lander, but that was a while ago. I just wanted to add one thing. Usually concurrency is implemented with a small number of core tools (message queue, resource locks, semaphores ... choose one), then more complex abstractions are built on top of them. I suggest breaking the problem into two parts: 1) a basic core abstraction for internal implementation, and 2) user visible interfaces (API) which use the core abstraction to do the heavy lifting. A good choice of core abstraction gives you a simple reliable building block to implement different user interfaces. It doesn't have to be simple to understand. It may be that only developers see it. It just has to be reliable. Simplicity is the job of the user API.
This issue came up in a discussion with @tannewt during PyCon19. I've only skimmed the comments here, but it looks like no one has mentioned the simple scheduler/event loop that is part of the CPython standard library and may be of interest for this:
I have ported the Trio core to MicroPython. No interrupt support yet, because lack of time, but adding that shouldn't be too difficult.
Interrupt handling would most likely look like this:
import trio

async def main():
    p = Pin(…)
    async for evt in p.interrupt():
        print("Hey, pin changed:", evt)

trio.run(main)
I'm coming to this late too but I guess I don't understand the cons to asyncio; for me it's simple, well documented, well supported and used widely for Python. It requires few language additions and the implementation is available as a python module. Asyncio, like the message queue example, is also synchronous - it provides concurrency but not parallelism - so is relatively straightforward to reason through when issues arise.
I use it in many of my MicroPython applications and I'm yet to see a more straightforward way to achieve concurrency. The lack of asyncio on CircuitPython is one of the stumbling blocks to me using the platform.
It seems especially odd to avoid since one of CircuitPython's goals is CPython compatibility...
I do encourage folks to review Peter Hinch's extensive work on asyncio in MicroPython. There's a lot of useful information in there as to how asyncio can be used effectively on an embedded platform.
I'd also suggest supporting interrupts and threads (both are supported on MicroPython and are nicely orthogonal to asyncio) but they're actually less critical to me than asyncio.
Now, it's possible I'm a little biased since I gave a talk on Asyncio on (Micro)Python at last year's PyCon AU. :)
@mattytrentini When you say asyncio, do you mean asyncio, or uasyncio? My impression from talking to Paul Sokolovsky a bit is that they have substantially different APIs, and in fact he strongly disagreed with a lot of choices that asyncio made. Also, the main asyncio devs say that they think Trio generally did things better and their goal is to incrementally convert asyncio into trio... (of course there are a lot of complications here so no-one knows yet how it will all play out).
Regarding the bigger issues in this thread: the simple @when style works really well for describing simple behaviors – if the program you want to write is "once a second, blink a light", then you can't really beat a library that lets you basically translate that sentence directly into Python. The downside of this kind of callback scheduling is that if you want to express more complex behavior, then you're stuck writing state machines by hand, and there's really no mechanism for composing together simple behaviors to make more complex ones. This makes it incredibly difficult to write more complex programs. That's the motivation for async/await and libraries like trio/asyncio – to let you use Python's normal mechanisms like functions, loops, etc. to describe complex, composable asynchronous behaviors. (If anyone's seen my pycon talk, this is why the twisted happy eyeballs code is so hard to understand – it's just a raw state machine with no abstraction mechanism.)
Or a simple, concrete example would be, in Nina's keynote at PyCon this year, she had a demo of making the lights flash through a cycle of several colors. To do that, she had to track the program state by hand, using code like color_pos = (color_pos + 1) % len(colors). When I say "raw state machine", that's the "state" I mean. OTOH if you can use regular Python tools, you can write something like:
async def cycle_lights(colors):
    for color in itertools.cycle(colors):
        await wait_for_button_press()
        cpx.pixels.fill(color)
Now that color_pos = (color_pos + 1) % len(colors) is still happening, but it's hidden away inside higher-level abstractions – for and itertools.cycle.
So I actually don't know what approach is better for CircuitPython. Is it more important to make it as easy as possible to get started? or is it more important to give kids a toolset that can grow with them?
From my perspective: yes, you have that nice little @when.interval which works well enough – except when it doesn't: after you learn how to do a simple blinkenlight, the next idea is to vary the blink frequency. Boom, you're back at the start line and need to figure out how to do it manually anyway.
The point about async-anything is that you can open-code your state machines, which is much easier to understand than random global (or, if you're good, instance) state variables. Plus, you don't need to deal with threads and locks and concurrency bugs related thereto.
The point where trio gets a leg up on asyncio is where you need to stop doing things / do things differently. For instance, I need one thing to happen when the device that controls my door has MQTT connectivity – but it must do something entirely different when it doesn't, otherwise I'm locked out.
With trio this is dead easy, I just tell some simple keepalive code to cancel the MQTT task and I can be reasonably sure that the Trio runtime takes all my sub-tasks and handlers down cleanly. Problem solved. With asyncio? I do not want to go there ever again.
Paul is quite opinionated ;) - but in any case the uasyncio API follows asyncio reasonably closely. Certainly to most users of the API it will feel similar if not the same.
I also agree that you Trio folks have made some excellent decisions and some significant improvements. But I'd rather wait for it to be accepted through a PEP process before having people build on it (and the interface possibly changing as it goes through review). Asyncio is what we have, and it's still reasonably good.
While the @when syntax is neat for those simple cases, I feel it's not such an improvement as to warrant the machinery it's hiding. Further, the asyncio equivalent isn't significantly worse IMO and, as you've alluded to, it provides tools that a user can grow into.
As for cancelling, yes, Trio gets this right. But we have workarounds in asyncio that, while not great, are pragmatic.
Interesting article about asyncio: https://www.roguelynn.com/words/asyncio-we-did-it-wrong/
Right. They even complain that they don't have Trio's "nursery" concept. (Well, they could use the "anyio" wrapper, which does provide the equivalent.)
It'd be interesting to rewrite the same code in Trio and compare the pitfalls.
When I read that article I found it quite pro-asyncio, despite the inflammatory title: "I do believe asyncio is quite user-friendly, but I did underestimate the inherit complexity concurrent programming brings."
In fact, I think it supports the stance that you should not try to build a simpler system - asynchronous code is difficult, as the author says: "whether you use asyncio, Twisted, Tornado, or Golang, Erlang, Haskell, whatever".
In fact, I think it supports the stance that you should not try to build a simpler system - asynchronous code is difficult
Sure, but it doesn't have to be that difficult.
Writing something like the Happy Eyeballs algorithm takes 50 lines in Trio but 500 in "native" asyncio. That should tell us something. As should the fact that the people responsible for asyncio plan to evolve it into a Trio-ish direction as quickly as possible.
Interesting pypi package related to our use of properties and async: https://async-property.readthedocs.io/en/latest/readme.html
In Trio, we teach beginners that await blah() is a special kind of function call, that you use when calling special functions defined with async def. We don't teach about await obj syntax, bare coroutine objects, the awaitable protocol, or any of that. This means we can't use some clever tricks like async_property, but I think the simplification is worth it. (It also makes it pretty easy for linters to detect when you forgot an await, which is otherwise a really easy and confusing mistake to make.)
Problem is, CircuitPython already uses properties for all the things that take time that you probably want to be asynchronous, like reading values from sensors or sending data to displays.
You're going to have to break compatibility to make those async, though, one way or another.
I don't know what's best for your situation; I just wanted to point out some of the options and tradeoffs.
asyncio is not hard to use, is standard, and would help CircuitPython appeal even more to a particular set of users. It is hard to wrap your head around in an hour, and is not suitable for a person's "hello world", but anyone can learn the basic asyncio patterns; kids and adults alike. I would switch to MicroPython for uasyncio if I weren't locked into the CircuitPython device libraries.
Where I'm coming from: I have written asynchronous software in many languages and in coroutine, thread, coroutine+thread and ad-hoc state paradigms. Asynchronous code is difficult, but not providing a higher-order abstraction of some sort to achieve it just makes life more difficult for those who require it. asyncio has been the most pleasant experience I have had to date writing asynchronous code. It's powerful and its primitives are straightforward. That said, I don't particularly care how, but I'm eagerly watching for CircuitPython to offer a path to support syntax like sensor_value = await sensor.read() so I can delete hundreds of lines of code. Please do consider prioritizing this issue.
Well, from my PoV, trio's structured-concurrency abstractions (bounded scopes for tasks, deterministic cancellation and error propagation, and its avoidance of callbacks) avoid many pitfalls you tend to run into when writing async code "in the wild" and, frankly, "force" me to write better and more concise programs.
A few random thoughts:
Some programmers will be coming from MakeCode. Something similar or with easy-to-explain differences would be cool.
Some beginning programmers (usually those with entrepreneurial spirit) use LiveCode, a HyperCard-like language. LiveCode uses event/message handlers. Events and messages are handled only during waits and the like.
New programmers and programmers new to a way to do this often get confused about when events and callbacks are handled. Or about the changing of shared variables.
Perhaps the implementation should move into asynchronous processing in a uniform way, setting the groundwork for such in CircuitPython.
thanks for all the good thoughts. we def want to add some support for this. what would be extremely helpful is if folks could post up usage cases where they need concurrency/async/sleep/interrupts. pseudocode or descriptions - there's a lot of cases and we want to make sure we cover them :)
Concepts can build on those learned in beginning CP examples.
Events/callbacks. An example is digitalio, used to create an abstraction around a pin. One can change whether it is an input or output, and so on. An input can have a pull-up. Perhaps an input could also have functions to call on change events. The function is passed the object. The added functionality can be a model for making other classes that have events. It is about what to do, and when. Such objects might be called event objects. They add the concept of an event.
The concept of change can be generalized to some sort of happening. This would include receiving a line from a UART.
A timer event class would be very handy and expected. Building on the above, a timer event is based on a happening. The details can be changed even in its event function. A simple timer class can be used to build other timers.
When are event functions allowed to be called?
A concept learned is that things happen when you tell the board to do them. So a function handle_events() might be handy. It could have optional parameters for how many events to handle, or whether to sleep for a while. Recursion can be limited; a simple concept might be that an event function cannot be called if it is busy. An alternative is to prevent handle_events() from actually handling any events when an event is active; that is, only one event can be executing. One possibility is to use time.sleep(), with review as to whether that breaks some things. This means that all code sequences between those calls are essentially critical sections; that is, synchronization is a cinch. You don't worry about it.
Alternatively, they can be called at any time. This requires added concepts and functions.
Events act like a simple function call, a concept learned. They are not executed in parallel, with lines interleaved or anything like that, a common point of confusion.
Events do not occur if the object no longer lives, however, if its event function is running, destruction is after it completes. No zombie events occur.
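The handle_events() idea above might be sketched like this (post_event, the event list, and the one-event-at-a-time guard are all inventions for illustration, not a real CircuitPython API):

```python
from collections import deque

_events = deque()   # (handler, payload) pairs queued by event sources
_handling = False   # guard: only one event function executes at a time

def post_event(handler, payload):
    # An event source (pin change, UART line, timer) queues work here.
    _events.append((handler, payload))

def handle_events(limit=None):
    """Run queued event functions synchronously, at a time the program
    chooses. `limit` caps how many are handled in this call. Events never
    run while another event is running, so ordinary code between calls is
    effectively a critical section. Returns how many events were handled."""
    global _handling
    if _handling:
        return 0
    _handling = True
    handled = 0
    try:
        while _events and (limit is None or handled < limit):
            handler, payload = _events.popleft()
            handler(payload)
            handled += 1
    finally:
        _handling = False
    return handled

log = []
post_event(log.append, "button down")
post_event(log.append, "button up")
handle_events(limit=1)
print(log)  # → ['button down']
handle_events()
print(log)  # → ['button down', 'button up']
```

Because events only run inside handle_events(), "things happen when you tell the board to do them" still holds, which matches the concept a beginner has already learned.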
Little baby programs. This is an alternative to events. (One might ponder allowing both.)
The concept of a program can be built-upon for running little threadies or sub-programs. A couple concepts borrowed are load and control-C ("keyboard interrupt"). The words "run" and "stop" can be readily applied. Good words might be found in beginning examples.
Perhaps any function, or an object with a run method, can be run. For the function, it is much as if the program were the body of the function and everything in scope were a built-in capability.
Times to allow switching can be explicitly provided or implied in time.sleep(). Taking turns or passing the speaking baton might be concepts already understood. An alternative is to allow switching at returns and loop ends. The running function stops when it returns.
This requires some synchronization concepts and functions. It might be a memory hog.
Use case...
LED control. The state of the device is reflected in one or more LEDs. (I recently used a light-pipe to an on-board LED in a quick build.) Besides color, the blink is controlled. That might be off, fast blink, slow blink, or warble. The blink adds a distinction that is color-blind friendly; red might be only fast blink. The rate can be off a little and a little jitter is OK.
These are some strawman thoughts about how to provide handling of asynchronous events in a simple way in CircuitPython. This was also discussed at some length in our weekly audio chat on Nov 12, 2018, starting at 1:05:36: https://youtu.be/FPqeLzMAFvA?t=3936.
Every time I look at the existing solutions I despair:
I don't think any of these are simple enough to expose to our target customers.
But I think there's a higher-level mechanism that would suit our needs and could be easily comprehensible to most users, and that's message queues.
A message queue is just a sequence of objects, usually first-in-first-out. (There could be fancier variations, like priority queues.)
When an asynchronous event happens, the event handler (written in C) adds a message to a message queue. The Python main program, which could be an event loop, processes these as it has time. It can check one or more queues for new messages, and pop messages off to process them. NO Python code ever runs asynchronously.
Examples:
When you want to process asynchronous events from some builtin object, you attach it to a message queue. That's all you have to do.
There are even already some Queue classes in regular Python that could serve as models: https://docs.python.org/3/library/queue.html
Some example strawman code is below. The method names are descriptive -- we'd have to do more thinking about the API and its names.
Or, for network packets:
For UART input:
Unpleasant details about queues and storage allocation:
It would be great if queues could just be potentially unbounded queues of arbitrary objects. But right now the MicroPython heap allocator is not re-entrant, so an interrupt handler or packet receiver, or some other async thing, can't allocate the object it wants to push on the queue. (That's why MicroPython has those restrictions on interrupt handlers.) The way around that is to pre-allocate the queue storage, which also makes it bounded. Making it bounded also prevents queue overflow: if too many events happen before they're processed, events just get dropped (say either oldest or newest). So the queue creation would really be something like:
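For illustration only, here is a pure-Python sketch of such a bounded queue with preallocated storage and a drop-newest overflow policy (a real implementation would preallocate in C; every name here is invented, not a proposed API):

```python
class EventQueue:
    """Bounded FIFO with storage preallocated at creation time. When full,
    the newest event is dropped (drop-oldest would be the other policy
    mentioned above)."""
    def __init__(self, size):
        self._buf = [None] * size   # allocated once, up front, so the
        self._size = size           # "ISR" side never allocates on push
        self._head = 0              # index of next event to pop
        self._count = 0

    def push(self, event):
        if self._count == self._size:
            return False            # queue full: drop the new event
        tail = (self._head + self._count) % self._size
        self._buf[tail] = event
        self._count += 1
        return True

    def pop(self):
        if self._count == 0:
            return None
        event = self._buf[self._head]
        self._buf[self._head] = None
        self._head = (self._head + 1) % self._size
        self._count -= 1
        return event

q = EventQueue(2)
print(q.push("rise"), q.push("fall"), q.push("rise"))  # → True True False
print(q.pop(), q.pop(), q.pop())  # → rise fall None
```

The main loop would call pop() (or a blocking variant) as it has time; the return value of push() is how the C side would know an event was dropped.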
The whole idea here is that event processing takes place synchronously, in regular Python code, probably in some kind of event loop. But the queues take care of a lot of the event-loop bookkeeping.
If and when we have some kind of multiprocessing (threads or whatever), then we can have multiple event loops.