Closed dhalbert closed 4 years ago
Another use case: MQTT. I control an LED strip via MQTT messages and the latency in handling the MQTT loop severely lowers the rate at which I can update the LEDs.
Use case...
Keypad handler. Keypad concepts include short tap, double tap, long press and shift. Sequences of taps and presses enabled by shifts (including the trivial single) trigger events that might be dynamically changed. Timing of button presses must be human. Debounce is handled. Events are allowed to change attributes.
Thanks for chiming in ladyada! Here is what I'm pining for every time I open my current project:
Here's a program that should swap 2 LEDs back and forth lit/unlit:
pin = digitalio.DigitalInOut(D1)
pin.direction = digitalio.Direction.INPUT  # configure the pin before wrapping; Debouncer has no .direction
button = adafruit_debouncer.Debouncer(pin)
led1 = digitalio.DigitalInOut(D2)
led2 = digitalio.DigitalInOut(D3)
led1.direction = digitalio.Direction.OUTPUT
led2.direction = digitalio.Direction.OUTPUT
led1.value = True
led2.value = False
# This is what we're going to do 2 different ways below:
def uncalled_equivalent_loop():
    while True:
        button.update()  # debouncer
        if button.fell:
            led1.value = not led1.value
            led2.value = not led2.value
# First way, with a program loop per coroutine.
def explicit_manual_coroutines():
    # inner function just to keep the global namespace clean
    # toggle_pin essentially becomes a top-level program loop.
    async def toggle_pin(pin: digitalio.DigitalInOut, button: digitalio.DigitalInOut) -> None:
        while True:
            value = await button.on_change()  # like rise/fall on Debounce
            if value:  # swap pin on fall. Could have just used on_fall() instead
                pin.value = not pin.value
    # Register the coroutines (program loop _facets_) with whatever coroutine impl
    run_this_coroutine(toggle_pin(led1, button))
    run_this_coroutine(toggle_pin(led2, button))
def quality_of_life_style():
    # on_* functions _should_ absolutely expect async and non-async functions!
    button.on_fall(lambda: led1.set_value(not led1.value))
    button.on_fall(lambda: led2.set_value(not led2.value))
if __name__ == '__main__':
    # start up with button pressed to use explicit manual coroutines
    if button.value:
        explicit_manual_coroutines()
    else:
        quality_of_life_style()
    # Yields control to the coroutine runner, leaving it to sleep and wake async contexts as they are eligible and processor time is available.
    # You do this now instead of `while True:` when you are building an async project.
    async_coroutine_feature.run_forever()
I've not tested it but the intent should be clear. Both of the user async styles above should ideally be supported, though the 2nd one is much more convenient for typical programs (obvs you don't have to use a lambda, I'm just showing ideal compactness). This toy does not illustrate the high value of async/await coroutines but it captures the 2 ways I want to use them.
Today I hack around the lack of coroutines via ad-hoc state and loop()s on all of my objects that pulse to the beat of the while True. This is pretty wasteful and costs hundreds of microseconds per loop, since almost every time around all of my objects' loop()s do nothing; but baking that knowledge into my root run loop lies the way of madness =). This is what select() was made for!
One other thing that would help flesh out the possible implementations is a yield()-like function for when you actually want a program loop alongside coroutines. You can make your program loop a coroutine like:
async def run():
    while await yield_():  # (`yield` is a reserved word in Python, so a real API would need another name)
        # you need to yield or await to make room for other coroutines to run.
        # If you don't await in this loop, you want to yield_() each time around to let pending events resolve (or let other root loops get a turn on the processor, or let your animation widget move forward a frame if it's currently registered to do its thing because you pushed a button or [...])
        do_your_program_loop_things()
        [...]
if __name__ == '__main__':
    run_this_coroutine(run())
    # register other root coroutines to run
    async_coroutine_feature.run_forever()
and you can have several of them to keep your features more cohesive (when it makes sense).
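In stdlib asyncio terms, such a checkpoint is usually spelled `await asyncio.sleep(0)`. A minimal sketch (plain CPython; `busy_loop` and `results` are illustrative names, not part of any proposal) of two root program loops taking turns at that checkpoint:

```python
import asyncio

results = []

async def busy_loop(name, iterations):
    # A cooperative "program loop": each pass appends its name, then hits
    # a checkpoint so the other ready coroutine gets a turn (the role of
    # the proposed yield()-like function).
    for _ in range(iterations):
        results.append(name)
        await asyncio.sleep(0)  # cooperative checkpoint

async def main():
    # Two "root program loops" running side by side.
    await asyncio.gather(busy_loop("a", 3), busy_loop("b", 3))

asyncio.run(main())
print(results)  # the loops take alternating turns: a, b, a, b, a, b
```

Because each loop yields on every iteration, the scheduler interleaves them strictly; drop the checkpoint and one loop would starve the other.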
I'd like to comment, and hopefully improve on, this example:
async def toggle_pin(pin: digitalio.DigitalInOut, button: digitalio.DigitalInOut) -> None:
    while True:
        value = await button.on_change()  # like rise/fall on a Debounce
        if value:  # swap pin on fall. Could have just used on_fall() instead
            pin.value = not pin.value

run_this_coroutine(toggle_pin(led1, button))
async_coroutine_feature.run_forever()
I don't understand what you mean by "swap pin on fail".
Well … asyncio is going to deprecate doing this sort of thing and Trio never supported it. Better:
async def toggle_pin(pin: digitalio.DigitalInOut, button: digitalio.DigitalInOut) -> None:
    while True:
        value = await button.on_change()
        if value:
            pin.value = not pin.value

async def main():
    await run_coroutine(toggle_pin(led1, button))
    await run_coroutine(toggle_pin(led2, button2))
    ## what should happen if the main program falls off its end?

async_runner.run(main())
Presumably run_coroutine returns an object we can then use to cancel/kill the coroutine.
Another improvement I'd make is that toggle_pin(led1, button) looks like a function call but isn't, as it does not actually execute the coroutine – you need to pass it to a coroutine runner for anything to happen. It's more consistent to explicitly pass the procedure and its arguments, if any. Further advantage: If the procedure is wrapped, the wrapper will run when the runner actually enters the task, not when you call run_coroutine. Thus,
async def main():
    await run_coroutine(toggle_pin, led1, button)
    await run_coroutine(toggle_pin, led2, button2)
    ## what should happen if the main program falls off its end?

async_runner.run(main)
Next, there's a neat "async for …" idiom we can use for polling the pin:
async def toggle_pin(pin: digitalio.DigitalInOut, button: digitalio.DigitalInOut) -> None:
    async for value in button.on_change():
        if value:
            pin.value = not pin.value
though I wonder whether that loop shouldn't return a sequence of events instead, with common attributes like timestamps or the event source, so that people can build more complex systems without adding all that by hand in multiple places.
Further, fixing the "what should happen if the main program falls off its end?" problem requires some sort of task group. Trio calls them "nursery". The idea is that a task group ends when all the tasks that were started in it end (plus, if any task causes an exception, all the others are cancelled before the exception can propagate, thus you don't need to do your own task housekeeping):
async def main():
    async with TaskGroup() as tg:
        tg.start(toggle_pin, led1, button)
        tg.start(toggle_pin, led2, button2)
    # The 'async with' waits for the tasks to end
I do need to get my (rudimentary) port of Trio to Micro/CircuitPython updated – while it's not too old to serve as a basic starting point, I haven't yet found the time to add actual interrupt code etc.
My workaround in CircuitPython is to create state machines. Objects are checked often to see if their states need to change or they need to call back.
My classes have functions called spin. (In my mind is an image of the magician adding spinning plates to his act while keeping the other plates spinning.) Each time spin is called it checks I/O and time.monotonic_ns() and changes the state of the object accordingly and calls callbacks as needed. I use time.monotonic_ns() a lot. (I feel like Greg in Over the Garden Wall throwing candy everywhere, only it's monotonic_ns.) I try to make spin functions get in and out fast if they have nothing to do.
At the main level, I have a function called spin that calls each of the spin methods (often hard coded in). My wait function (like time.sleep()) calls spin and is used only at that level.
At the top is also my primary loop and either it or sub loops call spin a lot. Sometimes that is all my code does, build everything and then go into a loop around spin, though sometimes I put in a sleep or a gc.collect(). If I have several major states that are quite different, the code in the big loop would move among states, perhaps using wait in transitions, and then have a spin loop in the state.
My callbacks are not allowed to call spin. I can probably come up with a better rule, but that is a simple one.
If I need to, I sometimes tweak spin to balance attention to spin in subsystems.
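The spin pattern above can be sketched in a few lines; `Blinker` and the injected clock are illustrative, not from any real library. A fake clock stands in for time.monotonic_ns() so the example is deterministic:

```python
class Blinker:
    """A tiny 'spin'-style state machine: each spin() call checks the
    clock and advances state only if the period has elapsed."""
    def __init__(self, period_ns, clock):
        self.clock = clock            # injectable; time.monotonic_ns on hardware
        self.period_ns = period_ns
        self.value = False
        self._next_ns = clock() + period_ns

    def spin(self):
        # Get in and out fast when there is nothing to do.
        now = self.clock()
        if now < self._next_ns:
            return
        self.value = not self.value
        self._next_ns = now + self.period_ns

# Drive it with a fake clock so the example runs without hardware.
t = 0
def clock():
    return t

blinker = Blinker(period_ns=10, clock=clock)
states = []
for t in range(0, 50, 5):   # the main loop calling spin() every 5 "ns"
    blinker.spin()
    states.append(blinker.value)
print(states)  # toggles once per 10 "ns" period
```

The main loop polls far more often than the state changes, which is exactly the wasted-work cost the spin style trades for simplicity.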
Use case...
Motion control. Like the LED use case, but with motion.
(Back in October I made a hat for my costume. I was in a hurry and took some shortcuts in motion control software and in a few places made some sudden changes, fast but not too bad. As I was going out the door something broke and I quickly changed a servo motor to something else available, bigger but it could fit. We got to the event and I put my hat on, a gesture that enables it. Fortunately, nobody got hurt.)
I'm using a hacked kindle to make a picture frame that has a phone number, and it will display whatever you text to it... but only if it rhymes. A way to bring more doggerel into my life. An embedded-ish context that's maybe more like Raspberry Pi than CircuitPython, but it seems relevant to me?
Since folks have compiled vanilla Python 3.7 for it, I took it as a chance to use Trio for the first time, and I LOVED it. I'm a full time programmer, so I was getting all ready to use my usual stuff, state flags and callbacks and event objects and stuff, to handle polling for new texts and reading keypresses and having a few interactive modes. But despite handling several things at the same time, the code turned out mostly as while loops, where you just ask for input and then do something. It felt a lot more like writing my first interactive console number-guessing games and stuff, back in the day. It's just that there were two or three of those while loops happening at the same time sometimes. It was great! No flags or callbacks at all, basically the only "state variable" is the page number.
So I think Trio-style concurrency will be a much smaller leap for people who are starting out: you can keep the shape of your while loop mostly the same, you just have to adjust some of the lines. And then something else can happen at the same time. Super cool.
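The "two or three while loops at the same time" shape can be sketched with stdlib asyncio (Trio's nursery works analogously); the message and key lists below stand in for real input sources:

```python
import asyncio

log = []

async def poll_for_texts(inbox):
    # One plain while loop: ask for input, then do something with it.
    while inbox:
        msg = inbox.pop(0)
        log.append(f"text: {msg}")
        await asyncio.sleep(0)  # let the other loop run

async def read_keypresses(keys):
    # A second while loop, happening "at the same time".
    while keys:
        key = keys.pop(0)
        log.append(f"key: {key}")
        await asyncio.sleep(0)

async def main():
    # Both loops run concurrently; no flags or callbacks needed.
    await asyncio.gather(
        poll_for_texts(["roses are red", "violets are blue"]),
        read_keypresses(["up", "down"]),
    )

asyncio.run(main())
print(log)
```

Each loop keeps its familiar sequential shape; the only change from single-loop code is the await at the bottom of each pass.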
I'd like to comment, and hopefully improve on, this example:
Hi @smurfix! I super disagree with a lot of your take on my proposal - but please don't mistake my below frankness for anything other than direct, explicit communication for clear understanding ❤️ tl;dr: I believe it is almost uniformly a set of regressions in flexibility, conformity and cognitive burden.
I don't understand what you mean by "swap pin on fail".
You misread: fall, not fail. In Debounce it's called fell though, so I also misremembered. I typed this out without looking at the API to remember its name 😅
async def toggle_pin(pin: digitalio.DigitalInOut, button: digitalio.DigitalInOut) -> None:
    while True:
        value = await button.on_change()
        if value:
            pin.value = not pin.value

async def main():
    await run_coroutine(toggle_pin(led1, button))
    await run_coroutine(toggle_pin(led2, button2))
    ## what should happen if the main program falls off its end?

async_runner.run(main())
I don't want to prescribe which event handler gets invoked first. You've changed the semantics of the application in your main. Also, toggle_pin is already an async method: an author should not need additional boilerplate to get its return value beyond await. run_coroutine should not exist as in this example unless you maybe intend to have multiple event loops? I think that would be overkill though...
This is a very limiting form that has syntactic overlap with my proposal but I would hope that this is not the way it is implemented at a language level as it adds burden to the writer without enabling meaningful alternative patterns.
Presumably run_coroutine returns an object we can then use to cancel/kill the coroutine.
run_coroutine should instead be called async and return an object (we'll call it an awaitable) that you can await to get its value. We can also simply affix it to our method definitions as you've continued to do and just drop run_coroutine altogether.
Seems fine to instead let the code in the coroutine decide if it's done. Cancellation is not hard to achieve without forcing syntactic bloat at every call site; I would claim that external cancellation is a comparatively rare need. Ctrl+c is not conceptually problematic either: If you're running python, it's just however it currently does it but it also sets a flag to notify the underlying runner. If it's awaiting, you probably would want the implementation's framework to also be awaiting the interrupt signal and do the needful just the same...
If you're hung up on external cancellation, it seems fine to propose that an awaitable should be "cancellable". I do disagree but not that strongly.
Another improvement I'd make is that toggle_pin(led1, button) looks like a function call but isn't, as it does not actually execute the coroutine
This is not fully correct and may be a reason for some of the misunderstanding: It absolutely is a function call and defines exactly how to call the named async function – in Python's implementation this invocation's return value is called a coroutine. An async function returns something that is awaitable (like a coroutine). The language way to get the value out of an awaitable is via the await keyword, which signals a resume pointcut for the scheduler. If you're bothered by the form of the invocation, remember that a keyword like await sticking out after an = looks like invalid syntax until you are exposed to it too.
– you need to pass it to a coroutine runner for anything to happen. It's more consistent to explicitly pass the procedure and its arguments, if any. Further advantage: If the procedure is wrapped, the wrapper will run when the runner actually enters the task, not when you call run_coroutine. Thus,
async def main():
    await run_coroutine(toggle_pin, led1, button)
    await run_coroutine(toggle_pin, led2, button2)
    ## what should happen if the main program falls off its end?

async_runner.run(main)
Your run_coroutine is a little confusing to me - why isn't it just a decorator you'd apply to toggle_pin so you can literally invoke the function you're trying to invoke, directly by name? You want to wrap the coroutine for some reason - usually in Python you'd use a decorator for that.
And if it's a decorator that requires a special kind of calling convention, why not instead go the standard Python route and name that decorator async?
And if it's named async and has special calling convention requirements, why not promote it to the function definition itself?
Wrapping asynchronous code under needless layers of indirection is a bad thing. In your example you still have the burdens of async and await, you've just added an extra hoop a person has to jump through. And it still misses the original point in the example of decoupled event handlers registered to the button's fall (sorry, should have been fell) event.
Next, there's a neat "async for …" idiom we can use for polling the pin:
async def toggle_pin(pin: digitalio.DigitalInOut, button: digitalio.DigitalInOut) -> None:
    async for value in button.on_change():
        if value:
            pin.value = not pin.value
This pattern always feels perverse to me, bleeding implementation detail when an implementation of async/await leans on generators. More seriously, this reverses the direction of event flow at the event registration site (the async for line) and kind of requires the event listener to store an event queue. Gonna go gang-of-four for a sec here; my proposed model allows the Subject (on_change) to control how it notifies the Observers (toggle_pin). This lets you write simple subjects that yield one event as it happens or queue them up for delivery irrespective of Observers. Maybe I'm a closet functional nerd or something but this is just backwards imo. Also, it breaks the brain to try and reason about "what does StopIteration mean to button.on_change!?"
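The Subject-controls-notification model argued for here can be sketched as plain gang-of-four Observer code (`ButtonSubject` and the handler names are illustrative, not a proposed API):

```python
class ButtonSubject:
    """GoF-style Subject: the button decides how its observers are
    notified - here, pushed immediately as each event happens, so no
    per-observer replay queue is needed."""
    def __init__(self):
        self._observers = []

    def on_press(self, observer):
        # Register a decoupled event handler.
        self._observers.append(observer)

    def press(self):
        # The Subject updates every Observer as the event occurs.
        for notify in self._observers:
            notify()

presses = []
button = ButtonSubject()
button.on_press(lambda: presses.append("led1 toggled"))
button.on_press(lambda: presses.append("led2 toggled"))
button.press()
print(presses)  # both handlers ran for the one event
```

Contrast with async-for consumption: there each observer pulls from the subject, so the subject must buffer events per consumer to guarantee none are missed.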
Further, fixing the "what should happen if the main program falls off its end?" problem requires some sort of task group.
There is no problem and no necessary solution beyond the elegant definition of today: if the program falls off the end it is done. Changing that semantic is not something I'd support. If async_coroutine_feature.run_forever() returns, it has run forever or run everything that it will ever run.
If you have a set of tasks that will complete, when they complete they'll drop out of the coroutine runner's list. When that's empty "forever" is done as nothing will ever add something to it. (Remember how I said I don't want to write hardware events? This means nothing will ever add python code to be run when the event loop has no more user coroutines in some state)
async def main():
    async with TaskGroup() as tg:
        tg.start(toggle_pin, led1, button)
        tg.start(toggle_pin, led2, button2)
    # The 'async with' waits for the tasks to end
This is again fine modulo the start function. It should take an awaitable instead and therefore probably be called add or associate, but naming things is hard. I'm not a huge fan of using the async __aexit__ to drive coroutine execution, but if my proposal is held, you can add TaskGroup like this as a library and people can use it if they want to.
Most of what you propose (maybe all of it) is possible to implement on top of what I've proposed as a library, and that's where I'd propose to keep it. 👍
@WarriorOfWire Yes it's a reduction in flexibility. The point is that in the 60s, removing "goto" in favor of "for" and "while" loops was a reduction in flexibility too, and people complained for much the same reason you complain now. Surprise, Python doesn't have a "goto" and nobody misses it.
The concurrency equivalent of "goto" is to have a task/coroutine that you can fire off and forget. I want Python to not have that either, and for much the same reason. Sure it's more flexible, but the cost of that flexibility is that you need to remember to kill it off when you no longer need it, including when your code throws an exception. People are habitually bad at doing that consistently and correctly. The point of Trio-style structured concurrency is that the runtime does that for you, which only works if you tell it what should happen, which requires structured code. Yes, there is no way around it, but why would you need one in the first place?
(Actually, I lied, there is – a task group is an object you can pass around …)
I also don't quite understand your objection to "async for". The literal translation of async for value in source is
while True:
    value = await source.__anext__()
thus I can't see any reversal of control flow or anything.
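The translation holds for any plain object with `__anext__`; no generator machinery is involved. A runnable sketch showing the two spellings produce the same result (`Countdown` is an illustrative stand-in for an event source):

```python
import asyncio

class Countdown:
    """An async iterator implemented as a plain object with __anext__,
    not an async generator."""
    def __init__(self, n):
        self.n = n
    def __aiter__(self):
        return self
    async def __anext__(self):
        if self.n == 0:
            raise StopAsyncIteration
        self.n -= 1
        return self.n

async def with_async_for(n):
    out = []
    async for value in Countdown(n):
        out.append(value)
    return out

async def with_while_loop(n):
    # The "literal translation", spelled out by hand.
    out = []
    source = Countdown(n).__aiter__()
    while True:
        try:
            value = await source.__anext__()
        except StopAsyncIteration:
            break
        out.append(value)
    return out

print(asyncio.run(with_async_for(3)))   # [2, 1, 0]
print(asyncio.run(with_while_loop(3)))  # [2, 1, 0]
```

The only extra machinery in the hand-written form is catching StopAsyncIteration, which `async for` does implicitly.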
Another improvement I'd make is that toggle_pin(led1, button) looks like a function call but isn't, as it does not actually execute the coroutine
This is not fully correct and may be a reason for some of the misunderstanding: It absolutely is a function call
Well, of course it's a function call, but it doesn't actually execute any of the named code (with the exception of decorators). It just creates a coroutine object that you then need to either pass to your loop runtime (run it in parallel) or await (run it serially and wait for it). Now what should happen if you don't do either of these? Any decorators run, along with their side effects, but the main code doesn't, which is seldom what I'd want.
NB: what should happen if the coro object is passed to your run_coroutine but the code inside then raises an exception? How would you handle that situation if the coroutine in question is supposed to run indefinitely and thus won't be awaited for?
@WarriorOfWire I like the elegance of your quality_of_life_style(). I would like to discuss that and, to that end, simplified your program and added detail with some assumptions.
import warriorofwire_async
import digitalio
import warriorofwire_debouncer
button = warriorofwire_debouncer.Debouncer(digitalio.DigitalInOut(D1))
button.direction = digitalio.Direction.INPUT # ?
led1 = digitalio.DigitalInOut(D2)
led2 = digitalio.DigitalInOut(D3)
led1.direction = digitalio.Direction.OUTPUT
led2.direction = digitalio.Direction.OUTPUT
led1.value = True
led2.value = False
# This is what we're going to do below:
def uncalled_equivalent_loop():
    while True:
        button.update()  # debouncer
        if button.fell:
            led1.value = not led1.value
            led2.value = not led2.value
# on_* functions expect functions
button.on_fall(lambda: led1.set_value(not led1.value))
button.on_fall(lambda: led2.set_value(not led2.value))
# Do this instead of `while True:` when you are building a warriorofwire_async project.
warriorofwire_async.run_forever()
I apologize for any undue distortion. I realize that this is a simple example and yet, given that, I note that this looks like callbacks. Either of us could readily create the module I renamed warriorofwire_async to handle callbacks. (I considered calling it darz_async to minimize any projection of something with your name on it that might offend, read it that way if need be.)
Though much of the rest assumes callbacks, this might well apply to coroutines in general. A callback version of the warriorofwire_async library can be a way to get ready for built-in callbacks based on primitive events.
This is a possible direction assuming built-in callbacks.
**on_** All classes with a value attribute should move toward having an on_value_change() method even if it is timer polled underneath. Other on_* methods are allowed for any class. If there are no functions for an on_* then overhead is minimized. Names should include present-tense verbs of the right aspect (eg. "receive" but not "smell"). Though connecting multiple functions to an on_* can be handy, it might be hard for me to use with my style of modularity; I'm fine without it. One advantage of only a single function is that it is easy to clear or reconfigure on the fly. In my discussion, I'll assume only one is allowed, though most of it should apply to multiple callbacks. One can create a wrapper to handle multiple, if it is needed.
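The single-callback slot described above might look like this in plain Python (`Observable` and its method names are illustrative, not a proposed API):

```python
class Observable:
    """A value holder with a single on_value_change callback slot.
    One slot keeps reconfiguration trivial: register to replace the
    previous callback, pass None to clear it."""
    def __init__(self, value=None):
        self._value = value
        self._on_value_change = None

    def on_value_change(self, callback):
        self._on_value_change = callback  # replaces any previous callback

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        changed = new != self._value
        self._value = new
        # Overhead is minimal when no callback is registered.
        if changed and self._on_value_change is not None:
            self._on_value_change(new)

seen = []
pin = Observable(False)
pin.on_value_change(seen.append)
pin.value = True          # change -> callback fires
pin.value = True          # no change -> no callback
pin.on_value_change(None) # cleared on the fly
pin.value = False         # change, but nothing is notified
print(seen)  # [True]
```

A wrapper that fans one slot out to several functions can be layered on top when multiple listeners are genuinely needed.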
built-in An example of a class that is built-in and has an on_value_change() method would be a revised DigitalInOut. Calling on_value_change() creates an internal mechanism that will cause the provided function to be called upon change. This would apply even when the direction is out. The function is called without any dynamic context. One might set it up like this:
my_pin.on_value_change(enable_doomsday_device)
timer An important new built-in class is the timer. A very simple one might have a ns threshold attribute, with the callback occurring when the equivalent of time.monotonic_ns() passes that threshold; that is, when time.monotonic_ns() > threshold becomes True - a change. The threshold can be atomically changed at any time. Fancy timers can be built from this.
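A deterministic sketch of such a threshold timer, with the clock injected so the example runs without hardware (all names here are illustrative):

```python
class ThresholdTimer:
    """Primitive timer sketch: fires its callback once when the clock
    passes the threshold. Re-arming resets it."""
    def __init__(self, clock):
        self.clock = clock            # time.monotonic_ns on real hardware
        self.threshold = None
        self._callback = None
        self._fired = False

    def on_expire(self, callback):    # hypothetical registration name
        self._callback = callback

    def arm(self, threshold_ns):
        self.threshold = threshold_ns
        self._fired = False

    def spin(self):
        # Polled underneath, as the proposal allows.
        if self._fired or self.threshold is None:
            return
        if self.clock() > self.threshold:
            self._fired = True
            if self._callback:
                self._callback()

t = 0
timer = ThresholdTimer(clock=lambda: t)
fired = []
timer.on_expire(lambda: fired.append(t))
timer.arm(100)
for t in (50, 90, 101, 150):  # simulated time passing
    timer.spin()
print(fired)  # fires exactly once, when the clock first exceeds 100
```

A debouncer, repeating timer, or timeout can all be composed from this single-shot primitive by re-arming in the callback.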
In Python The revised Debounce would be written using timers and (when available) the on_change() method of its input. It should have a value attribute and thus an on_change() method, and can also have on_fall() and on_rise() methods. It is used just as one might use the digital io directly. All callbacks originate from the primitive callbacks.
Pluses and Minuses This method allows effective encapsulation. It does not require multiple stacks. Surrogates for the built-in callbacks can be created to try this out. It does require state-machine thinking in the Python code.
That doesn't always fit well. If, while doing all the other stuff, the code must make some Internet db queries to find the right admins and techs and then send an individual text or email to each one, it is easiest to describe that procedurally.
@WarriorOfWire Yes it's a reduction in flexibility. The point is that in the 60s, removing "goto" in favor of "for" and "while" loops was a reduction in flexibility too, and people complained for much the same reason you complain now. Surprise, Python doesn't have a "goto" and nobody misses it.
This false equivalence argument does not convince me. The challenges of concurrent software do not approximate the problem form of goto, and a global task group instance would be essentially equivalent to what is CPython's global (well, thread-local) "task group" - i.e., trio changes nothing at the language level and adds burden to developers, which may be unwelcome.
I also don't quite understand your objection to "async for". The literal translation of async for value in source is
while True:
    value = await source.__anext__()
thus I can't see any reversal of control flow or anything.
I want on_<event> methods exposed by digitalio and friends. That directly implies the Observer design pattern. Remember, Subjects update Observers. By making the observer call next() on the subject, each observer of the button mutates the state of the button subject (or, again, requires the button subject to implement its notification channel as a replayable queue per observer). It's very incorrect to do it this way both logically and practically:
await button.on_press() obviously means "next time the button is pressed, the await is done."
for in iterates each thing. If you have slow consumers and button does not implement a notification queue per observer, you will not execute your loop for each press and you will have limited recourse. Instead, if an Observer wants to catch each event it must quickly memoize every event and handle it when it's time to handle it. E.g., button.on_press(lambda: self.add_one_press()) in the case of a listener that wants to move a cursor up a menu and not irritate the user by habitually missing distinct clicks due to layout latency. (displayio is another lib that'll need some async love)
Another improvement I'd make is that toggle_pin(led1, button) looks like a function call but isn't, as it does not actually execute the coroutine
This is not fully correct and may be a reason for some of the misunderstanding: It absolutely is a function call
Well, of course it's a function call, but it doesn't actually execute any of the named code (with the exception of decorators). It just creates a coroutine object that you then need to either pass to your loop runtime (run it in parallel) or await (run it serially and wait for it). Now what should happen if you don't do either of these? Any decorators run, along with their side effects, but the main code doesn't, which is seldom what I'd want.
Trio has literally exactly the same requirement. trio.plz_invoke_this(function, the, args, the: kwargs) is an alternative calling convention for when you want to synchronously execute an async method. library.run_coroutine(function(the, args, the: kwargs)) is no different - it's just standard form that other Python programmers already understand. Trio doesn't fix the "you didn't await that boi" problem either. Garbage in, garbage out is never more true than when writing slipshod concurrent software (as a slipshod developer, I do know what I'm asking for).
NB:
K, I'll answer both:
what should happen if the coro object is passed to your run_coroutine but the code inside then raises an exception?
Absolutely does not matter beyond defining a consistent semantic. Python has exceptions, and exceptions bust upward through stack frames. If you don't catch them, they will reach the coroutine runner. It's reasonable for the runner to do one or more of these policies, and other reasonable policies can be conceived of:
- End the program, throwing as though __main__ threw off the top (allowing displayio to print it and all that beautiful stuff CircuitPython does for us).
- print() the exception info and drop the coroutine. Not very friendly; I'd rather end the program as above by default, and have a pluggable handler facility for when I just have to get crazy.
How would you handle that situation if the coroutine in question is supposed to run indefinitely and thus won't be awaited for?
It is handled by either of the above policies. If it's the last coroutine and you are using the unfriendly print() scheme, your program ends because you've waited "forever" and the coroutines are done. If you use the developer-friendly option (throw as from __main__ and halt), what happens when __main__ throws? It stops. If you use a custom exception handler, I'd hope that you could re-raise and get the developer-friendly version if your logic demanded it, or equally print and continue, or requeue a new coroutine, or whatever you wanna do.
I apologize for any undue distortion. I realize that this is a simple example and yet, given that, I note that this looks like callbacks.
@Dar-Scott Sounds like you have a handle on what I'm after. In fact I have implemented callbacks for the Adafruit rotary encoder (link if you're curious). It's fine but only has the resolution of loop(), on account of missing coroutines in CircuitPython. Event sources like this ought to be able to optimistically barge ahead of a main loop, and if you had coroutines you could make entire applications reactively sourced from events. Which, again, opens doors for automatic deep sleep modes (imho a real sleeper of a feature 🥁).
Callbacks are just better with coroutines, though you obvs don't need them. I really just want to write event-oriented code for event-oriented applications and have my microcontroller take every advantage of applications structured that way!
Hello again everyone. I'm happy to see this discussion continuing in a spirited manner with a lot of back and forth, ideas proposed and assessed, and folks expressing their needs and wants.
As far as I can tell everyone is doing a good job of communicating well and even handedly, however I will none the less take this opportunity to remind everyone of the code of conduct for CircuitPython and its libraries:
https://github.com/adafruit/circuitpython/blob/master/CODE_OF_CONDUCT.md
This is not meant to suggest that anyone has violated the CoC; were that the case I would say so. It is merely a reminder that it exists and that all parties should hold themselves accountable to it in order to help keep discussions such as these welcoming to all.
Thanks everyone! With any luck 2020 will be the year of (among other things) concurrency for CircuitPython!
@WarriorOfWire I'm glad we seem to be on the same page.
A callback is one task. A coroutine is effectively a sequence of tasks. Yes, a callback can be implemented as a trivial coroutine, but not all of the coroutine overhead is needed.
Of the methods for handling event programming, these come to my mind for this discussion:
- looping, calling update() for all modules requesting it
- built-in primitive callbacks
- async/await and functions built with those
My gut feel (not to be trusted) says that the built-in primitive callbacks are the sweet spot for optimum use of a microcontroller.
This false equivalence argument does not convince me.
Well, it does convince me. I have written a lot of asyncio code which uniformly became shorter, more correct, and easier to understand when I rewrote it with Trio's/anyio's semantics.
A single global coroutine runner / task group does nothing to keep exceptions local because the call stack that led to its invocation gets lost. As an example, take a producer/consumer pattern: if one dies, you want the other to be cancelled too, otherwise you get a deadlock. Yes, you can do this manually, but it's much easier to write your code in a way that makes all of this happen automatically. This includes restarting a failing part of your code cleanly, without affecting all the others and without writing a single extra line of exception-handling code. You simply cannot do that with a single global task group.
it's a shocking piece of code to read
It's exactly equivalent to the while True: value = await thing.on_change() code promoted by you, simply adding a bit of syntactic sugar / a more structured way of handling a stream of events (take your pick) by removing one line of code. Any technical problem of the one is exposed equally by the other. There is no stack frame; this is an object with an __anext__ method, not a full async generator.
I strongly recommend reading https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/ and actually experimenting with some nontrivial Trio code vs. its native asyncio equivalent before making any opinionated statements about the uselessness of it all, let alone calling other people's code "shocking".
@WarriorOfWire:
Of the methods for handling event programming, these come to my mind for this discussion:

- looping calling update() for all modules requesting it
- built-in primitive callbacks
- async/await and functions built with those

My gut feel (not to be trusted) says that the built-in primitive callbacks are the sweet spot for optimum use of a microcontroller.
The main problem I have with the first two is that you need to decompose long-running subtasks into state machines. People (myself included) are, in general, pretty good with linear code that has a couple of awaited calls sprinkled in (signalling points where the flow of control can be suspended/resumed, implying that places without an await cannot be interrupted thusly) and in general pretty bad at keeping track of callback chains and state machines. (The assumption is that interrupting your code between any two statements in order to run some callback is not an option. People are equally bad at getting their locking correct; deadlocks in a microcontroller are worse than on the desktop.)
One can easily import "primitive callbacks" into an async/await system; just teach the callback to trigger an event you're awaiting. Thus, if the default callback triggers an event (and signals an overrun if the previous one hasn't been awaited yet) you get both – just override it if necessary.
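A sketch of that bridge, assuming nothing beyond stdlib asyncio (the EventBridge name and its overrun flag are made up for illustration):

```python
import asyncio

class EventBridge:
    """Bridge a primitive callback into async/await: the callback sets an
    event, a task awaits it, and an overrun flag records a second fire
    before the first was consumed."""
    def __init__(self):
        self._event = asyncio.Event()
        self.overrun = False

    def callback(self, *_):
        # hand this to whatever callback/interrupt registration API exists
        if self._event.is_set():
            self.overrun = True  # previous event was never awaited
        self._event.set()

    async def wait(self):
        await self._event.wait()
        self._event.clear()

async def demo():
    bridge = EventBridge()
    bridge.callback()   # two fires before anyone awaits...
    bridge.callback()   # ...so the second one signals an overrun
    await bridge.wait()
    return bridge.overrun

print(asyncio.run(demo()))  # True
```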
Thank you everyone for your continuing comments. Just FYI, async/await are available since we have MicroPython as our base code, but they are not currently turned on. So we could build something on top of those.
@smurfix (That was my comment.) I recognize that async/await allows for easier and better expression of some asynchronous tasks. However...
I keep having to address running out of memory. I worry that a full async/await would make things a lot worse. (I have not tried it in MicroPython, so that is just my gut feel.)
Yes, the await can be a marker reminding the programmer that things can change.
The use of async/await might improve overall performance but it will hurt latency.
A problem with async/await for me is that it is a shock to beginning programmers, both in syntax and in concepts. Getting things to run can be frustrating.
I have pondered a little on what might be a simple approach to doing several things at once, easy concepts for beginners. I just made this up, so it might be quite flawed.
run One way to do several things at once (in my proposed scheme) is to "run" several functions, as one might run several programs. This builds upon common concepts and borrows the term "run". The functions share the processor and might even communicate amongst themselves. They coexist, cooperate and collaborate; they might be called co-functions or coroutines. In this scheme, they are co- simply because they were run at the same time.
side-effects and extended side-effects Though, in general, we favor functions with no side-effects in programming, such as sin(), we know that often what we want is functions with effects, such as write(). If there is more than one co-function, a function might have effects beyond the ones in its purpose. That is, there might be effects that take place during the call, such as variables changed or output pins changed, that are done by another co-function that has been run. Calls might also take longer than expected.
Just as a programmer takes care in using functions with side-effects, one takes care when using functions with extended effects. (Such functions are called "suspendable" in Kotlin, I think.) One can expect that there are no changes in variables shared among co-functions between calls to functions with extended effects.
A function has extended effects if it is a built-in function with extended effects or it can possibly call a function with extended effects. (It is not defined as such and a calling of it in a special way does not make it so, just as a function with side-effects does not have any special defining method or calling method.)
program The program is itself a co-function. Built-in functions that might take a while are now functions with extended effects, that is, they might let other co-functions do some work. If there are any. The behavior for the calling co-function is the same. Some new built-in functions might be needed. (The program co-function might even be implemented the same as other co-functions, or built-in functions know the context, it is not visible to the user.)
only run With this scheme, the only command introduced to the beginning programmer is "run". No new syntax is introduced. The concept of when extended effects might occur must eventually be understood, too, but that is not needed immediately.
Advanced programmers can build any of their favorite async functions from these.
This is going to sound egotistical, but I really do urge everyone coming to this issue from an asyncio or callbacks-oriented background to learn a bit about Trio. You may or may not end up liking it – when has there ever been a programming concept that everyone liked :-) – but it's genuinely a paradigm-shift compared to other approaches, so you kind of have to spend some time with it to "get" how it fits together, and without that there's a lot of talking past each other.
To convince you that it's worth your time, I'll say that we do frequently get responses like @tgs's post up-thread, and the asyncio maintainers and Java core team have both said that they think the Trio-style "structured concurrency" approach is the way of the future.
Some good starting points would be this talk/live demo, the tutorial, or the Notes on structured concurrency for a more theoretical take.
@njsmith I read your blog on Trio. You refer to the clear fact that "nurseries" have the same expressive power as "go statements." That is because Trio does not limit what goes inside of them. It neither prevents a user from stashing the result of a nursery expression's __aenter__(), e.g., on a package-level variable nor does it eliminate the "go statement" (the apparent point of the library) even from the library's own feature set. The "nursery" construct is just one of many useful opt-in aids for writing clear, expressive software; callbacks and await statements are 2 other such expressive and useful aids.
The structure of "async with nurseryfactory() as nursery" that the blog post is set on is straightforward to implement in standard Python asyncio. Let's test that assertion with a 30-minute timeboxed starting point, shall we... https://gist.github.com/WarriorOfWire/a5d15350c55cb3b2b61b74431e7cb484 You can draw that same picture around the School. I could have used gather() instead but 🤷♀ it seems like the issue at hand is more around the scoped lifecycle of the tasks and it's nearly bedtime.
I don't dispute the utility of such a structure. I've used this type of construct professionally for years. It's a good tool for rapidly giving easy-to-scan guarantees on batched parallel work like a web crawler, a directory scanner or a database ETL application. It's a square peg and there are many square holes. Quite a few are round though and Trio simply gets in the way of those.
Pretty sure by now I've made my stance clear, as it has materialized over the past couple of days in relation to the state of CircuitPython, but let me summarize, as I'll be unable to comment for several days going forward:
Here's a good video from pycon Australia to get you ramped up on coroutines on Python microcontrollers: https://www.youtube.com/watch?v=tIgu7q38bUw
Good luck CircuitPython maintainers! I can't wait to write tidy little coroutines on all of my Feathers!
It neither prevents a user from stashing the result of a nursery expression's __aenter__(), e.g., on a package level variable nor does it eliminate the "go statement" (the apparent point of the library) even from the library's own feature set.
You're missing the point. The point isn't that you may or may not save nurseries to some variable and pass them along to some other code; that doesn't violate any invariants. The point is that the nursery's __aexit__() will block new tasks from being created in the nursery, wait for that nursery's existing tasks to finish, let you stop them all with one function call, and auto-cancel all other tasks when one of them raises an exception (and propagate that exception), without you having to write a single additional line of code.
Yes, you can implement a toy version of that on top of asyncio in ten minutes; others have done so with a lot more effort (cf. anyio). So? That's not the point. You can easily implement for and while loops with goto, too, but if the programmer is still allowed to freely use "goto" that doesn't buy you very much.
Frankly, I haven't found any of your round holes yet. After all, there's no functional difference between registering a callback and starting a subtask that loops on await object.next_change(), and if you need a long-running non-scoped background task then nobody prevents you from creating a global taskgroup and starting your task there.
However, there is a conceptual difference in that any non-trivial program requires you to remember to un-register the callback or stop that non-scoped task yourself at some point; any exception the callback or global task raises must reach your program's top level and/or requires additional code to notify your "main" code; and its stack trace will not tell you how it got there (i.e. where it was registered / started). All three of those are not desirable from my POV, let alone from that of a beginner.
While I know that fire-and-forget is "easier" when all you want to write is a Blinkenlight equivalent, guess what happens when you later connect the Blinkenlight to MQTT and the network connection breaks? I'd rather spend ten minutes more up front to explain how taskgroups work ("This is how you do it." "That's complicated." "A bit, but it's just one extra line (the async with that creates the taskgroup) and it works the same whenever you need it, so it's actually quite simple." "OK."), than, somewhat later, waste half a day teaching people how to correctly clean up after themselves – they've gotten used to doing everything the asyncio.create_task way, and their code reflects that. Been there, done that. "Why does this crash?" "You didn't tell it to stop." "But why doesn't it just know that I don't need that any more?" "Because [computers are stupid, but instead I say] you didn't tell it to." "Computers are stupid." "…"
Registering a callback or starting a task with asyncio.create_task may seem more natural for you, but that's because that's what you're used to. That's understandable, but it doesn't help beginners (or in fact experts) to write better code. Been there, done that …
Asyncio isn't "standard". It's a library that forces you to jump through quite a few hoops when you want to write correct programs. Trio isn't "standard" either, and it doesn't have additional hoops, just different ones – and when you're done your program actually requires fewer hoop jumps than with anything else. (I'm not just saying that – I rewrote a bunch of libraries with Trio. Guess what happened to the line count.)
One of the staple examples of Trio is the Happy Eyeballs algorithm, which requires 40 lines when you think in Trio's terms, but 400 when you stay with the asyncio mindset. That alone should tell us something about which way is "better".
I have to admit that I am impressed with the explicit cancellation scopes, but I wonder how hard it would be to actually implement this in CircuitPython. We don't have all the usual exception handling machinery that grown-up Python has; you can't even inspect an exception object inside Python code. The talk mentions that before 3.7 Trio had to do some magic with ctypes to rearrange the stack for the cancellation to work properly — obviously we can't do that in CircuitPython, but of course something could be added in C to handle it. What makes me nervous is the thought about how big it would be and how much extra code we would need to run for this. You know, CircuitPython doesn't even have await and async enabled yet, and in MicroPython await is just an alias for yield from. There is no async while, async with or async for or any of that. Would we need to add all this, or could it work with just plain versions of those?
What do you mean, no async with? The code below has worked since February 2019. (async with/for by itself was added in January 2016.) … and no, there is no async while. CPython doesn't have that either.
$ cat test-trio.py
#!/usr/bin/env micropython
import trio

async def bar(s, n=None):
    try: await trio.sleep(s)
    except: return
    print("Slept", s)
    if n: n.cancel_scope.cancel()

async def foo():
    async with trio.open_nursery() as n:
        n.start_soon(bar, 0.1)
        n.start_soon(bar, 1.2, n)
        n.start_soon(bar, 2.3)

trio.run(foo)
$ upython test-trio.py
Slept 0.1
Slept 1.2
$
Works. Micropython master branch, thus I can only assume that it works with CircuitPython too, once you merge up to the current µPy -- version 5.0 is in beta, so that shouldn't take too long. Archives: https://github.com/smurfix/trio/tree/micro, https://github.com/smurfix/micropython-lib/tree/trio.
Cancellation isn't correct right now (the nursery doesn't catch the Cancelled exception; cancelling by itself does work, as the above code demonstrates; if you remove the except: return handler you get a traceback). But that looks like a minor problem. I need to port the code to the current Trio master anyway, since this experiment is half a year old, and check what the problem is.
NB, I am not deeply enough into how Micro/CircuitPython is built to assess any code size questions. There's probably a fair bit of Trio that can be trimmed away, and if we decide to do this then at some point we'll have to think about feature (im)parity and related matters.
Call stack hacking is/was necessary for Trio IIRC in order to collect the frame information so that a MultiError (the exception that's raised when more than one subtask of a nursery raises an exception at the same time) carries a reasonable stack dump. Micropython doesn't do things that way, so in the first version I just dropped this part.
I wasn't aware that you are already working on it. And looks like I wasn't up to date with the async status in MicroPython — I stopped following closely before it was implemented, and missed it, thanks for the correction.
No problem. I wasn't "working on it", strictly speaking; my branch is more like a proof-of-concept hack to discover whether µPy might be up to the task. Much would need to be done, preferably by people who actually know their way around Micro/CircuitPython (I don't – not yet anyway), to transform it into a useable ecosystem.
I think at the moment the most important question is what kind of functions need to be exposed to the Python side of things so that async libraries could be implemented in the first place. Right now I'm mostly thinking about a select-like call, that could wait on multiple events at once (files, uart/spi/i2c, gpio changes, timers, etc.). Without this, we are reduced to a busy loop with polling, which can be sufficient for a proof-of-concept for the API, but not really useful.
A Unix-style "select" actually isn't a good match for an async main loop. You don't want, or need, to shoehorn every feature that might conceivably wake up a task into a common select call. It mostly-works for Unix because almost-everything is a file descriptor there, but no sentence that starts with "everything is a …" makes sense on µPy.
Here's how I would handle things:

- Attach a default callback to each interrupt that sets a trio.Event object associated with the interrupt so that the task handling it will wake up. Replace the callback if you need fast reaction time. There's no async context here and your code really shouldn't raise an exception, but you can wake up tasks.

So that's the bottom-up layer. The top-down view is pretty straightforward:

- Give each event source a handler object and wait for events by calling handler.__anext__(). This method calls self.event.wait(), clears it (currently: by replacing it), clears and re-enables the interrupt, and returns whatever result it should generate.

Disclaimer: yes, I know that this is a bag of words and no code, much less code that proves the concept. We might want to discuss this further on Gitter or Discourse or somewhere else that's more suitable than a CircuitPython issue, before somebody actually starts coding.
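To make the bag of words slightly more concrete, here is a rough stand-in sketch using stdlib asyncio in place of Trio (the InterruptHandler class, its _irq_callback, and the pin/IRQ plumbing are all invented; a real port would hook an actual hardware interrupt):

```python
import asyncio

class InterruptHandler:
    """Sketch of the top-down view: an async iterator whose __anext__ waits
    for the interrupt's event, re-arms it, and returns the captured result."""
    def __init__(self):
        self._event = asyncio.Event()  # stand-in for a trio.Event
        self._result = None

    def _irq_callback(self, result):
        # the "default callback" installed on the (imaginary) interrupt
        self._result = result
        self._event.set()  # wake the awaiting task

    def __aiter__(self):
        return self

    async def __anext__(self):
        await self._event.wait()
        self._event = asyncio.Event()  # clear by replacing, as described
        # (a real port would also clear and re-enable the interrupt here)
        return self._result

async def demo():
    h = InterruptHandler()
    # simulate a hardware edge arriving 10 ms from now
    asyncio.get_running_loop().call_later(0.01, h._irq_callback, "edge!")
    return await h.__anext__()

print(asyncio.run(demo()))  # edge!
```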
@smurfix I think I'm missing something. How does Trio create a continuation?
use case...
I have a class that handles UART communication with an audio Bluetooth module. It has two levels of flow control, RTS/CTS and "OK". Messages belong to a particular channel and those have to be split apart on receiving and either queued or callback'd. Currently, a polling loop calls update() often. (Well, actually, it is named spin(), but I noticed that Debounce has update().) The function update() is expected to return quickly if it has nothing to do. I have goofed and let code callback to a function that indirectly calls update() and I have had to create some rules and a uniform way of doing things.
@Dar-Scott It doesn't. CPython uses fancy syntax around generator functions and calls them coroutines. µPy does the same thing. A coroutine is not a continuation as that word is usually used, because it's not a copyable object.
To explain: Python only allows you to continue a coroutine (you call its send method) and you get a value and some state back (at the next point the coro uses yield (not yield from), which is where you continue with the next send). This is not a symmetric relationship; the coroutine always returns to the caller of that send, and you can't copy a coroutine state to, for instance, retry with sending B if it crashed when you continued it with sending A.
In contrast, "real" continuations are symmetric. Calling into a continuation typically involves creation of a new continuation which the code you just called can use to get back to you. The magic that does this is named "call-cc" in Lisp ("call-with-current-continuation"). Using that primitive makes for truly mind-bending[ly simple/crazy/???] code, plus you can (indeed must) have code that looks like procedure calls but which never ever returns, but Python can't do that, thus we tend not to call it "continuation".
All Python coroutines are created by simply calling an async function without "await". You can then use coro.send(x) to send some X into the coroutine, which essentially becomes the return value of the yield with which the coroutine last suspended itself.
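The send/yield handshake is easy to see with a bare generator:

```python
def coro():
    # each yield suspends the function; send() resumes it, and the sent
    # value becomes the result of the yield expression it was paused on
    x = yield "ready"
    y = yield f"got {x}"
    yield f"got {y}"

c = coro()
print(next(c))    # ready  (run up to the first yield)
print(c.send(1))  # got 1  (1 becomes x)
print(c.send(2))  # got 2  (2 becomes y)
```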
Asyncio allows you to create coroutines yourself, and it's OK with sending anything at all to and from them. With Trio you don't do that; you create a task by calling nursery.start(async_fn) so that the nursery can call async_fn() itself in order to manage the new task for you. Also, the messages it sends are strictly Trio-internal types so that there's no ambiguity about the task's state (the task runner must know exactly what the task is waiting for, and the nursery must wait for all its tasks to end before leaving its context).
I recognize we all have different notions of what is simple. I like the idea that CircuitPython implements something simple and the big boys can create from that tools they like. These are my notions of simple.
Moving functions around as data is sometimes hard to grok, but it seems to be introduced early and is an important concept. I think we can assume that concept, at least to the extent of modifying examples. (Coroutines, as the term is commonly used, can be confusing, because they are not really functions, but wrappers around functions.)
I have seen programmers get confused as to what is synchronous and what is asynchronous. In discussion, I find some interesting and understandable sources of confusion. Also, the words tend to focus on the wrong scope. This is the only time I use those words.
Perhaps adding this functionality should be done in a way consistent with the nature of teamwork for CircuitPython implementation. Built-in I/O objects should be able to be modified in any order and CircuitPython works along the way. New built-in I/O objects should be easy to design to be consistent with this functionality.
I see a couple simple ways to implement this.
callbacks This is the simplest to implement, but it requires adding methods to I/O functions (built-in and in Python) to connect callbacks. These can use on_* names. These are called with only static contexts, but exceptions work as usual. Just as in other uses the parameter is either a function name (variable) or a lambda. Scheduling is undefined. Or not?
An important question is "when do these occur?"
Between every simple statement is a possibility. This reduces latency. However, there is no guaranteed maximum latency. It requires use of single statement idioms for cooperation, but those can have wrappers. (There might be some clever way these can occur between iterations.)
Another possibility is for them to occur only when a certain function is called. This is similar to, but faster than, calling a function that calls update(). The value and available attributes do not have to be polled.
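A sketch of that second possibility (all names are invented; this is not the adafruit_debouncer API): events are merely recorded until some dispatch function is called, so callbacks can never fire between arbitrary statements.

```python
class Button:
    """Callbacks run only inside dispatch(), never between statements."""
    def __init__(self):
        self._pending = []  # events recorded by the (hypothetical) low level
        self._on_fall = []

    def on_fall(self, fn):
        self._on_fall.append(fn)

    def _record_fall(self):
        # would be called from an interrupt or a hardware poll
        self._pending.append("fall")

    def dispatch(self):
        # the one and only place where user callbacks execute
        while self._pending:
            event = self._pending.pop(0)
            if event == "fall":
                for fn in self._on_fall:
                    fn()

hits = []
b = Button()
b.on_fall(lambda: hits.append("fell"))
b._record_fall()
assert hits == []  # nothing happens until dispatch is called
b.dispatch()
print(hits)  # ['fell']
```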
co-functions This is an alternative expressive approach. I just made up the name "co-function" (which might conflict with some usage) to make a distinction against coroutines. Here, co-functions are simply functions that are used in a certain way. No special labeling or wrapping is needed or desired. All of the concepts that one learns about functions in variables and about lambdas apply.
A function called (say) run() will start a co-function running. The parameter is the function. The function is passed just as any lambda or function might be. Any function at any time can call run(). There are no rules about what can do what when. (There might be some defined "shoulds".)
Borrowing a term from Kotlin, some functions are suspendable. (Spelled with an "a".) This allows execution to be shared; that is, execution might stop so that some other co-function gets to use it. Some variables might change. The length of time in a suspendable might be longer than that expected for the simple functionality of the function.
Built-in classes with functions that currently block can make them suspendable; an example is reading from a UART. Some I/O built-in classes can have these functions added as prep before the transaction; an example is digital I/O. Maybe time.sleep() can be made suspendable, or a new built-in class that waits until (say) monotonic crosses a value can be added in the prep stage. Any function that can potentially call a suspendable function is suspendable; a naming convention might be handy (or burdensome and unreliable). If no co-function can be run, the system spins until one can be run.
This approach means that code between suspendables will not see any surprise variable changes. That is, it is a "critical section". Also, software-based timing should work as usual. (An alternative that improves latency would be to drop the notion of suspendables and allow switching between simple statements, requiring use of single statement idioms for cooperation.)
What about calling a suspendable from the top level? It works as expected. That is, the top level is virtually a co-function. (And might be implemented as such.)
There are no surprises in context. Exceptions that go up out of a co-function end up at the top.
This does require maintaining some sort of continuation, a stack or something. This might be tricky and take up RAM.
Implementation might be in stages. The first is the prep where blocking functions are added to built-in I/O classes. The second is the addition of run and a single class with a suspendable function (say, a timer). The third and fourth are the conversion of all blocking functions to suspendables.
I am OK with both of these. I am growing fonder of the latter but recognize that it might have a big step in switching.
We might want to discuss this further on Gitter or Discourse or somewhere else that's more suitable than a CircuitPython issue, before somebody actually starts coding.
We are on CircuitPython's Discord all the time, and there are weekly meetings there for discussing more fleshed out ideas in a bigger group as well.
@smurfix My ignorance is showing here. You seem to have stepped around what I was trying to ask, and I suspect that is because it is obvious. How do you save where you are in Python? I can picture an ad hoc method that saves lambdas representing the rest of the work, but that means translating the code in a special way that does that, and that becomes interesting in compound statements. I think I am stuck on some 20th-century concept.
@Dar-Scott This was done in Python initially only for the so-called "iterator generators", functions that instead of (or in addition to) return use yield. "Calling" such a function produces an iterator object, which executes until the first yield, and then you can call next() on that object to get it to execute to the next yield and so on. Then a yield from statement was added, which basically iterates over another iterator, and yields each value it gets from it.
This turned out to be powerful enough to implement a kind of co-routines. The async keyword was added to make a function a special kind of iterator generator, and the yield from got renamed to await. If you are curious about a very simple example of how this works, you can look at my "meanwhile" library: https://github.com/deshipu/meanwhile
@Dar-Scott You don't. It's implicit when you create and use a generator. The "yield" which you use when executing the generator's code saves the call stack and returns a value to the caller, which then calls send to get your generator to continue where it suspended itself.
There is no other way to create that call stack. The only way to "make something suspendable" is to convert the whole thing to async functions by liberally sprinkling "async" and "await" keywords onto your code. The only way to actually suspend something is to call "yield" in there, or rather await runtime.some_function(), which does it for you, and which manages the generator/coroutine side of the whole housekeeping.
You then need Trio or asyncio or @deshipu's meanwhile to manage the other side, i.e. the list of generators/coroutines which are runnable. This typically involves passing some magic object through that tells the runtime whether / when to resume a coroutine.
The (only, in Python) other way to make multiple things happen is to take some function or method and tell CircuitPython "when this [interrupt] happens, call [that]". This way works for small programs ("when you press the button you turn on a light"), but as things become more complex you want a normal call stack: the command interpreter reads lines, the line reader/editor reads characters, the character input needs to wait for the UART's next-character input – so, rather than forcing you to invert all that logic manually, which is a major source of hard-to-track bugs, I'd like Circuit/MicroPython to use Trio's abstractions, because they work very well IMHO, and hide all that complexity in a way that affords building complex projects without shooting yourself in the foot.
@smurfix Reading your approach with interrupts, I can't help but note that this is exactly how the select function needs to be implemented internally (keeping a list of references to objects that have something to show, and then returning it as soon as it's non-empty or there is a timeout), except it wouldn't use any library-specific objects internally, so that multiple different libraries could be implemented using it on the Python side. Sure, those references would need to be more than just file handles, as in Unix. I would really hate to lock users into just one "correct" way of doing things.
@deshipu The generator function is what I was missing.
Your meanwhile is clean and simple. As is, it is reduced to polling, but allows a certain expressiveness in describing processes. It can be expanded to add priorities or deadlines. Some sort of interrupt or callback might speed that up. The use of select() might work, but it requires some notion of selectables.
I guess I was trying to add to CircuitPython rather than using Python (which includes user definable generator functions) in yielding.
Though I suppose there can be some magic in the compiler that translates a generator function to a bunch of lambdas, I suspect there has to be a continuation (of a sort) saved. That is, the yield has to get access to the Python stack frame and stuff it somewhere. So, implementing a generator is really hardly more complicated than adding primitive suspendables. The same core mechanism is required. Flipping that around, if the code is there to optionally implement generators, then it would be straightforward to implement primitive suspendables.
Please do not underestimate the level of experience that a "beginner" has when they start out with Circuitpython. Some of us have been programming with Python for several years, but have not yet ventured into some of the more advanced features. I would put myself in that category, having written quite a lot of robot control software in Python. I think I would be somewhere in the intermediate level, where I am just now starting to need some of the more advanced features Python offers. Some of us are no doubt true beginners, not having ever programmed before in any language. Also, clearly, there are those who are way more advanced than I am and who have used many of Python's more advanced features. We are all over the spectrum as far as our experience with programming, and may or may not have used Python.
I am currently working on software that will operate a small autonomous robot. It turns out that I am actually creating a sort of tool kit that can be used to construct code for many different robots. This is very much in the early stages and I am just now starting to look at breaking code into functions that can easily be used elsewhere. This current code can probably be called a type of polling, where a timer is checked and if it has reached a limit or overflowed, the task is run and then the timer is reset for the next interval. I do this all in Circuitpython.
For instance, I can already fire off tasks at a specified time interval that will interrupt the flow of my main loop to execute and then return control back to the main loop. I initialize timers for several tasks before the main loop starts and check for end count or overflow within my main loop for each task. So far, I have defined three different tasks that run at different intervals.
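That timer-check pattern can be sketched in a few lines (the IntervalTask name and structure are invented for illustration; time.monotonic() is the same clock CircuitPython provides):

```python
import time

class IntervalTask:
    """Each task stores its next deadline; the main loop runs whichever
    tasks are due and resets their timers for the next interval."""
    def __init__(self, interval, action):
        self.interval = interval
        self.action = action
        self.next_due = time.monotonic() + interval

    def poll(self, now):
        if now >= self.next_due:
            self.action()
            self.next_due = now + self.interval  # reset for the next interval

hits = []
tasks = [
    IntervalTask(0.01, lambda: hits.append("sensors")),    # fast task
    IntervalTask(0.03, lambda: hits.append("encoders")),   # slower task
]

deadline = time.monotonic() + 0.05
while time.monotonic() < deadline:  # the "main loop"
    now = time.monotonic()
    for t in tasks:
        t.poll(now)

# the fast task fires at least as often as the slow one
print(hits.count("sensors") >= hits.count("encoders"))  # True
```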
While this works well for what I am doing right now, I definitely see where I could make good use of some form of concurrency. Robots can easily have many different things that need to be going on at the same time. Distance sensors need to be checked to make sure the robot does not run into obstacles (moving or inanimate), wheel encoders need to be checked to sense whether a wheel has stalled and to check how far the robot has traveled, tilt sensors need to be checked to see if a robot is tilted more in a direction than what it can deal with, etc. You get the idea.
Right now, I am not sure how I am going to accomplish all of this with Circuitpython, but I am sure going to give it a good solid go. This may not belong in this thread, but I thought it would be good to have a context within which concurrency functions might be used. Perhaps this context might lead to a solid method of implementing concurrency within Circuitpython.
Yup, all of that is quite possible in vanilla CircuitPython; it just takes a bunch of onerous state to maintain and careful registration of your sensors in loop() (of course there are near-infinite ways to do this).
If you had asyncio from the usual Python library or something like existing clever MicroPython implementations you could organize each sensor as its own mini almost-realtime program and contribute observations to a central robot state or events to subscribers (or both).
The need for asynchronous peripheral APIs is easily demonstrated. If you need to send an I2C message and then wait 200 ms to receive the response, you can't block loop() for that time in some projects. Existing peripheral API developers then need to choose between an ad-hoc polling API or just letting their devices be unsuitable for realtime use cases. If we had a standard CircuitPython async/await approach, there would be one async user experience to target, and it's the "right" one that users and developers expect. Peripheral developers could then provide synchronous and asynchronous APIs with confidence, as makes sense for their devices, while users benefit from uniformity and the expanded applicability of devices in interactive projects (e.g., robots and screens) where milliseconds matter.
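To illustrate the point, here is a hedged sketch in desktop `asyncio`: the 200 ms device wait is modeled with `asyncio.sleep`, so a second task keeps running during the wait instead of being frozen the way it would be by a blocking `time.sleep(0.2)`. The function names and values are invented for illustration:

```python
import asyncio

async def read_sensor_async():
    # An I2C request would be sent here; then we wait 200 ms for the
    # response WITHOUT blocking the event loop.
    await asyncio.sleep(0.2)             # other tasks keep running meanwhile
    return 42                            # stand-in for the device's response

async def blink_counter():
    # Stand-in for servicing an LED or other loop() work during the wait.
    n = 0
    for _ in range(10):
        n += 1
        await asyncio.sleep(0.01)
    return n

async def main():
    # Both finish in roughly 0.2 s total; a blocking sleep in the sensor
    # read would have frozen blink_counter for the whole wait.
    return await asyncio.gather(read_sensor_async(), blink_counter())

result = asyncio.run(main())
```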
There are existing successful implementations to pattern off of both in Python and MicroPython. I'm still eagerly hoping 2020 is the year of async
for CircuitPython!
Yes, for robots, blocking would be very bad, even for a very short interval. It could cause important event(s) to be missed. I would not want my robot to collide with an obstacle because it could not catch the distance sensor event. I really hope the team can come up with a good and sensible implementation of concurrency for Circuitpython.
One use case I haven’t seen expressed is that of having real-time counter displays being asynchronously driven while other things are going on, and being able to stop/start/reset those counters. I was going to do a project that needed those about 6 months ago, and gave up on CircuitPython.
Personally, I would be happy with being compatible with how MicroPython does threads, and then adding higher-level constructs afterwards. That way those who need asynchronous threads can have them, even if they are hairy monstrosities and not elegant beauties. Some of the proposals look really nice, but are useless if they are not available.
@TonyLHansen you can already use the "meanwhile" library (https://github.com/deshipu/meanwhile) — it implements a simple async reactor mostly compatible with how the big Python async functions work. The only downside of it is that in the absence of internal mechanisms it works by polling, but that shouldn't be a problem for things like counters.
Threads are rather hard to implement on small microcontrollers with very limited memory, and they are very counter-intuitive to program (it's very easy to write a program that has race conditions).
What is the current status of this?
After skimming over the long discussion here, it seems like there was a sample implementation at #1415, but it won't be merged, and there's been a lengthy discussion comparing different approaches.
@bmeisels and I are working on an app framework for the AramCon Badge 2020, and some kind of async I/O will be super useful for us. Right now we're looking at the implementation from @deshipu, but we'd love to know what direction CircuitPython is going...
@deshipu, thank you for the response.
> @TonyLHansen you can already use the "meanwhile" library (https://github.com/deshipu/meanwhile) — it implements a simple async reactor mostly compatible with how the big Python async functions work. The only downside of it is that in the absence of internal mechanisms it works by polling, but that shouldn't be a problem for things like counters.
While "meanwhile" looks like it can handle simple counters, I don't see any way to control the threads once they've started. The primitives aren't there. From the sample program, it also looks like one timer needs to know details about the other timer? Or is that a mistake in the sample program?
> Threads are rather hard to implement on small microcontrollers with very limited memory, and they are very counter-intuitive to program (it's very easy to write a program that has race conditions).
I totally agree that they're hard to implement. But I'm convinced that you can create race conditions with ANY multi-tasking/threading setup.
As for being counter-intuitive to program, that's true with ANY paradigm shift. (If you're used to apples, then passion fruit can be weird.) That's no reason to prevent their use.
"Striving for excellence motivates... striving for perfection is demoralizing" -Harriet B. Braiker
"Perfect is the enemy of good" -Voltaire
@TonyLHansen
There are no threads. This is cooperative multitasking — the tasks suspend their execution and let other tasks run explicitly, by yielding control back to the main loop. The tasks don't have to know about each other's details; I'm not sure what you mean here.
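That yield-control-back-to-the-loop idea can be sketched with plain generators. This illustrates the general technique, not the actual `meanwhile` API; the task bodies are invented for illustration:

```python
def task_a(log):
    for i in range(2):
        log.append(("a", i))
        yield                     # suspend: hand control back to the loop

def task_b(log):
    for i in range(3):
        log.append(("b", i))
        yield

log = []
tasks = [task_a(log), task_b(log)]
while tasks:                      # simple round-robin main loop
    for t in tasks[:]:            # iterate over a copy so we can remove
        try:
            next(t)               # run the task until its next yield
        except StopIteration:
            tasks.remove(t)       # task finished; drop it
```

Because each task yields voluntarily, there is no preemption and therefore no data race between tasks: every shared access happens at a point the programmer chose.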
I'm also not sure what kind of primitives you require. Maybe you could give me a simple example of the kind of a program you wanted to write with those counters, and I can show you how this can be done with that library?
> But I'm convinced that you can create race conditions with ANY multi-tasking/threading setup.
No, there are setups that force your programs to be correct. I mean, obviously you can always create race conditions communicating with external systems, but that's unrelated to parallelization of your program — a completely single-threaded code can do that too.
> What is the current status of this?
> After skimming over the long discussion here, it seems like there was a sample implementation at #1415, but it won't be merged, and there's been a lengthy discussion comparing different approaches.
> @bmeisels and I are working on an app framework for the AramCon Badge 2020, and some kind of async I/O will be super useful for us. Right now we're looking at the implementation from @deshipu, but we'd love to know what direction CircuitPython is going...
We don't have any immediate plans to add async. @dhalbert is currently working on _bleio on Raspberry Pi with the Bleak library, which uses Python asyncio and may inform our long-term direction.
Interesting related discussion here: https://forum.micropython.org/viewtopic.php?f=2&t=8429
These are some strawman thoughts about how to provide handling of asynchronous events in a simple way in CircuitPython. This was also discussed at some length in our weekly audio chat on Nov 12, 2018, starting at 1:05:36: https://youtu.be/FPqeLzMAFvA?t=3936.
Every time I look at the existing solutions I despair:
I don't think any of these are simple enough to expose to our target customers.
But I think there's a higher-level mechanism that would suit our needs and could be easily comprehensible to most users, and that's
Message Queues
A message queue is just a sequence of objects, usually first-in-first-out. (There could be fancier variations, like priority queues.)
When an asynchronous event happens, the event handler (written in C) adds a message to a message queue. The Python main program, which could be an event loop, processes these as it has time. It can check one or more queues for new messages, and pop messages off to process them. NO Python code ever runs asynchronously.
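A minimal pure-Python sketch of that flow, with a `deque` standing in for the C-side queue and an ordinary function standing in for the interrupt handler (both are stand-ins, not the proposed API):

```python
from collections import deque

# Bounded FIFO standing in for the C-side message queue.
queue = deque(maxlen=8)        # a full deque discards from the other end

def on_button_press(pin):
    # In the real proposal this would be a C event handler; here we just
    # simulate hardware pushing an event onto the queue.
    queue.append(("press", pin))

# Simulate a burst of events arriving "asynchronously"
for pin in (1, 2, 1):
    on_button_press(pin)

# Main-loop side: plain synchronous Python pops and processes events.
handled = []
while queue:
    event, pin = queue.popleft()
    handled.append((event, pin))
```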
Examples:
When you want to process asynchronous events from some builtin object, you attach it to a message queue. That's all you have to do.
There are even already some Queue classes in regular Python that could serve as models: https://docs.python.org/3/library/queue.html
Some example strawman code is below. The method names are descriptive -- we'd have to do more thinking about the API and its names.
Or, for network packets:
For UART input:
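The original strawman snippets are not reproduced here, but the shape of the idea might look like the following sketch. The `MessageQueue` class, the `attach` method, and the `FakeUART` stand-in are all invented for illustration, not a real CircuitPython API:

```python
from collections import deque

class MessageQueue:
    """Hypothetical bounded queue a peripheral could be attached to."""
    def __init__(self, maxlen):
        self._q = deque(maxlen=maxlen)
    def push(self, item):            # would be called by the C-side driver
        self._q.append(item)
    def pop(self):                   # called by the Python main loop
        return self._q.popleft() if self._q else None

class FakeUART:
    """Stand-in for busio.UART; real hardware would push on RX interrupt."""
    def attach(self, queue):
        self._queue = queue
    def _simulate_rx(self, data):    # pretend bytes arrived on the wire
        self._queue.push(data)

uart = FakeUART()
rx_queue = MessageQueue(maxlen=16)
uart.attach(rx_queue)                # attaching is all you have to do
uart._simulate_rx(b"hello")

received = []
msg = rx_queue.pop()
while msg is not None:               # drain the queue in the main loop
    received.append(msg)
    msg = rx_queue.pop()
```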
Unpleasant details about queues and storage allocation:
It would be great if queues could just be potentially unbounded queues of arbitrary objects. But right now the MicroPython heap allocator is not re-entrant, so an interrupt handler, a packet receiver, or some other async thing can't allocate the object it wants to push on the queue. (That's why MicroPython has those restrictions on interrupt handlers.) The way around that is to pre-allocate the queue storage, which also makes it bounded. Making it bounded also prevents queue overflow: if too many events happen before they're processed, events just get dropped (either the oldest or the newest, say). So the queue creation would really be something like:
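One possible shape for such a bounded queue with an explicit drop policy, sketched in pure Python (all names here are hypothetical; a real implementation would pre-allocate storage on the C side rather than grow a Python list):

```python
from collections import deque

class EventQueue:
    """Hypothetical bounded queue that drops events when full."""
    def __init__(self, maxlen, drop="oldest"):
        self._q = deque()
        self._maxlen = maxlen
        self._drop = drop            # "oldest" or "newest"
        self.dropped = 0             # count of events lost to overflow

    def push(self, item):
        if len(self._q) == self._maxlen:
            self.dropped += 1
            if self._drop == "oldest":
                self._q.popleft()    # make room: discard the oldest event
            else:
                return               # "newest": discard the incoming event
        self._q.append(item)

    def pop(self):
        return self._q.popleft() if self._q else None

q = EventQueue(maxlen=2, drop="oldest")
for i in range(4):
    q.push(i)
# Events 0 and 1 were dropped to make room; 2 and 3 remain in the queue.
```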
The whole idea here is that event processing takes place synchronously, in regular Python code, probably in some kind of event loop. But the queues take care of a lot of the event-loop bookkeeping.
If and when we have some kind of multiprocessing (threads or whatever), then we can have multiple event loops.