Closed — douglas-raillard-arm closed this 3 weeks ago
PR updated with:
- Use nest_asyncio when possible, as the greenlet version might execute the coroutine on a separate thread in some circumstances, whereas nest_asyncio never needs to. This is important as the jupyter lab use case unfortunately falls in the "needing the thread" category, since all code (including the imports) executes inside a pre-existing coroutine.
- self.conn (see call_conn decorator). This has a user-visible effect that target.conn might give a different object across calls to Target methods.

PR updated with exception handling in coroutine and exception injection with .throw().
The state of this PR will stay as [Draft] until I add some unit tests to ensure correct operations (and fix all the items in the TODO list in this PR cover letter) as discussed with Vincent Coubard.
PR updated with:
- Unit tests for devlib.utils.asyn.run(), simulating various event loop setups including non-stdlib event loops (uvloop). A fairly comprehensive set of nested run() calls is checked, as well as exception and value injection (coro.send() and coro.throw()).
- nest_asyncio properly removed, as it relies on asyncio.get_event_loop(), which will not create an event loop automatically if one is not set up when called from a non-main thread. asyncio.get_event_loop() is deprecated and cannot be relied upon to create an event loop, so nest_asyncio usage is basically broken. Since we want devlib to be importable in a non-main thread (no reason for it to explode), this is not acceptable and we can just fully switch to our new implementation.

PR updated with extra comments. I now consider it ready, so I'll remove the [Draft] status.
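The kind of value and exception injection those tests exercise can be illustrated by hand-driving a coroutine with send()/throw(). This is only a sketch of the mechanism, not the test code from the PR; `Suspend` is a made-up helper:

```python
class Suspend:
    # Awaitable that suspends the coroutine exactly once, handing
    # control back to whoever is driving it with send()/throw().
    def __await__(self):
        yield

async def demo():
    try:
        await Suspend()
    except ValueError:
        return "caught"
    return "finished"

coro = demo()
coro.send(None)              # advance to the first suspension point
try:
    coro.throw(ValueError)   # inject an exception at the await
except StopIteration as stop:
    result = stop.value      # the coroutine's return value

print(result)  # caught
```

A blocking run() implementation drives coroutines in exactly this way: results flow in through send(), and exceptions raised while servicing an await are injected back with throw().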
Found an issue, please do not merge as-is.
EDIT: the issue is the following: devlib.utils.asyn.asynccontextmanager() provides a wrapper over contextlib.asynccontextmanager() to allow treating the async context manager as a regular blocking context manager. This is achieved by implementing the following:
    def __enter__(self, *args, **kwargs):
        return run(self.cm.__aenter__(*args, **kwargs))

    def __exit__(self, *args, **kwargs):
        return run(self.cm.__aexit__(*args, **kwargs))
This is fine as long as run() simply re-enters an existing event loop. If these run() calls are top-level calls, they will each create a new event loop and try to iterate over the async generator. This is a problem in two ways:
1. The run() call from __exit__() ends up seeing a closed async gen, confusing the stdlib implementation, which then raises RuntimeError("generator didn't stop after athrow()").
2. Awaitables created during the first run() are bound to the first event loop and cannot be resumed by the second one.
Issue 1. can be worked around by hijacking the mechanism the event loop uses to be aware of new async gens.
However, issue 2. is more tricky and is probably a real issue if e.g. the async generator tries to take an asyncio.Lock() across yield calls. The lock future would be handled by the first event loop, then cancelled, and the generator would probably fail to run the 2nd iteration on another event loop.
I'll experiment to see what possibilities exist to fix this problem. This is fortunately the only place that relies on migrating a coroutine between multiple event loops. Trio encountered a similar problem: https://github.com/python-trio/trio/issues/2081
PR updated with:
- _AsyncPolymorphicCM now spins an event loop if necessary in __enter__ and closes it in __exit__, so that both the __aenter__() and __aexit__() coroutines are executed on the same event loop.

@marcbonnici, Vincent Coubard, Branislav Rankov, this should be ready for the last review/testing round. I consider it to be ready.
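The single-event-loop approach can be sketched as follows. This is a simplified stand-in with a hypothetical `BlockingCM` name, not devlib's actual _AsyncPolymorphicCM, which handles more cases:

```python
import asyncio
from contextlib import asynccontextmanager

class BlockingCM:
    # Sketch: run __aenter__ and __aexit__ on one private event loop,
    # so the underlying async generator never migrates between loops.
    def __init__(self, acm):
        self.acm = acm
        self.loop = None

    def __enter__(self):
        self.loop = asyncio.new_event_loop()
        return self.loop.run_until_complete(self.acm.__aenter__())

    def __exit__(self, *exc):
        try:
            return self.loop.run_until_complete(self.acm.__aexit__(*exc))
        finally:
            self.loop.close()
            self.loop = None

events = []

@asynccontextmanager
async def resource():
    events.append("enter")
    try:
        yield "handle"
    finally:
        events.append("exit")

with BlockingCM(resource()) as handle:
    events.append(handle)

print(events)  # ['enter', 'handle', 'exit']
```

Because both halves of the context manager run on the same loop, the async generator behind contextlib.asynccontextmanager() is opened and closed by a single loop, avoiding the "generator didn't stop after athrow()" failure mode described above.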
PR rebased. As of today we started dogfooding that PR in LISA's vendored devlib tree, so it should get some more real-world exposure in the coming weeks.
PR updated with extra tests to check that the blocking API works when invoked from asyncio.to_thread(). Everything does work.
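The scenario those tests cover looks roughly like this; `blocking_compute` is a hypothetical stand-in for a devlib blocking API call:

```python
import asyncio

async def compute():
    await asyncio.sleep(0)
    return 42

def blocking_compute():
    # Blocking facade spinning its own private event loop, as a
    # blocking API does when no loop is running in the current thread.
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(compute())
    finally:
        loop.close()

async def main():
    # Call the blocking facade from async code without blocking the
    # outer event loop, by pushing it to a worker thread.
    return await asyncio.to_thread(blocking_compute)

result = asyncio.run(main())
print(result)  # 42
```

The worker thread has no running event loop, so the blocking facade is free to create its own without clashing with the caller's loop.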
PR updated with a fix to use run() on the return values of anext() for async generators, with the matching unit test.
PR rebased
Updated with a check to only call loop.shutdown_default_executor() if it exists, since it was added in Python 3.9:
https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.shutdown_default_executor
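The guard amounts to something like this (a sketch of the version check, not the exact devlib code):

```python
import asyncio

loop = asyncio.new_event_loop()
try:
    loop.run_until_complete(asyncio.sleep(0))
finally:
    # loop.shutdown_default_executor() only exists on Python >= 3.9,
    # so probe for it instead of calling it unconditionally.
    if hasattr(loop, "shutdown_default_executor"):
        loop.run_until_complete(loop.shutdown_default_executor())
    loop.close()
```
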
An issue was found, I'm currently working on fixing it, please do not merge until it's fixed.
PR updated with:
- run() implementation.
- __aenter__ and __aexit__ for async context managers.
- asyncio.Task gets its own top-level instrumented coroutine. This is critical, as a Task would not be able to yield on behalf of a coroutine in another task. This would deadlock: the event loop would be polling task B while a nested coroutine inside task B would be trying to yield through task A. Task A would be ignored by the event loop until task B yields, leading to a deadlock.

One thing I realized and might want to change is that if an event loop is already running (e.g. in a jupyterlab notebook), we will dispatch the coroutine on a loop set up in a separate thread. It's all good except we have a single such thread. This means that code making parallel invocations of devlib in threads with a pre-setup event loop will end up being serialized. It shouldn't be very hard to change, I'll see if I can do that tomorrow.
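The single-background-thread dispatch described above roughly corresponds to this setup (a sketch with a hypothetical `LoopThread` name, not devlib's implementation):

```python
import asyncio
import threading

class LoopThread:
    # One background thread running a private event loop; blocking
    # callers submit coroutines to it. Since there is only one such
    # loop, concurrent callers are serialized through it.
    def __init__(self):
        self.loop = asyncio.new_event_loop()
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        asyncio.set_event_loop(self.loop)
        self.loop.run_forever()

    def run(self, coro):
        # Blocks the calling thread until the coroutine completes.
        return asyncio.run_coroutine_threadsafe(coro, self.loop).result()

    def stop(self):
        self.loop.call_soon_threadsafe(self.loop.stop)
        self.thread.join()

async def add(a, b):
    await asyncio.sleep(0)
    return a + b

lt = LoopThread()
result = lt.run(add(1, 2))
lt.stop()
print(result)  # 3
```

Un-serializing the callers would mean one such loop thread per calling thread rather than a single shared one.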
PR updated with:
- _CoroRunner subclass for each case.
- _CoroRunner. All we now need is to get a runner that will be usable to execute multiple coroutines on the same event loop, even if that event loop is sitting in a separate thread.
- Added a contextvars test and fixed the code. contextvars are now propagated down through devlib.utils.asyn.run() and any update done inside the coroutine will be propagated back in the caller of run(), so it is completely transparent.

Considering it is a substantial change, I'd be more comfortable dogfooding it in LISA for a little while before we consider merging it.
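The two-way contextvars propagation can be sketched like this. This is a minimal illustration under simplifying assumptions (single-threaded, fresh loop per call), not devlib's run(), which is more general:

```python
import asyncio
import contextvars

def run(coro):
    # Sketch of a contextvars-transparent blocking run(): execute the
    # coroutine with the caller's context values visible, then copy
    # any updates made inside the coroutine back to the caller.
    async def wrapper():
        result = await coro
        # Snapshot the context as seen at the end of the coroutine;
        # it contains any var.set() done inside.
        return result, contextvars.copy_context()

    loop = asyncio.new_event_loop()
    try:
        # The Task created by run_until_complete() copies the current
        # (caller's) context, so values propagate downwards for free.
        result, ctx = loop.run_until_complete(wrapper())
    finally:
        loop.close()
    # Propagate updates back upwards into the caller's context.
    for var, value in ctx.items():
        var.set(value)
    return result

request_id = contextvars.ContextVar("request_id", default=None)

async def handler():
    assert request_id.get() == "outer"  # caller's value is visible
    request_id.set("inner")
    return "done"

request_id.set("outer")
result = run(handler())
print(result, request_id.get())  # done inner
```
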
1.5 months later there has been no reported issue, so I think it's good to go.
Considering https://github.com/ARM-software/devlib/issues/682, this PR updates devlib to use greenlets in cases where nest_asyncio cannot be used (e.g. when using the uvloop event loop).

This alternative path works by using greenlet to provide a way for nested coroutines, separated from the top-level coroutine by blocking calls, to yield their action.

TODO:
- __await__, so that they are propagated to the correct coroutine rather than the top-level one.

Fixes #682
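The greenlet trick can be sketched as follows. This is a deliberately minimal illustration of the mechanism, not devlib's implementation; `await_` and `run_greenlet` are made-up names, and the greenlet package (the same third-party dependency this PR uses) must be installed:

```python
import asyncio
import greenlet

def await_(coro):
    # Hypothetical helper: callable from blocking code started by
    # run_greenlet() below. Hands the coroutine over to the driver
    # greenlet, which awaits it on the real event loop.
    return greenlet.getcurrent().parent.switch(coro)

async def run_greenlet(func, *args):
    # Drive the blocking function in a child greenlet; every coroutine
    # it hands back through await_() is awaited here on its behalf,
    # then its result is switched back in.
    child = greenlet.greenlet(func)
    value = child.switch(*args)
    while not child.dead:
        value = child.switch(await value)
    return value

async def fetch():
    await asyncio.sleep(0)
    return 21

def blocking_logic():
    # Plain blocking code, yet it can run coroutines through await_():
    # the await happens in run_greenlet(), above the blocking frames.
    return await_(fetch()) * 2

result = asyncio.run(run_greenlet(blocking_logic))
print(result)  # 42
```

The greenlet switch lets the nested coroutine's awaits tunnel through the intervening blocking frames, which is exactly what a plain generator-based approach cannot do.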