Closed — dabeaz closed this issue 4 years ago
I don't see how to make attaching the generator to the task work -- it's legal to start a generator in one task and then pass it to another, right? What if the parent task exits while the generator is still in use?
As you may remember I'm really really not a fan of the async generator gc design, exactly because it creates these kinds of issues. I mean, it's fine for asyncio, because asyncio just gives up on worrying about implicit concurrency, context preservation, exception propagation, etc. But in curio, relying on gc to clean up generators basically breaks every invariant in curio's API. (I mean, just to start, notice that it's a way to spawn a task from sync code...) It also makes it very easy to write code that will work OK on CPython, but then on PyPy they'll pass their testsuite but blow up under load, which is unhealthy for the ecosystem -- even if I know I'm careful about cleaning up resources, I have to assume that random libraries on PyPI are all broken...
My preference would be to document that this is a flaw in the language and that you need to use `async with` blocks on your async generators, and only use the hooks to detect cases where people forgot to do this and issue noisy warnings suggesting that they add `async with` blocks. And hope for PEP 533 or similar to be implemented for 3.7. (Which is more likely if we stick fast to our principles instead of implementing half-working workarounds.)
To be sure, the GC of async generators is less than ideal. How much leverage do you want to have in pushing for PEP 533? I'd be perfectly fine with having Curio take a hard line and do nothing more than issue warning messages. Maybe even linking to the PEP in the warning.
That said, if it's not going to be fixed, there are things that could probably be done in Curio. For example, associating async generators with the task in which they were started and forcing them closed when the task exits. Yes, it would prevent things such as passing an already-started generator to another task. However, in thinking about that, I don't think I've ever passed an already-started generator to another function, let alone another task/thread, in my whole experience of using generators. So, it's not going to bother me to disallow that.
Thought about it. I'm going to go one better than adding a noisy warning about this. I added an outright failure with a `RuntimeError` if anyone uses an async generator outside of a guarding context manager. So, you have to do this:
```python
from curio import async_generator

async def coro():
    async with async_generator(some_generator()) as agen:
        async for item in agen:
            ...
```
I'm not totally sure on the name. Nathaniel used `aclose()` for the context manager in earlier writing. I have used the name `async_generator()` here solely because it more explicitly states what it's being used for. I'm not married to that though. Open to suggestions on naming.
Admittedly, I don't consider it ideal to have to use the enclosing context manager. However, in the big picture, I'd rather have correctness than a bunch of tricky acrobatics going on in the kernel to make async generator finalization work "correctly."
Oh, brilliant. That is a way better idea than I had! +100
It's unfortunate that the generator object is created first before the context manager, so we can't raise RuntimeError directly from the init hook. I guess the rule is that after creating an async gen, you have to pass it to the context manager before your next yield, or else we throw an exception into the yield?
I've used "aclosing" with the idea that it was an async analogue to the stdlib "closing": https://docs.python.org/3/library/contextlib.html#contextlib.closing
"aclosing" is a natural context manager to have around in any case, and if you have it around and are just looking for a way to make sure async generators get their aclose method called then it's the obvious thing to reach for. But yeah, here we're talking about a different set of semantics.
I guess despite that, one option would be to have a curio.aclosing that acts like you'd expect with arbitrary objects, but also has the special feature that it detects async generators and marks them as OK.
Otherwise, I guess I'd probably reach for something shorter than async_generator in this case, because this is a bit of repetitive boilerplate that's already going to be prone to super-long lines. Maybe agen() or asyncgen()?
It might also make sense to provide a version for synchronous generators. They don't invoke a special sys callback when instantiated, so we can't enforce the wrapper's use the same way. But it's a good habit to close them in the same way, and being able to write both cases following a consistent pattern helps support that uniform style.
One more thing to consider: it might make sense to have the wrapper silently accept objects that don't have aclose methods. Rationale: someone might want to switch between implementing any given async iterator as a generator or not, and at least this way it'd be possible to write code that works either way. It's still far from optimal, because it's only enforced for async generators, so in practice going from aiter->agen is probably a compatibility break because people have been careful about using this. But better than nothing?
It doesn't even make it to the yield or start the generator at all right now. If it's not wrapped, it crashes straight-away.
If we're going to make it work with arbitrary objects, maybe it could be generalized into a `finalize()` context manager. For example:

```python
async with finalize(some_generator) as c:
    for n in c:
        ...
```
What "finalize" means depends on what you give it. If it's an async generator, it will call aclose()
on it. Maybe there are other things that could be finalized as well. I don't know.
Oh, I see -- I thought the gc hooks were called when you called the generator function. If they're not called until the generator is iterated for the first time, then that's much more convenient.
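The timing can be checked directly with the PEP 525 hooks. A standalone sketch that drives the generator by hand, so no event loop is needed:

```python
import sys

seen = []

def firstiter(agen):
    # PEP 525 hook: record which async generators have started iterating
    seen.append(agen.ag_code.co_name)

old_hooks = sys.get_asyncgen_hooks()
sys.set_asyncgen_hooks(firstiter=firstiter, finalizer=lambda agen: None)

async def ag():
    yield 1

g = ag()
assert seen == []            # merely creating the object fires nothing

# drive one step by hand; there are no real awaits inside,
# so the asend awaitable completes synchronously
try:
    g.asend(None).send(None)
except StopIteration as exc:
    assert exc.value == 1
assert seen == ['ag']        # the hook fired once iteration began

# exhaust the generator so no finalization is left pending
try:
    g.asend(None).send(None)
except StopAsyncIteration:
    pass

sys.set_asyncgen_hooks(*old_hooks)
```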
I changed the name to finalize() instead. Makes it more open-ended (in case we need to support something else in there). Also, made it a no-op for non-async generators.
Another thought: it might make sense to provide a decorator that can be applied to the generator definition, that marks it as "no really, I promise there is no async cleanup code here and you don't need to use `finalize` when calling it"?
Motivating examples would be:

- Cases like this one: https://forum.dabeaz.com/t/whats-the-best-way-to-make-interfaces-with-async-generators/143/1
- Cases like an `acontextmanager` helper (like `contextlib.contextmanager` but async)
The decorator idea is a good one for now. That could definitely be made to work in some way. What would it be called?
```python
@async_generator
async def countdown(n):
    while n > 0:
        yield n
        n -= 1
```
Other options: `@safe_generator`, `@nofinalize`, ???? Open to ideas.
It's a bit more diabolical, but it seems that you can also detect "unsafe" generators by inspecting their opcodes with the `dis` module. So, that is another option.
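To illustrate the idea, here is a rough version-portable sketch. Curio's actual check inspected bytecode with `dis`; this sketch uses `ast` on the source text instead, which is easier to keep stable across Python releases:

```python
import ast
import textwrap

def might_need_finalization(source: str) -> bool:
    """Heuristic: does the generator's body await anything inside a
    finally block?  If so, closing it requires an async context, and
    plain garbage collection can't finalize it safely."""
    tree = ast.parse(textwrap.dedent(source))
    for node in ast.walk(tree):
        if isinstance(node, ast.Try):
            for stmt in node.finalbody:
                if any(isinstance(sub, ast.Await) for sub in ast.walk(stmt)):
                    return True
    return False

SAFE = """
async def countdown(n):
    while n > 0:
        yield n
        n -= 1
"""

UNSAFE = """
async def countdown(n):
    try:
        while n > 0:
            yield n
            n -= 1
    finally:
        await sleep(1)
"""

assert might_need_finalization(SAFE) is False
assert might_need_finalization(UNSAFE) is True
```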
Actually, I like the approach of inspecting the code object of async generators to see if they might be unsafe. I've committed this. Basically, you don't have to do anything at all. If you write something simple like this:
```python
async def countdown(n):
    while n > 0:
        yield n
        n -= 1
```
It will just work fine:

```python
async for n in countdown(5):
    ...
```
On the other hand, if you put anything in there that requires async finalization, you'll get an error. For example:

```python
async def countdown(n):
    try:
        while n > 0:
            yield n
            n -= 1
    finally:
        await sleep(1)
```
To run this, you'll need to use `finalize()`:

```python
async with finalize(countdown(5)) as c:
    async for n in c:
        ...
```
I suspect that this will make things simpler if there's no finalization code. I've also added a @safe_generator decorator that can override this.
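One way such an override can work is a simple attribute marker that the unsafe-generator check consults before doing any code analysis. This is a sketch; the attribute name here is made up, not curio's actual internals:

```python
def safe_generator(func):
    """Mark a generator function as 'no async cleanup needed' so a
    checker can skip its code analysis (hypothetical attribute name)."""
    func._safe_generator = True
    return func

def is_marked_safe(func):
    return getattr(func, '_safe_generator', False)

@safe_generator
async def countdown(n):
    try:
        while n > 0:
            yield n
            n -= 1
    finally:
        await sleep(1)   # would normally trigger the finalization error

async def plain(n):
    yield n

assert is_marked_safe(countdown) is True
assert is_marked_safe(plain) is False
```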
Huh! Interesting. I guess the static analysis will never work on the quacks-like-an-async-generator objects from my `async_generator` package, but so it goes. (Currently they don't call the gc hooks either, but I've been debating changing that, and with the addition of `finalize` I'm leaning towards making the switch.) It also likely won't be totally accurate if 3.7 adds `yield from` (at the async generator level, not `await`), because for a generator that yields from another generator you can't see all the possible `yield` points.
Actually, that's the nastiest case -- `async_generator` actually used to provide a way to do `yield from` in 3.6 native generators, though it's currently disabled because I couldn't figure out how to convince ctypes to do refcounting properly. It looks like:

```python
async def foo():
    await yield_from_(bar())
```
Or...... maybe we're saved because then `bar()` itself will get put through the safety-checking machinery?
I guess we're sort of treading a fine line with all of this. Ideally, I'd like to make Curio work with as little fuss as possible--especially for common cases. For some of the nasty corner cases, maybe making a best effort to steer people in the right direction is about the best we can hope for right now.
So far as I know, async generators aren't linked to asyncio at all. However, there is an issue with garbage collection that involves a synchronous callback function. That is what this issue is mostly about.
So the issue here is that if asynchronous generators never get run to completion and thus never unwind their stack in an "asynchronous context", they'll get garbage collected and `__del__` will be called, which is a synchronous context, and thus they'll never actually finalise things properly?
Is there a reason one can't just do this:

```python
def __del__(self):
    curio.run(self.clean_up())
```
with some sort of decorator that people can put on their async generators? It's probably better to make 'library' code uglier than 'client' code, right? Write `add_finaliser` once when you create a generator, rather than once every time you ever use it.
```python
@curio.add_finaliser
async def foo():
    async with connect() as conn:
        for url in urls:
            yield (await conn.get_stuff(url))
```
@milesrout: actually that decorator effectively "comes stock" on native generators -- there are some thread-local finaliser callbacks in `sys` that they call automatically. So it's certainly possible to do something like that (though it would need to be more like a `curio.spawn` than a `curio.run`). But the problem is that then suddenly – with no warning to the user – you have arbitrarily complex chunks of code getting pulled out of their original task and running at another time, concurrently with code that might not be expecting it, without the original context like task-local storage and timeouts (!!), and with any exceptions disappearing into the ether. Arguably the major principle underlying curio's whole design is to avoid that kind of implicit concurrency and loss-of-context, so it would be unfortunate to reintroduce it here.
Making the client code "uglier" is unfortunate, but I'd take a bit of ugly syntax over a lot of ugly semantics :-).
If you want to read more about this argument, it's a major theme in this blog post.
@njsmith Thanks, that's a good explanation.
After some digging into how golang handles goroutine cleanup: synchronous generators on PyPy are likewise not cleaned up until `gc.collect()` is called. Maybe we can only rely on GC, or on the caller explicitly closing the generator--there seem to be no other choices.
Can we not just wait for PEP 533 to be accepted?
PEP 533 doesn't have any clear path to acceptance right now.
It needs to happen. Async generators are basically useless right now.
Would it be a good idea to automatically wrap every for loop in a with block when the iterable is also a context manager?
That is, make the loop:

```python
for x in gen:
    ...
```

equivalent to:

```python
with gen:
    for x in gen:
        ...
```
No need to add a new `__(a)iterclose__` slot.
An example usage:

```python
for line in open('/some/file.txt'):
    ...
# the file is automatically closed after the for block
```
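The proposed semantics can be approximated today with a small wrapper, no new slot required. A sketch for the synchronous case:

```python
import os
import tempfile

def iter_and_close(iterable):
    """Yield from `iterable`; if it is also a context manager, hold it
    open for the duration of the loop and close it afterwards.
    (Early exits still rely on this wrapper generator itself being
    closed, so this only approximates the proposed semantics.)"""
    if hasattr(type(iterable), '__enter__') and hasattr(type(iterable), '__exit__'):
        with iterable:
            yield from iterable
    else:
        yield from iterable

# demo: the file is closed as soon as the loop completes
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'w') as out:
    out.write('hello\n')

f = open(path)
lines = [line for line in iter_and_close(f)]
assert lines == ['hello\n']
assert f.closed
os.remove(path)
```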
Simply don't run any async code on GeneratorExit. Async functions used inside generators should also be careful about this.
```python
async def gen():
    try:
        for i in range(10):
            yield (await some_async_func())
    except GeneratorExit:
        print('Generator Exited')
        raise
    except:
        await sleep(10)
```
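With that pattern, the generator can be closed from a synchronous context, because the `GeneratorExit` path re-raises without awaiting anything. A hand-driven sketch (no event loop needed):

```python
log = []

async def gen():
    try:
        yield 'blah'
    except GeneratorExit:
        log.append('exited')   # synchronous cleanup only
        raise

g = gen()

# advance to the first yield; no real awaits, so this is synchronous
try:
    g.asend(None).send(None)
except StopIteration as exc:
    assert exc.value == 'blah'

# aclose() completes synchronously because nothing is awaited on exit
try:
    g.aclose().send(None)
except StopIteration:
    pass

assert log == ['exited']
```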
An idea and rough implementation that uses `contextvars` (PEP 567) to solve the generator finalization problem: https://gist.github.com/guyskk/2b307db29684f1eef95848e68cc3aa6a

- `defer`: a decorator for generators that need finalization; it registers the generator in the current context.
- `deferable`: a decorator for functions that use generators; it finalizes the registered generators in the current context.

`defer` is borrowed from golang, where it means finalizing something on function exit. Since normal Python functions don't support deferring things, we add the `deferable` decorator.

This solution is friendly for code that uses generators because we needn't care about which generators need finalization. Curio could make every task's coroutine function deferable automatically, so most of the time we wouldn't need to do anything to finalize generators. It would be even better if all Python functions could be made deferable.
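A self-contained sketch of the idea. The names `defer` and `deferable` come from the comment above; the implementation details here are guesses, not the linked gist, and the demo is driven by hand so no kernel is needed:

```python
import contextvars
import functools

_deferred = contextvars.ContextVar('deferred')

def defer(asyncgen_func):
    """Register each generator the function creates in the current
    context, so an enclosing deferable() can finalize it."""
    @functools.wraps(asyncgen_func)
    def wrapper(*args, **kwargs):
        agen = asyncgen_func(*args, **kwargs)
        _deferred.get().append(agen)
        return agen
    return wrapper

def deferable(func):
    """Run func with a fresh registry, then aclose() everything that
    defer() registered -- roughly Go's defer-on-function-exit."""
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        token = _deferred.set([])
        try:
            return await func(*args, **kwargs)
        finally:
            for agen in _deferred.get():
                await agen.aclose()
            _deferred.reset(token)
    return wrapper

closed = []

@defer
async def items():
    try:
        yield 1
        yield 2
    finally:
        closed.append(True)

@deferable
async def main():
    async for x in items():
        return x       # abandon the generator mid-iteration

# nothing here truly awaits, so the coroutine completes in one step
try:
    main().send(None)
except StopIteration as exc:
    assert exc.value == 1
assert closed == [True]
```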
May I ask why is PEP 533 not being considered? Did any discussion take place and where can I find it? I googled but found nothing....
Edit: I've found it, on the python mailing list.
One of the things that this breaks, that I don't see mentioned here, is the ability to use the `@asynccontextmanager` decorator in `contextlib` to turn a function that yields something into a context manager.
The example I have is that I have a curio server, and I have some server setup performed in an initialize function that includes async methods (such as opening a FileStream or Socket). Said initialization does require cleanup (closing the FileStream), and without unnecessarily making it into a class, the best way to do so is using a context manager before starting most of the server tasks:
```python
@contextlib.asynccontextmanager
async def setup(cfg):
    try:
        # setup application
        yield application
    finally:
        await example_async_object.close()

async def main(cfg):
    async with setup(cfg) as app:
        ...  # do stuff
```
As written above it will blow up due to the aforementioned checks against unsafe generators... but I feel like this example is one of the more common cases (hence the existence of the contextlib decorators)
Context managers seem to work fine in the version of Curio that's in Github. Is there something else that's broken?
It's rather annoying that curio doesn't allow `async_generator.aclosing()` but instead requires its own `curio.meta.finalize()`, which does pretty much the same thing.
It's rather annoying to get whiny comments with no code examples and no suggestions for patches.
Code for `aclosing()`: https://github.com/python-trio/async_generator/blob/master/async_generator/_util.py#L6 [edited]
Code sample that explodes when run:

```python
import curio
from async_generator import aclosing

async def iterable():
    try:
        yield 'blah'
    finally:
        await curio.sleep(0)

async def main():
    async with aclosing(iterable()) as iter:
        async for val in iter:
            print(val)

curio.run(main)
```
Make sure you've also installed async_generator before running.
When `aclosing()` is replaced with `curio.meta.finalize()`, it works fine.
Thanks. Will investigate further.
After some investigation, I don't think there's any obvious way for Curio to detect the use of `async_generator.aclosing()` and account for it. The implementation simply returns the associated iterator back unmodified, without any extra annotation or record being made that the iterator is being actively managed (unlike `curio.finalize()`). If the goal is to write code that uses `aclosing()` as an API, your best bet is probably to use:

```python
if using_curio:
    from curio.meta import finalize as aclosing
else:
    from async_generator import aclosing
```
Given that `async_generator` isn't part of the standard library, it's not something I'm likely to pursue further. It would be easier to support in Curio if `aclosing` had a feature such as Curio's `finalize.is_finalized()` method.
Can you provide a brief status summary for async generators in general and curio's position in particular?
Async generators (in general) work fine, but if there are async operations appearing in finally blocks, steps need to be taken to ensure that they actually execute. Curio can detect such code and issue an error message. If you want to get rid of the error, you'll have to use `curio.meta.finalize()`. My official position is that I'm okay with Curio reporting a problem and suggesting a solution.
It occurs to me that much of the complexity is simply Curio detecting and reporting an error. If I were to remove that, then it would be easy to support `async_generator.aclosing()`. I'd have to add some kind of cautionary note to the FAQ or developer's guide about async generators though--making sure that they get proper resource management.
What surprised me was the fact that curio was going to great lengths to prevent users from shooting themselves in the foot with async generators, even to the point of using bytecode analysis. The reason this was so surprising is that the fundamental difference between curio and trio was supposed to be that trio does everything in its power to prevent the user from doing the wrong thing, yet even trio does not try to prevent the user from using async generators the wrong way. What this ultimately boiled down to was that curio only allowed its own safeguard against this and summarily rejected all others.
What you suggested (removing the checks and documenting this pitfall in the guide) would be way less error prone IMHO.
There's always been an experimental side to Curio. At some point early in the addition of async generators, it was known that they might not be finalized correctly in certain cases. There was a hook that got added to the standard library to track async generators, but it was never something that could be used effectively by Curio because the underlying API was synchronous. So at best, the only thing you could do was detect that code was "unsafe" and issue some kind of warning/error about it.
To be honest, I'd be more than happy to simplify this facet of Curio. I'd still keep the `finalize()` feature of Curio around, but a simplification would allow the `aclosing()` feature to work too.
Yeah, I would love for Trio to do something more helpful here, the current situation with async generators in Trio is gross (e.g. https://github.com/python-trio/trio/issues/638). The reason Trio doesn't do anything more so far is just because this is a super complicated topic and I can't tell what we should be doing. All the solutions so far have pretty messy trade-offs (e.g. the issue @agronholm is hitting).
Looking at the code earlier this morning, I realized that I'd have to do even more code inspection or some other kind of gross hack to make `aclosing()` work. I really don't want to do that. I think I'm just going to remove all of that and go for simplicity. Users will just have to know the pitfalls and workarounds.
> Context managers seem to work fine in the version of Curio that's in Github. Is there something else that's broken?
They do indeed, I was using the latest official release... I do like removing the error checking as long as it's called out decently in the documentation. Curio itself even discusses async generators (briefly) but doesn't bring up the complex issues with them.
@dabeaz, I actually appreciated the warning I got from curio when I used an async generator without calling `curio.meta.finalize()`. I would've been completely lost as to why my code wasn't working without that warning.
@dabeaz: What is the performance cost of this? Maybe people who take appropriate measures would not want to pay for it.
> @dabeaz: What is the performance cost of this? Maybe people who take appropriate measures would not want to pay for it.
The result of the code analysis is cached so there's not much of a penalty.
Caching is fine. But relying on it is not a robust design.
> Caching is fine. But relying on it is not a robust design.
Why? Can you think of a scenario where this could yield the wrong results?
I'm not sure "robustness" is quite the issue on this (the code works fine--for now). The fact that it's having to examine byte codes feels generally fragile though. Byte code has changed between Python versions and there's no guarantee that it would continue to be the same in future releases. All things equal, I'd prefer not to do that. There is also the issue of interoperability with other code (such as the aforementioned aclosing() function).
Debugging is definitely a valid concern as well. I wonder how common it is in practice to write async generators that have finalization issues?
> I wonder how common it is in practice to write async generators that have finalization issues?
People seem to hit it fairly often in Trio, but probably the most common case is a `yield` inside a cancel scope or nursery, and those are the two cases where even `finalize`/`aclosing` don't solve the issue.
Turns out that it is easy to detect when async generators aren't being finalized by installing a finalization hook that looks at them right before they're garbage collected. So, I've added that back into Curio. Unlike the previous implementation, it does not do any code inspection or anything fancy. Just a simple check--if the generator hasn't been finalized properly, a RuntimeError exception gets raised with basically the same warning message as before. Functions such as `async_generator.aclosing()` should work with this.
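The mechanism is roughly this. A standalone sketch of the approach (not curio's actual code; curio raises RuntimeError where this just records the event):

```python
import gc
import sys

leaked = []

def finalizer(agen):
    # PEP 525 hook, called from GC when a started-but-unfinished async
    # generator is collected.  Curio raises RuntimeError here with a
    # hint to use finalize()/aclosing(); this sketch records the name.
    leaked.append(agen.ag_code.co_name)

old_hooks = sys.get_asyncgen_hooks()
sys.set_asyncgen_hooks(firstiter=lambda agen: None, finalizer=finalizer)

async def ticker():
    try:
        yield 1
    finally:
        pass   # imagine async cleanup here, e.g. `await sleep(0)`

g = ticker()
try:
    g.asend(None).send(None)   # start the generator, then abandon it
except StopIteration:
    pass
del g
gc.collect()

assert leaked == ['ticker']    # the hook saw the unfinalized generator
sys.set_asyncgen_hooks(*old_hooks)
```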
Out of curiosity ;) can you point us to the relevant lines of code for the async gen handling? For example, how do you catch an unsafe generator in order to raise the above exception? I browsed through meta.py but was unable to figure out how it all works.
Looks like the code is here: https://github.com/dabeaz/curio/commit/3d610aea866178800b1e5dbf5cfef8210418fb58
Some subtleties to be aware of:
Since the finalizer runs in GC context, any exceptions it raises are caught and printed on the console. So that `raise` is effectively a `warn`.
I'm not sure if this properly handles the case where an async generator is created in curio.run, but not collected until after curio.run exits. I think it depends on whether the agen captures the hook when it's created, or when it's collected, and I can't easily check that right now.
So, I've been looking at Curio's interaction with async-generators. This is a new feature of Python 3.6. For example::
Okay, it's a bit whimsical, but it all works out fine. At the conclusion of the `async for` loop, you see the code in the `finally:` block run. You get output like this::

That's all well and good until something bad happens. For example, maybe the for-loop gets prematurely terminated::
Now, when you run the code, you get this output:
Without getting too bogged down in the details of PEP-525, the gist of the issue is that when an async generator gets garbage collected, something (namely the event loop) needs to take responsibility for finalizing it--basically running it through the steps needed to make it terminate and handling all of the async-related calls that might take place as a result (e.g., that embedded `sleep()` call).

There is a hook to plug into all of this and I can make it work with Curio. That's not the issue. My question concerns the exact run-time context in which this finalization might occur.
For example, one approach is to hand over all generator finalization to a separate task (maybe the in-kernel loopback task) that simply deals with it as needed. Disadvantage is that code that finalizes a generator would execute in a different task than the one that was running when the generator was initiated. So, that would be potentially kind of weird. Especially if you were making use of task-local storage or some other task-specific context.
Another option is to run the finalization under the umbrella of the creating task. It's a bit tricky, but Curio could certainly track async generators and make sure that they are finalized in the same context in which they were originally initiated. Tasks could also explicitly cancel all pending async generators if they're about to be cancelled and cleaned up.
Anyways, I'm going to implement something to deal with this as a starting point. I doubt it will be the ending point, but I'd be interested in getting thoughts about all of this.
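The first option (handing finalization off to a dedicated task) can be sketched end to end. In this sketch the "task" is a plain drain function driven by hand and the queue is a list; these are illustrative assumptions, not curio's kernel internals:

```python
import gc
import sys

pending = []   # stand-in for a queue feeding an in-kernel finalization task

def finalizer(agen):
    # GC context: we cannot await here, so just hand the generator off.
    # Appending resurrects the object, which is the intended PEP 525
    # pattern (asyncio's finalizer hook does the same via create_task).
    pending.append(agen)

old_hooks = sys.get_asyncgen_hooks()
sys.set_asyncgen_hooks(firstiter=lambda agen: None, finalizer=finalizer)

cleaned = []

async def stream():
    try:
        yield 'data'
    finally:
        cleaned.append(True)   # imagine `await sleep(1)` here

g = stream()
try:
    g.asend(None).send(None)   # start the generator, then abandon it
except StopIteration:
    pass
del g
gc.collect()

# "loopback task": run each pending aclose() to completion; with real
# awaits in the finally block this loop would live inside the kernel
while pending:
    agen = pending.pop()
    try:
        agen.aclose().send(None)
    except StopIteration:
        pass

assert cleaned == [True]
sys.set_asyncgen_hooks(*old_hooks)
```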