Closed jakkdl closed 2 months ago
I think this is a false positive with the current implementation

```python
async def foo():
    with open(""):
        yield
```

i.e. the contextmanager is sync (and there are no awaits after the contextmanager?).
No, that's not a false alarm:
```python
import contextlib, itertools, trio

resource_id = itertools.count()

@contextlib.contextmanager
def hold_some_resource():
    n = next(resource_id)
    print(f"acquire {n=}")
    try:
        yield
    finally:
        print(f"release {n=}")

async def loop_with_resource():
    with hold_some_resource():
        yield
        yield

@trio.run
async def main():
    for n in range(3):
        async for _ in loop_with_resource():
            if n == 0:
                break
```
which prints:

```
acquire n=0
acquire n=1
release n=1
acquire n=2
release n=2
release n=0
```
and this can still happen with a single un-looped yield, depending on how it's called (i.e. if you create the generator but never iterate it, or iterate into the context but not out of it).
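A minimal sketch of the "iterate into the context but not out" case, using asyncio rather than trio so it runs stand-alone. The names here are illustrative, and the explicit `__anext__`/`aclose` calls just make the timing visible; in real code the late close would be the garbage collector's doing, at some arbitrary later point:

```python
import asyncio
import contextlib

events = []

@contextlib.contextmanager
def hold_resource():
    events.append("acquire")
    try:
        yield
    finally:
        events.append("release")

async def agen():
    with hold_resource():
        yield

async def main():
    g = agen()
    await g.__anext__()           # iterate into the context...
    events.append("abandoned")    # ...but never out of it
    assert events == ["acquire", "abandoned"]   # still not released
    # ...arbitrarily much of the program runs with the resource held...
    await g.aclose()              # only now does the finally block run
    assert events == ["acquire", "abandoned", "release"]

asyncio.run(main())
```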
> No, that's not a false alarm:

I'm confused, your example seems to show that it is a false alarm, and that's also my understanding.
The issue is that the release of 0 is delayed until the end, no? If we remove the `break` from inside the loop we get

```
acquire n=0
release n=0
acquire n=1
release n=1
acquire n=2
release n=2
```
which seems like sensible behaviour.
I guess this isn't technically an async issue, since the sync contextmanager can't contain any `await`s that will be arbitrarily delayed, but maybe this is sufficiently bad and unexpected in general that it's worthy of being warned about.

Oh, I thought this could be reproduced by changing the generator to a sync one, but it seems to only happen with async generators in particular. So despite it looking entirely like a sync issue it's an async problem.
> The issue is that the release of 0 is delayed until the end
Cleanup being delayed until the destructor is called is potentially surprising, but I'm not sure I'd call it a bug? To me, the really big problem is when cleanup doesn't happen at all because the cleanup code cannot await in the destructor, and I was expecting that to be the case we focus on.
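A hedged sketch of that worst case: if the event loop is already gone when the generator is finalized, `__aexit__` needs to await but has nowhere to do so, and the cleanup is skipped entirely. (`AsyncResource`, `agen`, and the `events` list are made-up names for illustration; the driving with a bare loop object is just to control when the loop disappears.)

```python
import asyncio
import warnings

events = []

class AsyncResource:
    async def __aenter__(self):
        events.append("acquire")
        return self
    async def __aexit__(self, *exc):
        await asyncio.sleep(0)      # cleanup genuinely needs to await
        events.append("release")

async def agen():
    async with AsyncResource():
        yield

loop = asyncio.new_event_loop()
g = agen()
loop.run_until_complete(g.__anext__())  # enter the async context
loop.close()                            # the loop goes away first
warnings.simplefilter("ignore")         # silence "never awaited" noise
del g   # finalizer has no loop to await __aexit__ on: never released
assert events == ["acquire"]
```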
> I thought this could be reproduced by changing the generator to a sync one, but it seems to only happen with async generators in particular.
At a guess, I suspect this will be due to cleanup being done on refcount decrement for one but only in the garbage collector on the other — perhaps putting a `gc.collect()` call in will clarify.
Yeah, if the generator is not iterated to completion then `__(a)exit__` is delayed until garbage collection, which can be arbitrarily late - including after the loop is shut down - and calling `gc.collect()` doesn't help in general because someone might still be holding a reference.
This is why https://peps.python.org/pep-0533/ exists, and why I'm speaking about it at the language summit next month!
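The refcount-vs-GC point, and why a lingering reference defeats `gc.collect()`, can be sketched with a plain sync generator (illustrative names; on CPython the final `gc.collect()` is redundant because dropping the last reference already finalizes the generator, but non-refcounting implementations such as PyPy need it):

```python
import contextlib
import gc

events = []

@contextlib.contextmanager
def resource():
    events.append("acquire")
    try:
        yield
    finally:
        events.append("release")

def gen():
    with resource():
        yield

g = gen()
next(g)                   # enter the context but never leave it
events.append("abandoned")
held = g                  # a second reference to the generator
del g
gc.collect()              # can't finalize: `held` still refers to it
assert "release" not in events
del held                  # drop the last reference
gc.collect()              # only now can the finally block run
assert events[-1] == "release"
```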
See #211 @alicederyn
The issue is somewhat confusing, as it's hard at times to parse out which discussion pertains to async119 vs async102 vs hypotheticals, but I think this is what was settled upon.
> I think this is a false positive with the current implementation
>
> i.e. the contextmanager is sync (and there are no awaits after the contextmanager?). Resolving that wouldn't be terribly complicated so I'll implement that if somebody confirms my understanding.
I didn't put much energy in formatting the entry in the readme, as that is on its way out. The links in the docs would probably be much cleaner with intersphinx or something, but leaving that for a different PR.