Open asvetlov opened 7 years ago
It would be hugely backward compatible. It would also be a huge bug magnet -- diagnostics if you forget to await the close() call are minimal.
> It would be hugely backward compatible. It would also be a huge bug magnet -- diagnostics if you forget to await the close() call are minimal.
@gvanrossum sorry, was it about option [1] or [2]?
I believe @gvanrossum wrote about returning a future from `.close()`.
But diagnostics for a mandatory `.wait_closed()` call after `.close()` are even worse, aren't they?
Right now I make every `.close()` function a coroutine from the start in new code -- but that was a hard-learned lesson; the transition from a plain function to a coroutine is painful.
Unfortunately, at the time of inventing the streams API we had no such insight.
That was about making `close()` a coroutine.
But what is the failure mode if you don't use wait_closed()? The stream does get closed, just on the next trip through the loop, right?
The same is true if the user doesn't call `await writer.close()` but just `writer.close()`.
In either case the stream will be closed safely.
I don't propose raising any exception from `await .close()`/`await .wait_closed()`, only waiting for the `Protocol.connection_lost()` signal.
Stream closing swallows the closing exception now; let's keep that behavior.
`loop.call_exception_handler()` is enough, and it is the best we can do without introducing backward-incompatible changes.
Otherwise we would have to suggest `await sleep(0)` after every `writer.close()` call, which smells bad.
> But what is the failure mode if you don't use wait_closed()? The stream does get closed, just on the next trip through the loop, right?
Yes. It's not deterministic: you don't know if it's just one trip through the loop or more. It's generally a pain to close streams in unit tests and in asyncio programs that want to clean up. It's really painful to add ugly workarounds like `asyncio.sleep(0.001)` at the end of any snippet of code that works with streams.
I'm with Andrew on fixing this; I've wanted to propose it myself for a long time.
OK, but unit tests are different from regular apps. They can `await s.wait_closed()`.
Also, if close() became a coroutine, calling it without waiting for it would do nothing -- the body of a coroutine doesn't get executed until you wait for it (same as for generators). There are hacks around it but I think it's an anti-pattern.
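The point about unawaited coroutine bodies can be seen directly; a small illustration (not part of any proposal):

```python
ran = False

async def close():
    """Stand-in for a hypothetical coroutine-style close()."""
    global ran
    ran = True  # would only run once the coroutine is awaited

coro = close()  # creating the coroutine object executes nothing
coro.close()    # discard it without awaiting -- the body never ran
```

So a call site that forgets `await` silently skips the close entirely. If the coroutine object is simply dropped instead of explicitly closed, Python at least emits a "coroutine ... was never awaited" `RuntimeWarning`, but the close still doesn't happen.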
No, I don't propose converting `.close()` to a coroutine.
What I want is to leave it as a regular function which returns a future object.
The future will become ready on the `Protocol.connection_lost()` call.
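A minimal sketch of that shape, with toy `ClosingProtocol` and `FutureClosingWriter` classes standing in for the real stream machinery (names and structure are illustrative assumptions, not the actual asyncio implementation):

```python
import asyncio

class ClosingProtocol(asyncio.Protocol):
    """Toy protocol: completes a future when connection_lost() fires."""
    def __init__(self, loop):
        self.closed = loop.create_future()

    def connection_lost(self, exc):
        # Swallow exc, matching current stream behavior; just signal done.
        if not self.closed.done():
            self.closed.set_result(None)

class FutureClosingWriter:
    """Hypothetical writer: close() stays a plain function but
    returns a future tied to connection_lost()."""
    def __init__(self, transport, protocol):
        self._transport = transport
        self._protocol = protocol

    def close(self):
        self._transport.close()
        return self._protocol.closed  # awaiting the result is optional

async def main():
    loop = asyncio.get_running_loop()
    server = await loop.create_server(asyncio.Protocol, "127.0.0.1", 0)
    host, port = server.sockets[0].getsockname()
    transport, protocol = await loop.create_connection(
        lambda: ClosingProtocol(loop), host, port)
    writer = FutureClosingWriter(transport, protocol)
    fut = writer.close()  # old call sites may keep ignoring the result
    await fut             # new call sites wait for the actual close
    server.close()
    await server.wait_closed()
    return fut.done()

result = asyncio.run(main())
```

Existing code that calls `close()` and discards the result keeps working unchanged; only code that cares about the actual close awaits the returned future.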
Maybe it's an anti-pattern.
I'd have loved to make `writer.close()` a coroutine from the very beginning.
Unfortunately that ship has sailed, but we could still return a future from `.close()`, satisfying all current contracts while allowing callers to wait for the actual close.
Please, no. There are many close() methods and it would be confusing which return a Future and which don't.
I agree with Guido, if we want to fix this let's add a distinct new method. The ship for re-designing "close()" has sailed.
For the record, I would like to mention that it's only one loop trip for a simple TCP transport, which is definitely not the case with SSL. I bumped into the issue this week (in aiokafka development), since there is no safe way to wait for an SSL connection to close when using streams.
The problem is: `writer.close()` is a regular method which calls `transport.close()`. But `transport.close()` actually closes the connection only on the next loop iteration, via a `self._loop.call_soon(self._call_connection_lost, None)` call. Right now safely closing a stream requires a manual workaround, but it looks ugly.
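The workaround snippet didn't survive formatting here; judging from the sleep-based workarounds mentioned elsewhere in this thread, it was presumably something along these lines (an illustrative sketch, not the original code):

```python
import asyncio

async def close_stream(writer):
    writer.close()
    # Yield to the event loop so connection_lost() gets a chance to run.
    # For plain TCP one trip is enough; with SSL the shutdown takes
    # several iterations, so this is not actually reliable -- which is
    # exactly the complaint above.
    await asyncio.sleep(0)

async def main():
    server = await asyncio.start_server(lambda r, w: None, "127.0.0.1", 0)
    host, port = server.sockets[0].getsockname()
    reader, writer = await asyncio.open_connection(host, port)
    await close_stream(writer)
    closing = writer.transport.is_closing()
    server.close()
    await server.wait_closed()
    return closing

result = asyncio.run(main())
```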
I see two possible solutions:

1. Add a `writer.wait_closed()` coroutine.
2. Change the `writer.close()` signature to return a future object. The future will be resolved in the protocol's `connection_lost()` callback.

Honestly I vote for option 2: it looks more natural and is fully backward compatible with existing code.
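For reference, option 1 is the call-site shape a `wait_closed()` coroutine would have; this is in fact the API that later landed in Python 3.7 as `StreamWriter.wait_closed()`:

```python
import asyncio

async def main():
    server = await asyncio.start_server(lambda r, w: None, "127.0.0.1", 0)
    host, port = server.sockets[0].getsockname()
    reader, writer = await asyncio.open_connection(host, port)
    writer.close()
    await writer.wait_closed()  # option 1: explicit coroutine (Python 3.7+)
    server.close()
    await server.wait_closed()
    return writer.transport.is_closing()

result = asyncio.run(main())
```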