Hi,
This seems suspiciously similar to `BaseSubprocessTransport` and `SubprocessProtocol`. It seems like it could be implemented in a very similar manner using `sys.stdin` and `sys.stdout`. Thoughts?
Original comment by elizab...@sporksmoo.net on 17 Nov 2014 at 9:19
You can use `loop.add_reader(fd, cb)` to add a callback function for the stdin file descriptor and change the tty settings to cbreak. Or you can make a coroutine that reads from the curses screen's `getch` method with the `nodelay` option. I'm sure there is a more robust/complicated way to do this, maybe involving `StreamReader`.
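A minimal sketch of the `add_reader` approach described above, assuming a Unix terminal; the queue plumbing and the `q`-to-quit condition are illustrative choices, not anything prescribed in this thread:

```python
import asyncio
import sys
import termios
import tty

def on_stdin_readable(queue: asyncio.Queue) -> None:
    # Called by the event loop whenever the stdin fd has data; cbreak mode
    # delivers characters immediately, so read(1) returns without waiting for Enter.
    queue.put_nowait(sys.stdin.read(1))

async def main() -> None:
    loop = asyncio.get_running_loop()
    queue: asyncio.Queue = asyncio.Queue()
    fd = sys.stdin.fileno()
    old_settings = termios.tcgetattr(fd)
    tty.setcbreak(fd)                        # character-at-a-time input, no echo
    loop.add_reader(fd, on_stdin_readable, queue)
    try:
        while True:
            ch = await queue.get()
            print(f"got {ch!r}")
            if ch == "q":                    # illustrative exit condition
                break
    finally:
        loop.remove_reader(fd)
        termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)

asyncio.run(main())
```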
I implemented something similar as part of this project. It provides the following coroutines:

- `get_standard_streams(*, use_stderr=False, loop=None)`: return two streams corresponding to `stdin` and `stdout` (or `stderr`)
- `ainput(prompt=None, *, loop=None)`: asynchronous equivalent to `input`

Everything is implemented in stream.py. It should work even if `sys.stdin` and `sys.stdout` don't have a file descriptor (inside IDLE for instance).
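For readers unfamiliar with the underlying recipe, here is a hedged, Unix-oriented sketch of how stdin/stdout can be exposed as asyncio streams with `connect_read_pipe`/`connect_write_pipe`. It is an illustration, not the project's actual stream.py, and `asyncio.streams.FlowControlMixin` is an internal asyncio class commonly used for this purpose:

```python
import asyncio
import sys

async def standard_streams():
    loop = asyncio.get_running_loop()
    # Reader side: feed sys.stdin into a StreamReader via a read-pipe transport.
    reader = asyncio.StreamReader()
    await loop.connect_read_pipe(
        lambda: asyncio.StreamReaderProtocol(reader), sys.stdin)
    # Writer side: wrap sys.stdout in a write-pipe transport.
    transport, protocol = await loop.connect_write_pipe(
        asyncio.streams.FlowControlMixin, sys.stdout)
    writer = asyncio.StreamWriter(transport, protocol, reader, loop)
    return reader, writer

async def main():
    reader, writer = await standard_streams()
    writer.write(b"type a line: ")
    await writer.drain()
    line = await reader.readline()           # bytes, including the trailing newline
    writer.write(b"you typed: " + line)
    await writer.drain()

asyncio.run(main())
```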
https://gist.github.com/nathan-hoad/8966377 - is it good? @vxgmichel
I vote for implementing stream wrappers for generic file objects. Something simple and stupid like:

- `(reader, writer) = asyncio.wrap_fileobject(fileobj)` (use `fileobj.read()`, `fileobj.write()`, `fileobj.flush()` internally)
- `(reader, writer) = asyncio.wrap_file_descriptor(fd)` (use `os.read(fd)`, `os.write(fd)` internally)
- `(reader, writer) = asyncio.wrap_streaming_socket(socket_obj)` (use `socket_obj.send()`, `socket_obj.recv()` internally)
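For the socket case at least, something close to the proposed wrapper already exists: `asyncio.open_connection()` accepts an already-connected socket via its `sock` argument. A hedged sketch (the `echo_once` helper and its echo logic are illustrative, not part of the proposal above):

```python
import asyncio
import socket

async def echo_once(sock: socket.socket) -> None:
    # asyncio.open_connection() accepts an already-connected socket through its
    # sock argument, returning a (StreamReader, StreamWriter) pair for it.
    sock.setblocking(False)                  # be explicit: the loop expects non-blocking sockets
    reader, writer = await asyncio.open_connection(sock=sock)
    data = await reader.readline()           # read one line from the peer
    writer.write(data)                       # echo it back
    await writer.drain()
    writer.close()
    await writer.wait_closed()
```

For a bare descriptor with no socket object, the `connect_read_pipe`/`connect_write_pipe` recipe sketched earlier is the closest existing equivalent.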
@socketpair There are a few differences between the example in your link and the way I wrote it:

- I use `sys.stdout` instead of `os.fdopen(0, 'wb')` (I'm not sure what is best though)
- I subclassed `StreamReader` and `StreamWriter` to avoid closing the stream in the `__del__` method
- It works even when `stdin` and `stdout` don't support the file interface (e.g. in IDLE)

About the wrappers you described, I'm not sure it's a good idea to create high-level streams from low-level objects (file objects, descriptors, sockets). For instance, in order to open a new socket connection, you can either use:
- `loop.create_connection`: return (transport, protocol) from a socket (or host and port)
- `asyncio.open_connection`: return streams from host and port

Same thing for subprocesses:
- `loop.connect_read_pipe`, `loop.connect_write_pipe`: return (transport, protocol) from a pipe
- `loop.subprocess_exec`: return (transport, protocol) from a command
- `asyncio.create_subprocess_exec`: return a high-level process object from a command

So I would expect file streams to work the same:
- `loop.connect_read_file`, `loop.connect_write_file`: return (transport, protocol) from a file descriptor
- `asyncio.open_file`: return streams from a file name
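For reference, the existing subprocess pair mentioned in this comparison already follows that two-level shape: `asyncio.create_subprocess_exec` returns a `Process` object whose `stdout` is a `StreamReader`, while `loop.subprocess_exec` stays at the transport/protocol level. A small Unix-oriented illustration of the high-level side (the `echo` command here is just an example):

```python
import asyncio

async def main() -> None:
    # High-level layer: a Process object whose stdout is already a StreamReader.
    proc = await asyncio.create_subprocess_exec(
        "echo", "hello from a subprocess",
        stdout=asyncio.subprocess.PIPE,
    )
    line = await proc.stdout.readline()
    await proc.wait()
    print(line.decode().rstrip())

asyncio.run(main())
```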
@vxgmichel Unfortunately, there are cases where a file descriptor already exists. For example, systemd's socket activation, or xinetd.
Also, if stdout/stdin is a pipe, input and output may block easily, so wrapping them in asyncio streams is convenient.
FWIW an issue with my gist is that it will break print() calls for sufficiently large output, because stdout is... surprise surprise, non-blocking. Even if you decide to only have a non-blocking stdin you'll hit issues, because stdin and stdout are actually the same object for TTYs, as per this issue: https://github.com/python/asyncio/issues/147.
Also, `os.fdopen(1, 'wb')` and `sys.stdout` are interchangeable and there's no reason to use one over the other.
Original issue reported on code.google.com by victor.s...@gmail.com on 2 Nov 2014 at 10:55