python-trio / trio

Trio – a friendly Python library for async concurrency and I/O
https://trio.readthedocs.io

High-level helpers for working with unix domain sockets #279

Open. njsmith opened this issue 7 years ago

njsmith commented 7 years ago

Now that the high level networking API is coming together, we should also add unix domain helpers. Should be pretty straightforward.

Some notes here: https://github.com/python-trio/trio/issues/73#issuecomment-285752193

I guess something like:

async def open_unix_stream(path):
    ...

async def open_unix_listeners(path, *, mode=0o666, backlog=None):
    ...

mode=0o666 matches what Twisted does; Tornado uses 0o600. We should research which is the better default.

The biggest issue is figuring out what to do about unlink-on-close. It's nice to keep things tidy, but it introduces a race condition if you're trying to do no-downtime upgrades...

njsmith commented 7 years ago

If we do do unlink on close, then it's important to note that getsockname is not reliable on AF_UNIX sockets. In particular, if you bind a socket and then rename the inode, getsockname will return the name you originally bound to, not the name the socket has now.
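
A quick illustration of that behavior (a sketch for demonstration only; paths are made up, Linux/macOS):

import os
import socket
import tempfile

# Bind an AF_UNIX socket, rename the inode, and observe that getsockname()
# still reports the path originally passed to bind().
tmp_dir = tempfile.mkdtemp()
old_path = os.path.join(tmp_dir, "old.sock")
new_path = os.path.join(tmp_dir, "new.sock")

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.bind(old_path)
os.rename(old_path, new_path)
print(sock.getsockname())  # prints .../old.sock, even though the file is now new.sock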

So one option would be SocketListener(..., unlink_on_close=None) / SocketListener(..., unlink_on_close="/path/to/unlink"). Another option would be to add a sockaddr=... option, which I guess we'll need anyway for #280, and then unlink_on_close=True would trust this instead of calling getsockname.

njsmith commented 6 years ago

I think the locking algorithm in https://github.com/python-trio/trio/issues/73#issuecomment-285752193 will work, but it's super annoying. So I'm leaning towards: start by implementing a version where we never clean up stale socket files, and then if/when someone complains, implement the full locking version, so we can keep the same API. [Edit: or maybe not... asyncio doesn't clean up stale socket files, and uvloop originally did. From this we learned that starting to clean up socket files when you didn't before is a backcompat-breaking change. So we should probably pick one option and stick to it.]

Other libraries, in their "try to blow away an existing socket file first" code, check to see if it's a socket and only unlink it if so. We aren't going to unlink the file, but we might still want to do a just-in-case check where if the target file exists and is not a socket, then error out instead of renaming over it.
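
A sketch of that just-in-case check (the helper name is made up for illustration):

import os
import stat

def refuse_to_clobber_non_socket(path):
    # If something exists at the target path and it isn't a socket,
    # error out instead of renaming over it.
    try:
        st = os.stat(path)
    except FileNotFoundError:
        return  # nothing there; renaming onto this path is fine
    if not stat.S_ISSOCK(st.st_mode):
        raise FileExistsError(f"Existing file is not a socket: {path}")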

Fuyukai commented 6 years ago

Hi, has any work been done on this? If not, I'd like to take a shot at implementing it. I'm working on converting one of my libraries to support curio and trio, and this is a missing piece on the trio side.

njsmith commented 6 years ago

Are you looking to implement helpers for clients, servers, or both? The client part should be pretty straightforward; the server part is more complicated and requires some subtle decisions that I've been procrastinating on :-). Either way, there's no one currently working on this, and if you want to go for it then that'd be awesome.

Fuyukai commented 6 years ago

I was looking at implementing both, but if you're waiting on design decisions I only need the client part so I can just restrict it to that for now.

njsmith commented 6 years ago

[Just discovered this half-written response that's apparently been lurking in a tab for like 3 months... whoops. I guess I'll post anyway, since it does have some useful summary of the issues, even if I didn't get around to making any conclusions...]

@SunDwarf Well, let me brain dump a bit about the issues, and maybe it'll lead to some conclusion, and maybe you'll find it interesting in any case :-)

The tricky parts are in starting and stopping a unix domain listening socket, and there are two issues that end up interacting in a complicated way.

First issue: the ideal way to set up a unix domain listening socket is to (a) bind to some temporary name, (b) call listen, and then (c) rename onto the correct name. Doing things this way has a really neat property: it lets us atomically replace an old listening socket with a new one, so that every connection attempt is guaranteed to go to either the old socket or the new socket. This can be used for zero-downtime server upgrades. In fact, this is the only way to do zero-downtime server upgrades without complicated extra machinery for coordinating the hand-off between servers – e.g., there is no way for one TCP listening socket to atomically replace another – so this is one of those things where if you need it, then you really need it. And there really aren't any downsides to doing things this way, so we should always do the bind/rename dance when setting up a listening socket.
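
A minimal sketch of that bind/listen/rename dance, with error handling and permissions left out (the function and temp-path naming are illustrative):

import os
import socket

def start_unix_listener(path, backlog=128):
    tmp_path = path + ".tmp"              # (a) bind to a temporary name...
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.bind(tmp_path)
    sock.listen(backlog)                  # (b) ...start listening...
    os.rename(tmp_path, path)             # (c) ...then atomically replace any old socket
    return sock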

Second issue: tearing down a socket. Unix domain sockets have really annoying semantics here: when you close a listening socket, it leaves behind a useless stray file in the filesystem. You can't re-open it or anything; it's just a random piece of litter. One consequence is that when creating a listening socket, you might need to clean up the old socket; the consensus seems to be that you should just go ahead and silently do this. The rename trick above automatically takes care of this. So leaving these behind doesn't really cause any problems; it's just ugly. To avoid that ugliness, many libraries try to delete (unlink) the socket file when shutting down.

Now here's the problem: if we want to support zero-downtime upgrades, we might be replacing the old server's socket with the new server's socket. If this happens just as the old server is shutting down, and the old server has code to clean up the socket file when it shuts down, then if the timing works out just right it might accidentally end up deleting the new socket file. This puts us in a difficult position: zero-downtime upgrades are a feature not many people need, but one that's irreplaceable when you do need it; OTOH not cleaning up old sockets is a low-grade annoyance for everyone, including lots of people who don't care about zero-downtime upgrades. And for bonus fun, we still don't have a full solution for zero-downtime upgrades, because we don't have a good way to do graceful shutdown of a server (meaning: drain the listening socket and then close it). See #14, #147.

We have a few options:

Bonus: it's not trivial to switch between these options later. For example, here's a bug filed on uvloop where code was breaking because asyncio doesn't clean up unix domain listening sockets, so someone wrote code that assumes this and tries to clean up manually, and uvloop did clean them up, so the manual cleanup code crashed. There are some things we could do to mitigate this, like put a giant warning in the docs saying that you shouldn't make assumptions about whether the stale socket gets left behind. But who knows if people would listen.

Quite a mess, huh?

njsmith commented 5 years ago

Now that the latest versions of Windows support AF_UNIX, I guess we'll have to take that into account as well.

According to some random SO answer, there is a way to get atomic rename on recent Windows (AFAICT "Windows 10 1607" is well before the AF_UNIX work): https://stackoverflow.com/a/51737582

[Edit: @pquentin found the docs! https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/content/ntifs/ns-ntifs-_file_rename_information ...From this, it's actually not clear whether it is atomic?]

Unfortunately, I won't believe we got the Windows details right without testing them (e.g. how does Windows handle a renamed AF_UNIX socket?), and I don't know whether Appveyor has a new enough version of Windows to let us test them yet.

Tronic commented 5 years ago

Is there any way to find out if the socket being closed is the final copy of that socket, or alternatively to keep track of copies created by fork etc? The socket should not be unlinked as long as there are other copies still listening.

I'm using the socket's mtime to track whether it has been replaced by another instance. There are a few race conditions (identical timestamps for sockets created at the same time, or someone getting in between the mtime check and the unlink, because I don't bother to use locks).
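
Roughly, the check described above (a sketch only, and still racy as noted):

import os

def unlink_if_not_replaced(path, mtime_at_bind_ns):
    # Only unlink the socket file if its mtime still matches what we recorded
    # when we bound it, i.e. no newer instance has renamed over it since.
    try:
        if os.stat(path).st_mtime_ns == mtime_at_bind_ns:
            os.unlink(path)
    except FileNotFoundError:
        pass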

smurfix commented 5 years ago

That's easy. Try to connect to it after closing your listener.

Tronic commented 5 years ago

Closing still needs work and I have to work on something else. Here's the work so far:

import os
import socket
import stat
from uuid import uuid4

import trio


class UnixSocketListener(trio.SocketListener):
    def __init__(self, sock, path, inode):
        self.path, self.inode = path, inode
        super().__init__(trio.socket.from_stdlib_socket(sock))

    @staticmethod
    def _create(path, mode, backlog):
        if os.path.exists(path) and not stat.S_ISSOCK(os.stat(path).st_mode):
            raise FileExistsError(f"Existing file is not a socket: {path}")
        sock = socket.socket(socket.AF_UNIX)
        try:
            # Using umask prevents others tampering with the socket during creation.
            # Unfortunately it also might affect other threads and signal handlers.
            tmp_path = f"{path}.{uuid4().hex[:8]}"
            old_mask = os.umask(0o777)
            try:
                sock.bind(tmp_path)
            finally:
                os.umask(old_mask)
            try:
                inode = os.stat(tmp_path).st_ino
                os.chmod(tmp_path, mode)  # os.fchmod doesn't work on sockets on MacOS
                sock.listen(backlog)
                os.rename(tmp_path, path)
            except:
                os.unlink(tmp_path)
                raise
        except:
            sock.close()
            raise
        return UnixSocketListener(sock, path, inode)

    @staticmethod
    async def create(path, *, mode=0o666, backlog=None):
        return await trio.to_thread.run_sync(
            UnixSocketListener._create, path, mode, backlog or 0xFFFF
        )

    def _close(self):
        try:
            # Test connection
            s = socket.socket(socket.AF_UNIX)
            try:
                s.connect(self.path)
            except ConnectionRefusedError:
                if self.inode == os.stat(self.path).st_ino:
                    os.unlink(self.path)
            finally:
                s.close()
        except Exception:
            pass

    async def aclose(self):
        with trio.fail_after(1) as cleanup:
            cleanup.shield = True
            await super().aclose()
            self._close()

async def open_unix_listeners(path, *, mode=0o666, backlog=None):
    return [await UnixSocketListener.create(path, mode=mode, backlog=backlog)]

The test connection trick doesn't work perfectly (closing races, I think), and in this version the cleanup is blocking. For some reason, trio.to_thread.run_sync fails with RuntimeError('must be called from async context')
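
For context, exercising the sketch above would look roughly like this (assuming the UnixSocketListener code above; the path and handler are just examples):

import trio

async def handler(stream):
    # Illustrative handler: send a greeting; serve_listeners handles stream cleanup.
    await stream.send_all(b"hello\n")

async def main():
    listeners = await open_unix_listeners("/tmp/demo.sock")
    await trio.serve_listeners(handler, listeners)

trio.run(main)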

Tronic commented 5 years ago

Minor irritation with atomic rename approach: the original random socket name also appears in peername seen by clients.

njsmith commented 5 years ago

Is there any way to find out if the socket being closed is the final copy of that socket, or alternatively to keep track of copies created by fork etc? The socket should not be unlinked as long as there are other copies still listening.

Maybe it's simpler to just offer users a way to disable auto-deletion? (Assuming we support auto-deletion at all...)

Minor irritation with atomic rename approach: the original random socket name also appears in peername seen by clients.

Oh, huh, good point. And we'll also see it if we call getsockname on our own socket.

We should add some API to Stream for accessing the local/remote addresses (#280). And on macOS getpeername is unreliable in general – so SocketListener needs to take the peername returned by accept and stash that in the SocketStream, instead of relying on getpeername like we do now. We can use a similar trick to make this API return the right addresses for Unix-domain sockets (i.e., the path that we know we connected or bound, not the path that the kernel returns from getsockname/getpeername).

Of course, that doesn't help for other non-Trio programs connecting to Trio-created Unix-domain sockets, but maybe that's not a big deal.
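
For illustration, the "use the address that accept() gives us" idea could look something like this (a hypothetical helper, not a real trio API):

import trio

async def accept_with_peername(listener: trio.SocketListener):
    # Take the peer address directly from accept(), rather than calling
    # getpeername() on the accepted socket afterwards.
    conn, peername = await listener.socket.accept()
    return trio.SocketStream(conn), peername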

       # Using umask prevents others tampering with the socket during creation.
       # Unfortunately it also might affect other threads and signal handlers.

Yeah, I don't think we can afford to use umask... too disruptive. However, as long as we call chmod before we call listen, I think we're fine, because the only thing chmod is controlling is which processes are allowed to connect, and until we call listen, no-one is allowed to connect.

We do need to think carefully about race conditions here though... imagine this code gets run by root, to bind a socket in a globally writable directory like /tmp. Can someone mess us up in a dangerous way?

Tronic commented 5 years ago

Maybe it's simpler to just offer users a way to disable auto-deletion? (Assuming we support auto-deletion at all...)

It is certainly difficult to do right. The test connection might also have side effects on the accepting end (e.g. application logging messages about failed connection handshakes). And if enabled by default without sufficient checks, multi-processing applications would get very nasty bugs (socket unlinked whenever the first worker terminates / is restarted).

Fortunately there is a simple and fairly safe solution. Never automatically unlink, but provide a utility function for unlinking. A multi-processing application could choose when to call this function (e.g. when it initiates shutdown, even before the workers actually terminate), and the socket would still internally check that the inode number matches.
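
A sketch of such a utility function (the name is hypothetical; it re-checks that the path is still a socket with the expected inode before unlinking):

import os
import stat

def unlink_unix_socket(path, expected_inode):
    try:
        st = os.stat(path)
    except FileNotFoundError:
        return  # already gone
    if stat.S_ISSOCK(st.st_mode) and st.st_ino == expected_inode:
        os.unlink(path)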

This leaves a race condition between the stat and the unlink, during which another instance might replace the socket. I can imagine this being an issue with restart patterns like "server-app & kill $oldpid" or "kill $oldpid; server-app &", which are broken in any case. One should wait for the new instance to signal that it has started (usually by creating its pidfile) before killing the old one:

  1. new instance replaces socket and then creates pidfile
  2. system daemon sees the new pidfile and kills the old instance
  3. old instance graceful shutdown: socket inode mismatch, no unlinking

Minor irritation with atomic rename approach: the original random socket name also appears in peername seen by clients.

Oh, huh, good point. And we'll also see it if we call getsockname on our own socket.

Considering unlink + bind without rename, isn't the downtime there extremely minimal? Any connections in old server's backlog stay there and a new socket is bound and listened within microseconds. Hardly any connections should appear within such window, and network clients have to have retrying logic in any case.

In particular, unlink/bind could work as a fallback on Windows if true atomic replacement is not possible.

njsmith commented 5 years ago

And if enabled by default without sufficient checks, multi-processing applications would get very nasty bugs (socket unlinked whenever the first worker terminates / is restarted).

I don't think I'm as worried about this as you are. Sharing a listening socket between processes isn't something you do by accident – it requires some careful work to get everything shared and set up properly. Even after we add an open_unix_listener helper, only the parent process will be able to use it – all the workers will need to convert a raw fd into a trio socket, manually wrap it in a SocketListener, etc. So we can assume everyone doing this is pretty familiar with these APIs, has read the docs, and is prepared to write a bit of code to say explicitly what they want. So if they also have to say autodelete=False, eh, it's not a big burden.
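
For example, a worker that inherits a listening AF_UNIX fd would need to do something along these lines (a sketch; the helper name is made up):

import socket
import trio

def listener_from_inherited_fd(fd):
    # Rebuild a stdlib socket object from the raw fd, convert it to a trio
    # socket, and wrap it in a SocketListener.
    stdlib_sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM, fileno=fd)
    return trio.SocketListener(trio.socket.from_stdlib_socket(stdlib_sock))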

When choosing the defaults, we want to put more weight on doing something sensible that works for less sophisticated users, who don't know enough to pick their own settings.

Considering unlink + bind without rename, isn't the downtime there extremely minimal? Any connections in old server's backlog stay there and a new socket is bound and listened within microseconds. Hardly any connections should appear within such window, and network clients have to have retrying logic in any case.

Sure, but "definitely no connections are lost" is still better than "probably not many connections are lost" :-). The rename part is cheap and easy, and the only downside is the weird getsockname/getpeername issue. But that's extremely minor. I can't think of any situation where you would use these to check the identity of a server socket – they're generally only used to identify clients. And even that doesn't make sense for Unix sockets, since clients are usually anonymous. So I'm not worried about using rename to create the socket. All the questions are about cleanup.

Tronic commented 5 years ago

I don't think I'm as worried about this as you are. Sharing a listening socket between processes isn't something you do by accident

Actually it is quite simple to do with the high-level API (probably not using Trio as intended, again):

import os, posix, trio

listeners = trio.run(trio.open_tcp_listeners, 5000)
for l in listeners: l.socket.set_inheritable(True)

posix.fork()
posix.fork()

trio.run(
    trio.serve_listeners,
    lambda s: s.send_all(b"Hello, I'm %d\n" % os.getpid()),
    listeners
)

Also, I suppose that sharing a listener object actually is something quite easily done by accident, even if the socket is not inheritable, by some completely unrelated fork() -- but that would have bigger issues: trying to close a fd that doesn't belong to the process.

njsmith commented 5 years ago

(os and posix are the same module, no need to import both. Also your set_inheritable calls there aren't doing anything – "inheritable" controls what happens across exec. File descriptors are always inherited across fork.)

Sharing trio objects across multiple calls to trio.run isn't formally supported... I guess what you wrote will work right now in practice, but no guarantees. It doesn't look very accidental, either :-)

Also, I suppose that sharing a listener object actually is something quite easily done by accident, even if the socket is not inheritable, by some completely unrelated fork() -- but that would have bigger issues: trying to close a fd that doesn't belong to the process.

Much bigger issues. If you try to fork inside trio.run you'll end up with two processes trying to share the same wakeup fd, epoll fd, etc. It doesn't work at all. (Of course fork+exec is fine.)

njsmith commented 5 years ago

[sorry, I fat-fingered that and posted a partial comment. I've reopened and edited my comment above to fill in the missing bits.]

Tronic commented 5 years ago

(os and posix are the same module, no need to import both. Also your set_inheritable calls there aren't doing anything – "inheritable" controls what happens across exec. File descriptors are always inherited across fork.)

Dang! Looks like I accidentally mixed things from the Nim language (where it is posix.fork()) and from Sanic's multiprocessing-based approach, having recently used both to implement pre-forking servers.

Sharing trio objects across multiple calls to trio.run isn't formally supported... I guess what you wrote will work right now in practice, but no guarantees. It doesn't look very accidental, either :-)

I have to admit that I tried forking in async context first. As you can imagine, it didn't end well. Maybe should have used exec or trio processes, but I haven't found a convenient way to call Python code of the currently running program (determining how the program was started and trying to run a new interpreter etc. gets complicated).

Sanic is using a similar approach with asyncio -- calling loop.run_until_complete in small pieces, before and after spawning worker processes. I hope that this remains supported in Trio.

Btw, I have to admire the eye you have for detail. I've read your blogs about KeyboardInterrupt/signals and Cancellation, and I see the same attitude here with getting the UNIX sockets precisely right. Actually I originally found Trio when looking for information about cancellation in asyncio and was astonished by how the entire topic was apparently being ignored by everyone else. Needless to say, I was instantly convinced to switch and seeing bits of that kind of quality everywhere I look in Trio is certainly satisfying (the "little" details like making sure that SSL server_hostname actually gets verified, or making sockets NODELAY by default -- details unnoticed by most but they make a big difference as a whole). Keep up the good work! :)

njsmith commented 4 years ago

I have to admit that I tried forking in async context first. As you can imagine, it didn't end well. Maybe should have used exec or trio processes, but I haven't found a convenient way to call Python code of the currently running program (determining how the program was started and trying to run a new interpreter etc. gets complicated).

Unfortunately, I don't think there's another reliable option... you can fork before starting any async code (and possibly pass the sockets over to the child processes later), but even then you'll probably find yourself having to support spawning new processes anyway to support Windows.

I'm pretty sure that if you don't change directories, don't change the environment, and run sys.executable, then that will reliably spawn a new python interpreter in the same python environment, for all cases except for when Python is embedded as a library inside some larger program (e.g. blender or libreoffice). Of course in those cases you can't really expect fork to work either... multiprocessing has the same problem, so it provides multiprocessing.set_executable, and folks who embed python but also want multiprocessing to work are supposed to call that during startup to give the location of a working python interpreter. You can fetch the value it will use by doing multiprocessing.spawn.get_executable(). (By default it's just the same as sys.executable.)
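
A tiny sketch of that (the module name is a placeholder):

import subprocess
import sys
from multiprocessing import spawn

# Use the same interpreter that multiprocessing would use, which falls back
# to sys.executable unless an embedding application overrode it.
exe = spawn.get_executable() or sys.executable
subprocess.Popen([exe, "-m", "myapp.worker"])  # "myapp.worker" is hypothetical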

Sanic is using a similar approach with asyncio -- calling loop.run_until_complete in small pieces, before and after spawning worker processes. I hope that this remains supported in Trio.

Unfortunately, it's not supported in Trio. It creates huge complications in asyncio because when run_until_complete returns, it effectively freezes all the other running tasks in-place, so deadlines don't work, you have to figure out what you want to do with those tasks if the user never re-starts the loop, etc. And in Trio it's not even very useful, since the nursery system means you can't have tasks that outlive a call to trio.run anyway.

Btw, I have to admire the eye you have for detail.

Ah, thank you! :relaxed:

Tronic commented 4 years ago

(a quicker response than I'd like, and quite off-topic too)

This makes me think that the sane solution is to accept that the main module gets re-run (including the if __name__ == "__main__": block).

Fortunately, I believe I can restructure the problem so that the first process starts normally but ends up opening the listening sockets, and then restarts the whole program as child processes that also end up in the same place -- but instead of opening sockets, they receive them pickled (and on Windows duplicated, as required) from the original one. This has the benefit that all state other than the socket is initialized in the normal way in each child process and won't need to be pickled at all.
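
The "pickled (and on Windows duplicated)" part might look roughly like this sketch (platform-specific, and the helper is only illustrative):

import socket
import sys

def export_listening_socket(sock, target_pid):
    # On Windows, socket.share() duplicates the handle for a specific child
    # process; on POSIX the fd can simply be inherited across fork/exec
    # (or sent over an AF_UNIX socket with SCM_RIGHTS).
    if sys.platform == "win32":
        return sock.share(target_pid)   # bytes for the child's socket.fromshare()
    return sock.fileno()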