shyim opened this issue 1 year ago
Interesting idea. I don't see why not
This might warrant a new issue, but I feel like it's relevant enough to ask here.
There's a feature in systemd (and inetd) called "socket activation" where you let the init system listen on a socket for you and hand it to the process, either via stdin or via file descriptor 3. Socket-activated services usually also come with some kind of idle watchdog that automatically stops the process when there's no activity.
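On the Rust side, my understanding is that the inherited socket could be picked up with something like the `listenfd` crate, which reads the `LISTEN_FDS` environment variable systemd sets. A rough sketch of the pattern (the fallback address is just an example, not atuin's actual config handling):

```rust
use listenfd::ListenFd;
use std::net::TcpListener;

/// Use a socket inherited from systemd (fd 3) if one was passed,
/// otherwise bind a port ourselves like a normal startup.
fn get_listener() -> std::io::Result<TcpListener> {
    let mut fds = ListenFd::from_env(); // checks LISTEN_PID/LISTEN_FDS
    match fds.take_tcp_listener(0)? {
        Some(listener) => Ok(listener),              // socket-activated
        None => TcpListener::bind("127.0.0.1:8888"), // self-bound
    }
}
```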
I suspect a lot of the people who are self-hosting the atuin server are hosting it just for themselves, or at most a few other people, so I think it would make sense to add this feature in order to save resources. There isn't a constant flow of traffic every single minute, and the process doesn't need to be running all the time.
Is this something that sounds within the scope of the project?
I don't think the atuin server idle memory usage is all that much (couple megabytes at most).
An idle Postgres connection pool needs no active handling, and the server will just sleep in the OS until a new connection is opened.
That being said, it does seem like it could easily be achieved. Replacing https://github.com/atuinsh/atuin/blob/2b1d39e270cb28e68403ba1a909378a6920b2208/atuin-server/src/lib.rs#L68 with https://docs.rs/hyperlocal/0.8.0/hyperlocal/trait.UnixServerExt.html#tymethod.bind_unix seems like it would do the trick.
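Roughly, the swap would look like this; untested, and `make_router()` is just a stand-in for however atuin actually builds its axum Router:

```rust
use axum::{routing::get, Router};
use hyper::Server;
use hyperlocal::UnixServerExt; // provides Server::bind_unix
use std::path::Path;

// Stand-in for atuin's real router construction.
fn make_router() -> Router {
    Router::new().route("/", get(|| async { "ok" }))
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let path = Path::new("/run/atuin/atuin.sock");
    let _ = std::fs::remove_file(path); // clear a stale socket file
    Server::bind_unix(path)?
        .serve(make_router().into_make_service())
        .await?;
    Ok(())
}
```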
I don't think we'd accept adding this directly but we might make it possible for users to modify their own servers to make it work. No promises here
> I don't think the atuin server idle memory usage is all that much (couple megabytes at most).
I'm not really concerned with the memory. I'm more interested in the fact that the OS has to do scheduling and context switching for this process only for it to instantly yield. While atuin is certainly not the biggest offender, and the OS scheduling algorithm can mitigate it quite well, it can quickly add up when you're running a bunch of services like this.
> I don't think we'd accept adding this directly but we might make it possible for users to modify their own servers to make it work.
That's okay. For anyone else finding this issue: systemd provides a tool (systemd-socket-proxyd) for doing this for services that don't support it natively. While not ideal, it's a solution.
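If I understand the man page correctly, the setup looks roughly like this; unit names, ports, and the idle timeout are all illustrative, and atuin itself would keep listening on its own local port:

```ini
# atuin-proxy.socket — systemd owns the public port
[Socket]
ListenStream=8888

[Install]
WantedBy=sockets.target
```

```ini
# atuin-proxy.service — started on the first connection, forwards
# traffic to the real atuin server, exits after 5 idle minutes
[Unit]
Requires=atuin.service
After=atuin.service

[Service]
ExecStart=/usr/lib/systemd/systemd-socket-proxyd --exit-idle-time=5min 127.0.0.1:8889
```

This assumes atuin.service is configured to listen on 127.0.0.1:8889, so the proxy's public port doesn't collide with it.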
I'd be open to reviewing a PR implementing this, but we don't have the time to prioritise it now
> the OS has to do scheduling and context switching for this process only for it to instantly yield
If there are no active connections, our HTTP server's runtime waits on an epoll on the TcpListener. I see no reason a competent OS would need to spuriously wake up the threads.
I can probably strace it to verify, but I wouldn't be concerned about any idle performance unless you can prove it isn't idling properly. If so, I'd probably try to fix that.
EDIT: `strace` confirms. When there are no connections, it waits for the OS to receive a connection before any threads get rescheduled. This is not an issue, and punting the TCP logic to systemd is just extra work.
I would like to run my own server behind a proxy (Caddy). To make this setup cleaner, I would prefer a Unix socket file instead of a port (configurable, of course).
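For context, the Caddy side would then be something like this (hostname and socket path are made up):

```
atuin.example.com {
    reverse_proxy unix//run/atuin/atuin.sock
}
```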