airblade opened this issue 2 weeks ago
The last line of the stacktrace points here:
Hmm, maybe this is caused by `Type=notify` instead of `Type=simple`? I don't know.
I'd like to keep the socket activation so I can have graceful restarts.
The other benefit of socket activation is binding to a privileged port (which Caddy was taking care of previously).
If Thruster isn't compatible with socket activation, how can I run it with systemd without socket activation?
I tried to run it without socket activation (by commenting out `Requires=puma.socket` in my service file and removing the `puma.socket` file) but, although the service starts successfully, Thruster doesn't bind to 443.
Aha! Adding `AmbientCapabilities=CAP_NET_BIND_SERVICE` to `puma.service` allows Thruster to bind to 443.
OK, so I can run Thruster via systemd without socket activation. Yay!
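For anyone else hitting this: the capability can also live in a drop-in created with `systemctl edit puma`, which keeps the main unit file untouched. A sketch (the path below is the one `systemctl edit` creates by default):

```
# /etc/systemd/system/puma.service.d/override.conf
[Service]
AmbientCapabilities=CAP_NET_BIND_SERVICE
```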
Will Thruster accept an active socket from systemd? No worries if not. (I don't know Go but as far as I can tell from the code, it won't.)
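For what it's worth, the systemd side of the protocol is small: activated sockets are passed starting at fd 3, with `LISTEN_PID` and `LISTEN_FDS` set in the environment. A minimal Go sketch of what accepting one could look like (this is an illustration of the protocol, not Thruster's actual code; the function name is made up):

```go
package main

import (
	"fmt"
	"net"
	"os"
	"strconv"
)

// listenFdsStart is fixed by the systemd protocol: inherited sockets
// begin at file descriptor 3, after stdin/stdout/stderr.
const listenFdsStart = 3

// activatedListeners returns net.Listeners for any sockets passed in
// by systemd, or an empty slice when not socket-activated.
func activatedListeners() ([]net.Listener, error) {
	// LISTEN_PID must match our pid, otherwise the fds are not for us.
	if pid, err := strconv.Atoi(os.Getenv("LISTEN_PID")); err != nil || pid != os.Getpid() {
		return nil, nil
	}
	n, err := strconv.Atoi(os.Getenv("LISTEN_FDS"))
	if err != nil {
		return nil, nil
	}
	var listeners []net.Listener
	for fd := listenFdsStart; fd < listenFdsStart+n; fd++ {
		f := os.NewFile(uintptr(fd), fmt.Sprintf("LISTEN_FD_%d", fd))
		l, err := net.FileListener(f)
		f.Close() // net.FileListener dups the fd, so our copy can go
		if err != nil {
			return nil, err
		}
		listeners = append(listeners, l)
	}
	return listeners, nil
}

func main() {
	ls, _ := activatedListeners()
	fmt.Println("activated listeners:", len(ls))
}
```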
Yesterday I was trying to use socket activation with port 443, which I couldn't get to work.
Today I thought maybe it makes more sense to give Puma the activated socket. So I have been trying to use 3000 as an activated socket. This required getting Thruster to call `bin/puma` instead of `bin/rails server` so I could pass it `--bind-to-activated-sockets`.
However I always get `Address already in use - bind(2) for "0.0.0.0" port 3000`.
Puma's socket activation docs say:

> Any wrapper scripts which exec, or other indirections in ExecStart may result in activated socket file descriptors being closed before reaching the puma master process.
Does this apply to Thruster? I think it does an exec, but my Go isn't good enough to be sure.
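On the general fd question (not Thruster specifically): in Go, an `exec(2)`-style replacement via `syscall.Exec` keeps inherited descriptors, whereas spawning a child with `os/exec` forwards only stdin/stdout/stderr unless extra fds are listed explicitly. A small illustration of the `os/exec` side, using a pipe to stand in for an activated socket (`forwardOverFd3` is a made-up name, and `/proc/self/fd/3` makes this Linux-only):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// forwardOverFd3 hands the read end of a pipe to a child process as
// fd 3 (the same slot systemd uses for the first activated socket)
// and returns what the child read from it.
func forwardOverFd3(msg string) (string, error) {
	r, w, err := os.Pipe()
	if err != nil {
		return "", err
	}
	defer r.Close()

	// os/exec forwards only stdin/stdout/stderr by default. Any other
	// inherited fd must be listed in ExtraFiles, or the child never
	// sees it; that is the failure mode the Puma docs warn about.
	cmd := exec.Command("cat", "/proc/self/fd/3") // Linux: read fd 3 via procfs
	cmd.ExtraFiles = []*os.File{r}                // becomes fd 3 in the child

	fmt.Fprint(w, msg)
	w.Close() // close our write end so the child sees EOF

	out, err := cmd.Output()
	return string(out), err
}

func main() {
	out, err := forwardOverFd3("fd 3 made it to the child\n")
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```

Dropping the `ExtraFiles` line reproduces the symptom: the child finds nothing at fd 3, just as Puma would lose an activated socket behind a wrapper that spawns rather than execs.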
Hello!
Usually I deploy Puma behind Caddy, communicating over a socket and using systemd's socket activation. This allows graceful, zero-downtime restarts when deploying a new version of the code.
I'm trying to replace Caddy with Thruster. Ideally I'd like to keep the socket activation so I can have graceful restarts. (I'm not using containers or Kamal; instead I git-push to my production server where a git hook reloads/restarts everything.)
However I can't quite get it to work. My question is: is it actually possible for Thruster to accept a socket from systemd? If not, I'll stop :)
This is what I've got so far:
`/etc/systemd/system/puma.service`:
```
[Unit]
Description=Puma HTTP Server
After=network.target
Requires=puma.socket

[Service]
Type=notify
NotifyAccess=all
WatchdogSec=10
User=deploy
Group=deploy
WorkingDirectory=/var/www/fooapp
ExecStart=/var/www/fooapp/bin/thrust /var/www/fooapp/bin/rails server
Restart=always
Environment=MALLOC_ARENA_MAX=2
Environment=RAILS_MASTER_KEY=...
Environment=RAILS_ENV=production
Environment=RACK_ENV=production
Environment=WEB_CONCURRENCY=2
Environment=RAILS_MAX_THREADS=3
Environment=PUMA_MAX_THREADS=3
Environment=TLS_DOMAIN=fooapp.com
StandardOutput=append:/var/www/fooapp/log/rails-out.log
StandardError=append:/var/www/fooapp/log/rails-err.log
# This will default to "bash" if we don't specify it
SyslogIdentifier=puma

[Install]
WantedBy=multi-user.target
```

`/etc/systemd/system/puma.socket`:
```
[Unit]
Description=Puma HTTP Server Accept Sockets

[Socket]
ListenStream=443
# Socket options matching Puma defaults
NoDelay=true
ReusePort=true
Backlog=1024

[Install]
WantedBy=sockets.target
```

The `puma.socket` unit seems to start up fine but `puma.service` doesn't.
The stdout log looks normal but there's a big stacktrace in stderr:
This is all on Ubuntu 24.04 LTS with Thruster 0.1.4, Puma 6.4.2 and Ruby 3.3.3.