/run is normally a tmpfs, so its contents disappear between two container invocations. Chances are you have garbage in /run when you restart your container, and that garbage prevents proper operation of the second invocation.
Please try cleaning up /run (rm -rf $container_root/run/*) before restarting and tell us if it fixes the issue.
(If it does, we'll change s6-overlay so it cleans up after itself. If it doesn't, it means there's a deeper problem.)
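For reference, a sketch of one way to locate $container_root from the host, assuming the overlay2 storage driver; the container name (container123) is taken from the logs below, so adjust both for your setup:

# The container's writable layer is where leftover /run contents live
# for a stopped container (overlay2 assumption; container123 is hypothetical).
container_root="$(docker inspect -f '{{ .GraphDriver.Data.UpperDir }}' container123)"
rm -rf "$container_root"/run/*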
Thank you for your answer. After cleaning up /run I can finally restart the container!
But restarting produces the following output:
docker container restart container123
container123 | s6-supervise s6-linux-init-shutdownd: warning: unable to write status file: No such file or directory
container123 | s6-supervise s6rc-oneshot-runner: warning: unable to write status file: No such file or directory
container123 | s6-supervise proxy: warning: unable to write status file: No such file or directory
container123 | s6-supervise s6rc-oneshot-runner: warning: unable to read supervise/death_tally: No such file or directory
container123 | s6-supervise s6-linux-init-shutdownd: warning: unable to read supervise/death_tally: No such file or directory
container123 | s6-supervise s6rc-oneshot-runner: warning: unable to write status file: No such file or directory
container123 | s6-supervise s6-linux-init-shutdownd: warning: unable to write status file: No such file or directory
container123 | s6-supervise proxy: warning: unable to read supervise/death_tally: No such file or directory
container123 | s6-supervise s6rc-oneshot-runner: warning: unable to write status file: No such file or directory
container123 | s6-supervise s6-linux-init-shutdownd: warning: unable to write status file: No such file or directory
container123 | s6-supervise proxy: warning: unable to write status file: No such file or directory
container123 exited with code 0
(proxy is my long-running service.)
Yeah, you can't clean up /run while the container is running, else the supervision tree gets confused (and you get the errors you're experiencing). You have to stop the container, then clean up /run from the outside, then start the container again. Really, /run is not supposed to survive reboots, or, in the container case, restarts.
We will modify s6-overlay so the cleanup is performed automatically, but nevertheless, you should make /run a tmpfs.
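A minimal sketch of that sequence, reusing the hypothetical container123 name and the overlay2 writable-layer path from above:

# Stop the container so the supervision tree is fully down.
docker stop container123
# Clean /run from the outside while nothing is running.
rm -rf "$(docker inspect -f '{{ .GraphDriver.Data.UpperDir }}' container123)"/run/*
# Start it again with a clean runtime directory.
docker start container123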
Mounting /run as a tmpfs (with docker-compose) results in a fun error:
/package/admin/s6-overlay-3.0.0.0-1/libexec/stage0: 77: exec: /run/s6/basedir/bin/init: Permission denied
Don't mount it with the noexec option. I know a lot of distributions use noexec as the default for /run "for security reasons 🤡", but that's a mistake; it brings nothing security-wise.
My /run isn't noexec.
tmpfs /run tmpfs rw,nosuid,nodev,size=26384000k,nr_inodes=819200,mode=755 0 0
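One way to compare that against what the container itself sees (a sketch; container123 is the name from earlier, and the container must be running for docker exec to work):

# Print the mount entry for /run as seen inside the container.
# (Assumes awk exists in the image; plain 'cat /proc/mounts' works too.)
docker exec container123 awk '$2 == "/run"' /proc/mounts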
Now that's interesting...
I would need access to all the details of your container setup in order to investigate what's going wrong. That may be a little difficult. Just in case, please check that /package/admin/execline/command/execlineb is executable, but it should definitely be... Short of that, I don't have any easy answers.
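(A quick sketch of that check, with a hypothetical your-image placeholder; overriding the entrypoint bypasses s6-overlay's init, so the container doesn't need to boot:)

# List the binary's permissions without starting the supervision tree.
docker run --rm --entrypoint ls your-image -l /package/admin/execline/command/execlineb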
In any case, I just pushed a version that performs a better /run cleanup at container start time, so if you can't mount /run as a tmpfs, things should still work. Only in the source for now, but we'll tag another release towards the end of the week.
/package/admin/execline/command/execlineb is executable as expected.
If it helps, I have images of the project in question available, as well as the matching docker-compose.yml I posted above: https://github.com/TheReverend403/gentoogram-bot/pkgs/container/gentoogram-bot
I just pulled and built s6-overlay and it seems to work now!
I have a lot of "linuxserver" containers running that use s6-overlay as their init/supervisor system, and I never had to mount /run as a tmpfs. Why is that? (Sorry, I'm not that experienced in this field.)
Previous versions of s6-overlay used other directories, mostly under /var, which is similar but not quite the same semantically, and is not mounted as a tmpfs (it must be persistent storage). Version 3 of s6-overlay stores its persistent data in persistent storage, but also stores its ephemeral data in ephemeral storage, e.g. subdirectories of /run.
@TheReverend403 How did you get your
tmpfs /run tmpfs rw,nosuid,nodev,size=26384000k,nr_inodes=819200,mode=755 0 0
line? All my attempts to make Docker mount a tmpfs ended with it being noexec, which is infuriating. And the volume-opt option does not work with --mount type=tmpfs.
I don't know what docker-compose does exactly, but it's a layer of complexity I don't want to dive into before solving the problem with a pure docker run.
In any case, I've been unable to mount a tmpfs without the noexec flag, and the tmpfs being noexec is the only explanation I can find for the exec: /run/s6/basedir/bin/init: Permission denied error.
@jprjr, would you happen to have any idea of how to force Docker to do this?
@skarnet My bad, I thought you were talking about /run on the host being noexec. Inside the container it's whatever defaults Docker uses, so yeah, noexec.
That said, I have found a solution.
For docker-cli, --tmpfs /run:exec. For docker-compose, tmpfs: /run:exec.
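Both forms in full, as a sketch; the image name (your-image) and service name (app) are placeholders, and exec is the option that clears Docker's default noexec flag:

# docker CLI: mount /run as an exec tmpfs.
docker run --rm --tmpfs /run:exec your-image

# docker-compose equivalent, inside the service definition:
#   services:
#     app:
#       tmpfs:
#         - /run:exec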
Oh, so it is possible to make Docker mount an exec tmpfs then. Excellent news, thanks!
Closing this since the current git makes s6-overlay work even without /run as a tmpfs; please reopen if the issue persists.
Hi, I just started playing around with s6-overlay (3.0.0.0-1) and I'm facing the issue that I'm not able to restart my container:
s6-rc-init: fatal: unable to supervise service directories in /run/s6-rc/servicedirs: s6-svscan not running on /run/service
container exited with code 130 (or 111)
I have just one longrun service (sleep 500).
In order to restart the container, I have to recreate it each time.
Do you have any idea? I can't find any new logs after trying to restart the container.
Thanks in advance!
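For reference, a minimal sketch of the kind of setup being described, using s6-overlay v3's s6-rc source format; the service name (sleeper) is made up:

# In the image build, declare a longrun service that just sleeps.
mkdir -p /etc/s6-overlay/s6-rc.d/sleeper
echo longrun > /etc/s6-overlay/s6-rc.d/sleeper/type
printf '#!/command/execlineb -P\nsleep 500\n' > /etc/s6-overlay/s6-rc.d/sleeper/run
chmod +x /etc/s6-overlay/s6-rc.d/sleeper/run
# Register it in the user bundle so s6-rc starts it at container boot.
mkdir -p /etc/s6-overlay/s6-rc.d/user/contents.d
touch /etc/s6-overlay/s6-rc.d/user/contents.d/sleeper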