Malvineous opened this issue 5 years ago
@Malvineous If your Docker container was killed while you were running it (docker run example), then the restart policy won't restart it, since it had already been killed before you restarted the daemon. From the docs:
Always restart the container regardless of the exit status, including on daemon startup, _except if the container was put into a stopped state_ before the Docker daemon was stopped.
As a suggestion, check the output of docker ps just after you run your container with docker run --restart=unless-stopped example.
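A quick check might look like this ("example" here is just a placeholder image and container name):
# Start the container with the unless-stopped policy, then confirm it is listed
docker run -d --restart=unless-stopped --name example example
docker ps --filter name=example
# Verify the restart policy that was actually recorded
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' example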
The Docker container does not get killed; it runs successfully for many days after starting. The machine can reboot and the container will come up again afterwards (or at least it did the last time I rebooted, which was on an earlier Docker version).
Now, as I said in the 'steps to reproduce' above, the container starts successfully and can again run for many days; however, once you restart the Docker daemon with systemctl (without stopping the container yourself), it doesn't come back when Docker restarts.
--restart=unless-stopped suggests that the container will restart so long as you don't stop the container with docker stop, which I have not done.
I have the same problem. After upgrading from an older version to 18.09.1, containers that were running up to the moment the system is rebooted are not started after the reboot.
As a workaround I changed the restart policy (docker update --restart=always my-container), which works for me, but it would be nice if this bug could be solved!
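For anyone with many affected containers, a rough bulk form of that workaround (assuming every existing container really should come back automatically) looks like this:
# Switch every container's restart policy to "always"
docker update --restart=always $(docker ps -aq)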
I have the same problem here, on Docker CE 19.03.2. The workaround mentioned by @Jangor67 is our solution for the moment: set the container to "restart always".
I think when you reboot, systemd will stop the docker service, which in turn takes care of stopping all containers. Perhaps that's why the container will have the "stopped" exit status and not get started when the docker service starts again after a reboot.
This might be the case, but then the bug is that when Docker is being shut down, it also marks all the containers as stopped even though the user did not request that the containers be stopped. The containers should only be marked as stopped when docker stop is run.
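One way to see how the daemon has recorded a container after a reboot or daemon restart (using a hypothetical container named "example") is:
# Shows the state Docker has recorded and the restart policy it thinks applies
docker inspect -f 'status={{.State.Status}} exit={{.State.ExitCode}} restart={{.HostConfig.RestartPolicy.Name}}' example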
I hit a similar issue to this. The cause ended up being using docker kill --signal HUP <container>, which Docker assumes will stop the container and marks it as not being restartable. I don't see that in your reproduction steps though, so it could be a different issue.
Docker 19.03.4, CentOS 7.6
@markgoddard Using docker kill implies you want the container stopped, so --restart=unless-stopped would appear to make sense there. (Otherwise you would use --restart=always, and then the container would be restarted immediately after you ran docker kill.)
In the case of this bug, we haven't stopped the containers (just rebooted the system) but Docker is acting as if we have requested each container be permanently stopped, when we haven't made such a request.
I'm using SIGHUP, which doesn't typically kill a process. I found an old docker bug about the issue: https://github.com/moby/moby/issues/11065. Thanks for replying.
Having the same issue on CentOS 7, Docker version 19.03.8.
Having this issue with Ubuntu 18.04 on system reboot. Some containers restart, some don't.
$ docker --version
Docker version 19.03.12, build 48a66213fe
I know this issue isn't related to Windows, but since this page comes up when searching for this issue on Google, I wanted to note that I also have the issue when our Windows Server 2016 server gets rebooted.
+1 on Ubuntu 20.04 armhf with docker.io package
$ docker -v
Docker version 19.03.8, build afacb8b7f0
As a workaround I switched to the docker-ce package from the 18.04 repository (because armhf has no release candidate in the 20.04 repository, as per #1035)
$ docker -v
Docker version 19.03.13, build 4484c46
Also receiving this issue on an Ubuntu 20.04 VM running inside ESXi, using docker.io.
$ docker -v
Docker version 19.03.8, build afacb8b7f0
I tried upgrading to docker-ce, as mentioned by @gimiki, using this official guide, and suddenly the problem was fixed: containers restart on reboot without further input.
I notice this guide specifically says "Older versions of Docker were called docker, docker.io, or docker-engine. If these are installed, uninstall them." Does this mean the docker.io package in the Ubuntu repositories is no longer supported, and using docker-ce is now the preferred method for installing Docker on Ubuntu?
If so, perhaps the docker.io package should be marked as deprecated, or some effort should be made to bring docker-ce into the mainline Ubuntu repositories.
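For reference, the repository-based install that the guide describes boils down to roughly the following on Ubuntu (the exact keyring and repository steps may differ between guide revisions, so treat this as a sketch rather than the authoritative procedure):
# Remove the older packaging, then install docker-ce from Docker's own apt repository
sudo apt-get remove docker docker.io docker-engine
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update && sudo apt-get install docker-ce docker-ce-cli containerd.io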
The cause ended up being using docker kill --signal HUP <container>, which Docker assumes will stop the container and marks it as not being restartable.
Thank you, this solved a problem I was having too. I use docker kill -sHUP to reload the configuration of a daemon, but I didn't realize it would cause Docker to assume the container should be in a stopped state!
While waiting for a fix, here is a workaround that doesn't break the restart policy:
PID=$(docker inspect -f "{{.State.Pid}}" <container>)
kill -SIGHUP $PID
Enjoy :o)
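Another option, if entering the container is acceptable and its image ships a kill binary, is to send the signal from inside the container instead; this sidesteps docker kill entirely, so it shouldn't change the container's recorded stopped state (it assumes PID 1 in the container is the daemon that should receive the HUP):
docker exec <container> kill -HUP 1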
I found this bug while researching a similar issue.
Containers set (via docker compose) to restart "unless-stopped" are not restarted when the docker daemon reloads.
Twice now I have had the docker daemon abruptly exit due to a nil pointer condition. While I haven't been able to find a reason for that, I was surprised that my containers were not coming back up when the daemon managed to restart.
I would have thought that the daemon failing, for whatever reason, would have resulted in the containers all being restarted.
As a band-aid I have set all my containers to a restart policy of "always".
For details, see the Docker Restart Policy documentation.
Expected behavior
Launching a container with --restart=unless-stopped should restart the container when the daemon reloads.

Actual behavior
Containers are not restarted. The docs say they will be restarted unless they are stopped first, but restarting the daemon doesn't imply changing any containers to the 'stopped' state. (e.g. in my case I just upgraded Docker, and the docker CLI couldn't communicate with the still-running old daemon version, so I wanted to restart the daemon. Doing so allowed the CLI to communicate with it again, but all my containers disappeared.)

Steps to reproduce the behavior
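(A condensed sketch of the steps described in the comments above; "example" stands in for the actual image and container name.)
# 1. Start a container with the unless-stopped policy and leave it running
docker run -d --restart=unless-stopped --name example example
# 2. Restart the Docker daemon without stopping the container first
sudo systemctl restart docker
# 3. Expected: the container is running again; actual: it is no longer listed
docker ps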
Output of docker version:

Output of docker info: