Gabb1995 opened this issue 1 year ago
So by default, Docker applies the following rule:
Docker has limited patience when it comes to stopping containers.
There’s a timeout, which is 10 seconds by default for each container.
If a container does not respond to the SIGTERM signal, Docker waits out the full 10 seconds before force-killing it with SIGKILL.
Docker has a flag to shorten that timeout (`docker stop --time`), and docker-compose supports it as well, both as `--timeout` on `stop`/`down` and as `stop_grace_period` in the compose file, so if you need a shorter time to stop the containers you should look into those.
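For example, something along these lines (the 2-second value and the container name are just placeholders for illustration):

```sh
# Give a single container only 2 seconds before it gets SIGKILLed
docker stop --time 2 redis-cluster

# Same idea for the whole compose stack
docker-compose down --timeout 2
```

The `stop_grace_period` key does the same thing per service directly in the compose file.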
But the core issue is bash, and the fact that this container runs a bash script which starts all of the Redis servers.
Normally, bash will ignore any signals while a child process is executing.
Starting the server with & will background it into the shell's job control system,
with $! holding the server's PID (to be used with wait and kill).
Calling wait will then wait for the job with the specified PID (the server) to finish, or for a trapped signal to arrive.
I guess adding a SIGTERM trap should not be that difficult, so that the container exits more cleanly; roughly along the lines of the sketch below.
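This is only a sketch of the pattern, not the actual entrypoint of this image — the ports and the way the servers are launched here are placeholders:

```bash
#!/usr/bin/env bash
# Hypothetical entrypoint: start the servers in the background, trap SIGTERM,
# and forward it so the container stops without hitting Docker's timeout.

pids=()

# Placeholder for however the image actually launches its Redis servers.
for port in 7000 7001 7002; do
    redis-server --port "$port" &
    pids+=("$!")                     # $! is the PID of the backgrounded server
done

# On SIGTERM/SIGINT, pass the signal on to every server we started.
term_handler() {
    kill -TERM "${pids[@]}" 2>/dev/null
}
trap term_handler SIGTERM SIGINT

# The first wait returns either when the servers exit or when a trapped signal
# fires; the second wait lets the servers finish shutting down after the trap.
wait "${pids[@]}"
wait "${pids[@]}"
```

With that in place, `docker stop` delivers SIGTERM to the script, the trap forwards it to the servers, and the container exits as soon as they are down instead of waiting out the 10-second timeout.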
I just corrected this in my own usage using `dumb-init` and `exec`'ing the `tail`. Seemed to work well.
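For context, roughly what that looks like — the entrypoint path and log locations here are assumptions, not the actual layout of this image:

```bash
# In the Dockerfile (assumed):
#   ENTRYPOINT ["dumb-init", "--", "/docker-entrypoint.sh"]

# At the end of /docker-entrypoint.sh: exec replaces the shell with tail, so
# tail runs as a direct child of dumb-init, which forwards SIGTERM to it.
# The container then stops immediately instead of waiting out the timeout.
exec tail -f /var/log/redis/*.log
```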
As can be seen in the image above, when running `docker-compose down` it always takes a bit more than 10 seconds to bring this container down, while other containers are much faster.
I just want to start a discussion on why this is happening. Is it on purpose, or is it a bug?