that0n3guy opened 5 years ago
It'd only make sense if the base image was using it by default to manage multiple processes, wouldn't it?
If you want to use it, what else are you adding for supervisord to manage? I've only really seen it in PHP images that include an additional process, like nginx, to manage within a single image. Docker itself will keep the main process running just fine without supervisord, and you can configure restart behaviour and all that with Docker.
If you need proper PID 1 handling to avoid zombie processes, tini covers that (and is available in Docker via `--init`, iirc), although these images already bundle tini in their Dockerfiles.
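For anyone building their own image, a minimal sketch of bundling tini yourself might look like this (the base image and command here are hypothetical examples, not taken from this repo; alternatively, just pass `--init` to `docker run`):

```dockerfile
# Hypothetical sketch: bundling tini as PID 1 in a custom image.
# Equivalent at runtime: `docker run --init your-image`
FROM php:8.2-fpm-alpine
RUN apk add --no-cache tini
# tini runs as PID 1, reaps zombies, and forwards signals to the child
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["php-fpm"]
```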
Be aware that adding supervisord requires Python 2, which afaik adds at least 40MB to the image.
I think it makes sense because the image includes a cron alternative, which is already used to run multiple processes within the container.
My example is using Laravel's queue watcher. It's often run under supervisord to keep it going if it bombs out. The app I'm deploying uses queues, and it makes sense to run it with supervisord.
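For reference, a typical supervisord program entry for a Laravel queue worker might look like the sketch below (the program name, paths, and options are illustrative assumptions, not taken from any image in this repo):

```ini
; Hypothetical supervisord config for a Laravel queue worker.
[program:laravel-queue]
command=php /var/www/html/artisan queue:work --tries=3
autostart=true
autorestart=true              ; restart the worker if it bombs out
numprocs=1
stdout_logfile=/dev/stdout    ; log to the container's stdout
stdout_logfile_maxbytes=0
redirect_stderr=true
```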
supercronic is about 12MB in weight. There's go-supervisord for a binary Golang version of supervisord; it's not necessarily 1:1 parity with the Python one, but it's about 10MB (or 15MB statically linked). s6-overlay is like ~3MB.
Maybe you can get one of those on the fat images, or in the meantime, extend your preferred image with your own Dockerfile based off one here. Or fork the repo and send a PR?
The Laravel queue example sounds like a fair enough case, thanks for sharing it. I'm not sure that 40MB should get pulled in if the majority of users don't need it, though.
Yeah, I'm fine with whatever. To match the rest of the container's features, it should probably be set up so it can be configured via env vars.
I've also used runit via https://github.com/phusion/baseimage-docker (I don't know its exact size, but I think it's uber small)... not really the same thing, but close.
runit seems to be <300KB.
Winner winner chicken dinner. I think it's designed to be set up as process 1 though... so it's beyond me how to get it into a container...
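One way it's commonly wired up (a hedged sketch under my own assumptions; the base image, service directory layout, and script names are hypothetical) is to make `runsvdir` the container's main process and let it supervise service directories:

```dockerfile
# Hypothetical runit setup: runsvdir as the container's main process.
FROM alpine:3.19
RUN apk add --no-cache runit
# Each /etc/service/<name>/run is an executable script that exec's one process
COPY service/ /etc/service/
# runsvdir scans /etc/service and starts a runsv supervisor per service
CMD ["runsvdir", "-P", "/etc/service"]
```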
Bumping this; I've run into another instance where I need a process manager.
Bump again... running a Laravel project that would use this.
I think it's a bad idea, because if one process fails or requires more resources, the manager (like the Kubernetes scheduler) can't identify which one to do magical things with (like moving the process to another machine). See: https://docs.docker.com/config/containers/multi-service_container/
Instead, you can create a separate container for each process; you can use the cli image for that.
But I know it's a little bit difficult in a small infrastructure, and useless without a scheduler. If you want to create a PR that implements a supervisor, I will merge it. Some recommendations:
> I think it's a bad idea, because if one process fails or requires more resources, the manager (like the Kubernetes scheduler) can't identify which one to do magical things with (like moving the process to another machine).
If you run multiple long-lived processes in a container, it's often good to have a process manager handle it. For example, we use `supervisord` in https://github.com/docker-mailserver/docker-mailserver, which bundles many logical services into a single container.
When you have a logical scope, it can be OK to have multiple processes; it's better to split out when the units themselves are genuinely suitable for isolation. With the mail server project I referenced, one of the goals that differentiates it from other projects is that we keep it all in one container, sort of like if you installed it without containers. That caters to a different set of users; for users who need more granular choice in what is used, and the ability to scale and orchestrate containers across multiple servers, with other requirements, we have competitors :sweat_smile:
I don't have time to look through all the Dockerfiles and entrypoints of this project (I don't use it myself), so I'm not sure if there is a benefit to using a process manager by default.
On the project I linked, one of the maintainers mentioned it was important to have `dumb-init` or `tini` as PID 1 and the entrypoint, which we do with `dumb-init`. That then calls `supervisord`, which launches a service to run our actual entrypoint script. Running a script before that as `ENTRYPOINT` which eventually calls `dumb-init`/`tini` apparently wasn't the correct approach for it to work correctly.
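The chain described above could be sketched roughly like this (a hedged illustration under my own assumptions; the base image, package names, and config paths are examples, not the actual Dockerfile of either project):

```dockerfile
# Hypothetical sketch of the chain: dumb-init (PID 1) -> supervisord -> services.
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
        dumb-init supervisor \
    && rm -rf /var/lib/apt/lists/*
COPY supervisord.conf /etc/supervisor/supervisord.conf
# dumb-init is PID 1: it reaps zombies and forwards signals;
# supervisord runs in the foreground (-n) and manages the services
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]
```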
@that0n3guy You don't need supervisord to run Laravel queues.
This is what I'm using in production for one of my projects:
`docker-compose.yml`:

```yml
version: '3.8'

volumes:
  db:

x-app-common: &app-common
  image: ${APP_VERSION}
  environment:
    DB_HOST: db
    DB_DATABASE: laravel
    DB_USERNAME: laravel
    DB_PASSWORD: laravel

services:
  app:
    restart: always
    <<: *app-common
    healthcheck:
      test: [ "CMD", "curl", "-s", "-f", "-i", "http://localhost" ]
      interval: 20s
      timeout: 10s
      start_period: 15s
      retries: 10
    depends_on:
      - maintenance

  worker:
    <<: *app-common
    deploy:
      mode: replicated
      replicas: 2
    restart: always
    command: php artisan queue:work --timeout=3000 --tries=3
    depends_on:
      db:
        condition: service_healthy
      app:
        condition: service_healthy

  db:
    image: bitnami/mariadb:10.5
    restart: always
    environment:
      MARIADB_ROOT_PASSWORD: laravel_root
      MARIADB_DATABASE: laravel
      MARIADB_USER: laravel
      MARIADB_PASSWORD: laravel
    volumes:
      - db:/bitnami/mariadb/data
    healthcheck:
      test: [ 'CMD', '/opt/bitnami/scripts/mariadb/healthcheck.sh' ]
      interval: 15s
      timeout: 5s
      retries: 6
      start_period: 15s
```
It would be nice to have something similar to http://supervisord.org/introduction.html for keeping processes running. I know I can add this quickly by building my own image, but it's the default in a lot of other images I'm using, and it would be nice just to have it built in. Thanks!