docker / compose

Define and run multi-container applications with Docker
https://docs.docker.com/compose/
Apache License 2.0

Hooks to run scripts on host before starting any containers #6736

Closed Jamie- closed 1 year ago

Jamie- commented 5 years ago

This is clearly a common problem that lots of people have been facing (even since 2014, #468). There's a pile of closed issues requesting similar functionality, and I believe they were closed entirely unreasonably.

Please see #1341 for a very concise argument as to why this functionality is useful, and judging by the reactions to most of the comments, it is quite a popular feature the community would like added.

Now it's over 2 years since #1341 was closed I believe hook-like functionality should be reconsidered.

Is your feature request related to a problem? Please describe.

There are many examples in #1341 already but I'll add my most recent use case for this.

I have a number of containers that are spun up, using compose, for development, which require a shared data directory. I also need to access that directory on my host. Inside each of my containers a Python program is started as a specific user (so as to mimic production as accurately as possible). Currently I mount this volume on each of my containers in docker-compose like so:

volumes:
  - "/tmp/data-var:/var/data"

However /tmp/data-var doesn't exist on my host (this is a shared development project), so Docker creates it for me, as root. Therefore my Python programs, running as non-root, cannot write to it.

Before docker-compose up starts any containers, I'd like to run something like mkdir /tmp/data-var && chmod +w /tmp/data-var. Then on docker-compose down, after all containers are destroyed, I'd like to remove the temp data directory with rm -rf /tmp/data-var.

I understand this could be accomplished in other ways; please see the alternatives section below for why they suck.

Describe the solution you'd like

I'd like to have two bash scripts, say pre-up.sh and post-down.sh and add them to be called via docker-compose with something like the following in my docker-compose.yml

version: "3"
pre-up: "./pre-up.sh"
post-down: "./post-down.sh"
services:
  service1:
    build: .
    volumes:
      - "/tmp/data-var:/var/data"
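A minimal sketch of what the two hypothetical hook scripts could contain (the pre-up/post-down keys above are proposed syntax, not something Compose supports; the DATA_DIR variable is added here just to make the script overridable):

```shell
#!/bin/sh
# pre-up.sh (hypothetical) -- prepare the shared bind-mount directory so
# Compose doesn't create it as root.
set -e
DATA_DIR="${DATA_DIR:-/tmp/data-var}"
mkdir -p "$DATA_DIR"
chmod a+w "$DATA_DIR"   # allow the non-root container user to write

# post-down.sh (hypothetical) would then simply remove it:
# rm -rf "$DATA_DIR"
```

Compose would run pre-up.sh before creating any containers, and post-down.sh after the last one is destroyed.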

Other possible hooks people might find useful:

When calling these, Compose should block at the specified point until the script has returned with an exit code of 0, and should itself exit with a non-zero code if the script exits non-zero.

Describe alternatives you've considered

There are alternatives for my example use case, and equally good reasons they're a bad fit.

1. Calling a script on container start

I could have an ENTRYPOINT ["start.sh"] which sets the correct permissions on the directory, specify my Python run command via CMD ["python", ...], and have start.sh finally call exec "$@". However this is wasteful: the first container to start, and every container restart after it, would repeat work that only needs to be done once, before any containers start.
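Sketched out (the script name and the appuser/path details are illustrative, not from any particular image), that wrapper entrypoint would look like:

```shell
#!/bin/sh
# start.sh (illustrative) -- per-container setup, then hand off to CMD.
set -e
# Fix ownership of the shared directory. Note this runs on *every*
# container start, which is the redundancy described above.
chown -R appuser:appuser /var/data 2>/dev/null || true
# Replace this shell with the real command passed as CMD/arguments.
exec "$@"
```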

Equally, it wouldn't solve my post-down: "./post-down.sh" use case.

2. Wrapping it up in a different script

I could write a wrapper script that calls docker-compose up, as has been suggested many times in other issues. Come on... we're all using Compose because it's concise, neat, tidy and simple to use. Everything is specified in one place, which makes it easy for beginners to understand and read what's going on. Compose is essentially the standard for using Docker with multiple containers.

3. Compose events

My understanding of events is admittedly lacking, given how complex they are for what I actually want to do. They are a poor way to achieve the goal I described, yet many similar issues were raised and all pointed to #1510 (compose events). Events are reactive where this needs to be proactive; more importantly, events do not block, and for many people, like me, blocking is essential.

phedders commented 4 years ago

Good luck... with Kubernetes. Docker Inc doesn't care.

ndeloof commented 4 years ago

Volume permissions are a very common issue with Docker: with bind mounts you effectively tell the engine "I'm in charge of this one, just expose it inside the container" and run into the obvious permission issues. Using named volumes, which are created with ownership set by the first container to use them, would help solve this.

What you describe as a proposed solution is pretty comparable to Kubernetes init containers; this is something we should consider for a future version of docker-compose. The main constraint is that the compose file format is not only used by docker-compose, so such a move will require some coordination with the docker stack command and compose-on-kubernetes. cc @chris-crone

jdiegosierra commented 4 years ago

It would be nice to run different scripts before starting any services. In my case I need to create some folders, and which ones depend on the service I want to start.

TomaszGasior commented 4 years ago

@jdiegosierra Can't you just do this in entrypoint script?

jdiegosierra commented 4 years ago

@TomaszGasior I mean I have to create folders on my host, and they depend on the service. As far as I understand entrypoint.sh, it runs commands inside the container, right?

TomaszGasior commented 4 years ago

@jdiegosierra If you need to create directories inside directories shared between host and container, you can create them inside container's entrypoint by wrapping original entrypoint into your own.

jdiegosierra commented 4 years ago

@TomaszGasior Yeah, I know :D But that's not my case...

My case is that I'm sharing a project folder with the container, except for the dist folder. I want the container to have its own dist folder and my host project its own dist folder, so I can develop with Docker or without it. So in my docker-compose I have this:

volumes:
  - ../../frontend:/opt/app
  - ../../frontend/dist
  - /opt/app/dist  # <- here is the problem

Also in my docker image I have this:

RUN groupadd appuser
RUN useradd -r -u 1001 -g appuser appuser
... build stuff
RUN chown appuser:appuser /opt/app -R

Inside my container the dist folder has appuser permissions and its built files, so that's fine for me. The problem is outside the container: docker-compose has created a folder called dist with root permissions, so if I want to build my project on my host I can't, because of permissions. However, if I create the dist folder on my host with appuser permissions before starting docker-compose, everything works as I want: the dist folder on my host is empty with appuser permissions, so I can also build my project on my host, and it doesn't conflict with the dist folder inside the container.

TomaszGasior commented 4 years ago

docker-compose has created a folder called dist with root permissions, so if I want to build my project on my host I can't because of permissions

As I understand it, what you need is to create a directory from inside the container's entrypoint, but with the permissions it would have if created from your host. If there is any directory inside your container with permissions from your host, you can use stat and chown.

Let's look at my example. I have a PHP application. composer, PHP's package manager, creates a vendor directory with all the app's dependencies. I want to run it from the container entrypoint, but with the permissions it would have if run from my host. Check it out: https://github.com/TomaszGasior/RadioLista-v3/blob/bf5692d3d767afcfa7c1ccf46109c4f653c85b1c/container/php-fpm/entrypoint.sh#L8

Basically what I am doing here is running the composer command with the same permissions (user and group) as the parent folder. You may take some inspiration from this. For example, you could create the folder with mkdir, then chown it to match the permissions of a different directory of your project that was created on your host; you can get those using stat.

jdiegosierra commented 4 years ago

That's not exactly what I need. I need to create an empty folder on my host called dist, with appuser permissions, before starting the service, so that docker-compose doesn't create a dist folder with root permissions.

btw, I appreciate your help :)

TomaszGasior commented 4 years ago

That's not exactly what I need. I need to create an empty folder on my host called dist, with appuser permissions, before starting the service, so that docker-compose doesn't create a dist folder with root permissions.

Or you could let Docker create that folder with root permissions and then fix the wrong permissions inside your entrypoint using the method I described. :) It works if you have any other directory, created on the host by your host user, available inside the container. Just stat the second one and chown the first one. Your host user doesn't have to exist inside your container: chown accepts the non-existing user/group ID returned by stat.

Something like: chown $(stat -c '%u:%g' /from-host) /in-container, where /from-host is a directory created by your user on the host OS and /in-container is the directory created by dockerd with the wrong permissions.
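The trick can be demonstrated without Docker at all; in this sketch the two mktemp directories stand in for /from-host and /in-container (stat -c is GNU coreutils syntax, as in the command above):

```shell
#!/bin/sh
# Copy ownership from one directory to another, as an entrypoint would do
# for a root-owned bind mount. The paths are placeholders.
set -e
from_host=$(mktemp -d)    # stands in for the directory created by the host user
in_container=$(mktemp -d) # stands in for the root-owned directory
# chown accepts a numeric uid:gid even when no matching user exists
chown "$(stat -c '%u:%g' "$from_host")" "$in_container"
```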

Ibsardar commented 3 years ago

@jdiegosierra If you need to create directories inside directories shared between host and container, you can create them inside container's entrypoint by wrapping original entrypoint into your own.

Just a side thought: why should I have to research what the ENTRYPOINT of the original image is just to prepend/append my own runtime commands? Shouldn't it be easier? Something like:

BEFORE_ENTRYPOINT echo "before"
AFTER_ENTRYPOINT echo "after"

So if the original ENTRYPOINT is:

ENTRYPOINT echo "original"

Then the next image that builds on this one would have the following value for ENTRYPOINT:

echo "before"
echo "original"
echo "after"

jvasile commented 3 years ago

I have a script that reads docker-compose.yaml and makes adjustments based on a few factors, then writes it back out. I basically want to filter docker-compose.yaml and pass an adjusted version to docker-compose. So far there is no elegant way to do this. A set of pre-up and post-down scripts would do nicely.
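In the meantime, that filter-then-up workflow can be sketched as follows. The sed expression is a stand-in for whatever adjustments your script makes, and the paths are illustrative; the docker invocation is left as a comment so the sketch stands alone:

```shell
#!/bin/sh
# Sketch: adjust docker-compose.yaml on the fly, then hand the result to Compose.
set -e
cat > /tmp/docker-compose.yaml <<'EOF'
services:
  app:
    image: myapp:latest
EOF
# stand-in filter: pin the image tag
sed 's/:latest/:1.2.3/' /tmp/docker-compose.yaml > /tmp/docker-compose.adjusted.yaml
# then: docker compose -f /tmp/docker-compose.adjusted.yaml up -d
# (or pipe it on stdin: sed ... docker-compose.yaml | docker compose -f - up -d)
```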

metaskills commented 3 years ago

@Ibsardar Gave me an idea. For my application service I used this:

version: "3"
services:
  myapp:
    entrypoint: ./bin/entrypoint

And I typically have bin scripts like console, server, and test, which are docker-compose run myapp ./bin/_test wrappers. So this technique, via the bin/entrypoint file below, was a nice way to do some pre-work before running the other scripts, which remain unchanged.

#!/bin/sh

# Do some blank AWS environment checking, etc...

# Run the orig script. Server, console, test, etc.
exec "$@"

peter-vanpoucke commented 3 years ago

It would also be great if the up script could set environment variables, which could then be used to further configure the services.

jvasile commented 3 years ago

Maybe this is useful to some folks: https://github.com/jvasile/docker-wrap

It lets you define a pre-up in docker-compose.yml. Patches welcome!

fsevenm commented 3 years ago

It would also be great if the up script could set environment variables, which could then be used to further configure the services.

That would be a great feature to have officially.

fsevenm commented 3 years ago

@Ibsardar Gave me an idea. For my application service I used this:

version: "3"
services:
  myapp:
    entrypoint: ./bin/entrypoint

And I typically have bin scripts like console, server, and test, which are docker-compose run myapp ./bin/_test wrappers. So this technique, via the bin/entrypoint file below, was a nice way to do some pre-work before running the other scripts, which remain unchanged.

#!/bin/sh

# Do some blank AWS environment checking, etc...

# Run the orig script. Server, console, test, etc.
exec "$@"

I think this issue is about adding hooks that run scripts before starting any containers, but the entrypoint option runs scripts after the containers start.

Enerccio commented 2 years ago

@jdiegosierra If you need to create directories inside directories shared between host and container, you can create them inside container's entrypoint by wrapping original entrypoint into your own.

Just a side thought: why should I have to research what the ENTRYPOINT of the original image is just to prepend/append my own runtime commands? Shouldn't it be easier? Something like:

BEFORE_ENTRYPOINT echo "before"
AFTER_ENTRYPOINT echo "after"

So if the original ENTRYPOINT is:

ENTRYPOINT echo "original"

Then the next image that builds on this one would have the following value for ENTRYPOINT:

echo "before"
echo "original"
echo "after"

why is this NOT A THING? Like wtf...

DesignByOnyx commented 2 years ago

+1 - I have more use cases if anybody needs them. I really like the idea of following the pattern of init containers so things can be reused in k8s and my app logic can be rid of bulky startup logic.

If we go the hook route, I'd suggest scoping all hooks under a "hooks" key, which could be global, per-profile, and per-service:

version: '3'
hooks:
  # global hooks
  __profiles__:
    dev:
      # profile hooks
services:
  bar:
    profiles:
      - dev
    hooks:
      # service hooks

I'd also like to suggest better semantics, as pre-up and post-up are a bit ambiguous (e.g. I'd expect post-up to happen when the container is no longer up; I point to package.json script semantics as the source of this confusion). Maybe consider something like:

joaopfg commented 2 years ago

In my use case, I need to generate the Dockerfile dynamically (depending on the setup of the machine that wants it) before docker-compose builds the image. I'm currently using a bash script that wraps both the creation of the Dockerfile (using another bash script) and the image build/container launch (using docker-compose).

It would be nice to do this with a pre-build hook, as mentioned in the message above. I'm launching this procedure together with many other procedures (on many computers) with Ansible, and all the other procedures can be launched with docker-compose alone. Only this particular one needs a wrapper bash script, which is annoying.

fly-studio commented 2 years ago

I forked this project and added the feature of HOOK:

https://github.com/fly-studio/docker-compose

It supports:

* Running commands before/after starting containers

* Global hooks

* Scoped hooks per service

PavelNiedoba commented 1 year ago

I forked this project and added the feature of HOOK:

https://github.com/fly-studio/docker-compose

It supports:

* Run command before/after starting containers

* Global hook

* Scoped hook for service

Is anybody merging it? People have been asking for this since 2015.

ndeloof commented 1 year ago

I'm in favor of container hooks, comparable to Kubernetes lifecycle hooks, as this could cover many use cases, like the typical "initialize database with dataset". In any case, that should be discussed under the compose specification first; see https://github.com/compose-spec/compose-spec/issues/84

I'm much more reticent about global pre-run scripts, especially running them on the host: this brings both security and portability concerns.

ndeloof commented 1 year ago

I created a proposal on this topic (at least, partially) feel free to comment/suggest changes https://github.com/compose-spec/compose-spec/pull/289

michaelkrieger commented 1 year ago

I created a proposal on this topic (at least, partially) feel free to comment/suggest changes compose-spec/compose-spec#289

Your PR runs within the container. This issue is more about running on the host.

my comment there:

Having the hooks run -outside- the container is more useful. Having pre/post is also more useful than “on”.

it adds the use cases:

Overall, doing it outside the container gives you the option of running both inside and outside the container. Hooks should fire on start, stop, restart, and perhaps, for completeness, remove/create. Hooks should run on the host system (and via docker exec could be run within this or another container).

Hooks should return 0 on success. A non-zero pre-startup hook should prevent startup; post-startup hooks should only throw a warning, as should all stop hooks. Hooks that run only within the container and not on the host are just an incomplete implementation.

I would also like to see pre_start/post_start, pre_stop/post_stop, and pre_restart/post_restart instead of the ambiguous "on" commands. There are lots of use cases when run on the host system.

ndeloof commented 1 year ago

I'm aware my proposal only partially addresses this issue. It is heavily inspired by Kubernetes lifecycle hooks, which are a proven solution.

I'm 👎 on running local commands from compose, as this would break portability for compose files. The usages you listed are obviously legitimate usages, but those should be addressed with a distinct approach.

michaelkrieger commented 1 year ago

I'm 👎 on running local commands from compose, as this would break portability for compose files. The usages you listed are obviously legitimate usages, but those should be addressed with a distinct approach.

I'm, on the other hand, very much for it. It's an extremely useful addition, and it's also what this thread/issue is about, as it explicitly says "host" in the title.

ndeloof commented 1 year ago

Again, I'm not saying https://github.com/compose-spec/compose-spec/pull/289 is intended to fully support this feature request; it just seems a reasonable addition on this topic. As for running host commands, I won't support that feature request. Maybe other maintainers will see some benefits.

benitogf commented 1 year ago

Your PR runs within the container. This is more for on the host

I agree that this issue is more about setting up the host, such as creating folders that containers will mount if they don't exist. Currently I keep a separate script, so I have to upload both docker-compose.yml and docker-pre-compose.sh to the machine. I'm not sure how this would fit the container lifecycle, though, since there's no lifecycle for the group of containers.

e.g. I have a setup script that needs to run only once (on the first run of the compose file), but if I add another container and modify the setup script, I'd need it to run again.

Maybe it would make sense to allow defining the script in the docker-compose.yml file, but leave when to run it to the user, via a separate command such as docker-compose setup.

ndeloof commented 1 year ago

@benitogf such a "setup" scenario can also be supported with an entrypoint script; it's up to you to make it idempotent, so that running it after the first time doesn't impact the deployment. Alternatively, you can define an "init container", i.e. a container that runs before your application container(s) and does all the required setup. An application container with a depends_on: condition: service_completed_successfully directive will only start after this setup completes.
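A sketch of that init-container pattern. The service names, image, uid:gid, and paths are illustrative; the compose file is written to a temp file here so the docker invocation can stay a comment:

```shell
#!/bin/sh
# Sketch: a "setup" service does the one-time work; "app" waits for it
# to finish successfully before starting.
set -e
cat > /tmp/compose-init-demo.yaml <<'EOF'
services:
  setup:
    image: alpine
    command: sh -c "mkdir -p /data/shared && chown 1001:1001 /data/shared"
    volumes:
      - appdata:/data
  app:
    image: alpine
    command: sleep infinity
    volumes:
      - appdata:/data
    depends_on:
      setup:
        condition: service_completed_successfully
volumes:
  appdata:
EOF
# then: docker compose -f /tmp/compose-init-demo.yaml up -d
```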

DesignByOnyx commented 1 year ago

I'm 👎 on running local commands from compose, as this would break portability for compose files.

I don't really buy the whole "breaking portability for compose files" - it seems like a premature assessment (with all due respect). I am needing this feature for the sole purpose of making sure our software runs consistently across different devices: initializing and validating configurations, checking for requisite artifacts, etc. If someone writes a hook that truly "breaks" portability, that should be the responsibility of the maintainers/consumers to fix... just like a bug. If it's a private internal compose file, the employees of the organization should fix it. If it's a public compose file, the community and maintainers should fix it. This feature shouldn't be held hostage by the ideologies of a select few. You're effectively saying "we don't trust people to write and maintain portable compose files" when in fact most people are striving for portability and they should be enabled to do so.

Here are some other things which currently "break portability" in docker:

I could keep going; there's already a long list of non-portable characteristics of Docker Compose, many of them host-related. Allowing people to run host scripts before/after the containers only serves to improve portability in 99.99% of cases, pragmatically speaking.

For example, my company maintains Windows and Linux containers and I am constantly having to switch modes. Occasionally I'm in the wrong mode, and it sometimes takes minutes before an error appears, because entire images have to be downloaded before Docker can report "wrong OS". We could literally save hours of developer time and gigabytes of internet traffic and disk space if we could run a little script to make sure Docker is in the right mode. You might say we should build and maintain our own proxy scripts, but that "breaks portability" much more than a simple hook, IMO.

ndeloof commented 1 year ago

By "portability" I mean you can't define a command to run on the host without forcing a specific OS (if you declare an sh script) and/or requiring some tools to be available on the user's machine. I'm not saying there's no use case for this. Docker and Compose's success is based on anyone being able to run arbitrary software once it's packaged as a Docker image. IMHO any setup/check script you need to run before your application can also be packaged as a Docker image and run as an init container; while this requires some more configuration (until we introduce support for init containers in Compose), it would cover most needs.

Windows vs. Linux containers is another category of challenge. I don't have a good answer for it, beyond using the platform attribute to fail fast.

ghnp5 commented 1 year ago

I really very much would love this too.

Especially to deal with permissions on mounted volumes and that kind of thing: when moving from one server to another, I want to be able to just run docker-compose up -d, and not have to remember to find and run any commands/scripts beforehand.

I keep wishing for a feature of docker-compose that simply runs a few commands for those specific cases.

Thanks! ✌🏼


Related: https://github.com/moby/moby/issues/2259

ndeloof commented 1 year ago

Closing this issue, as the Compose team agreed we don't want to support running commands on the host as part of the Compose lifecycle. We'd like to further investigate support for init containers, which would help with some of the reported use cases.

osher commented 4 months ago

Pity.

Here's the use case you're missing:

Many build setups, especially those of web apps, would love to run the bundle stages outside of Docker, avoiding multi-stage Dockerfiles where an unnecessary and heavy step is, for example, npm/yarn install.

For front-end code, the build cycle doesn't have to be part of the Docker build, because there isn't any platform-dependent component that has to match the image's guest OS. Install, test, lint, bundle/build are all done frequently on the developer's host as part of the regular dev cycle and can be reused from there; the Dockerfile only has to COPY/ADD the resulting dist into an already-hardened production image.

The same goes for applications whose backend also serves the front end's static files. The Docker build can install the backend's dependencies and so on, but it doesn't have to process the static files, as these can be provided by the host.

Currently the solution is to wrap it in a Makefile or Taskfile or some other scripting solution of that sort.

I would love to get rid of them, if only there were a canonical way to express these hooks in docker-compose...

benitogf commented 3 months ago

Pity.

Here's the use case you're missing:

Many build setups, especially those of web apps, would love to run the bundle stages outside of Docker, avoiding multi-stage Dockerfiles where an unnecessary and heavy step is, for example, npm/yarn install.

For front-end code, the build cycle doesn't have to be part of the Docker build, because there isn't any platform-dependent component that has to match the image's guest OS. Install, test, lint, bundle/build are all done frequently on the developer's host as part of the regular dev cycle and can be reused from there; the Dockerfile only has to COPY/ADD the resulting dist into an already-hardened production image.

The same goes for applications whose backend also serves the front end's static files. The Docker build can install the backend's dependencies and so on, but it doesn't have to process the static files, as these can be provided by the host.

Currently the solution is to wrap it in a Makefile or Taskfile or some other scripting solution of that sort.

I would love to get rid of them, if only there were a canonical way to express these hooks in docker-compose...

I think a multi-stage build should work for your use case; this is a Dockerfile I use for a web app:

# build stage
FROM node:12-alpine AS build-env
# make npm install run only when there are changes to 
# package.(lock).json
COPY dashboard/package.json /tmp/
COPY dashboard/package-lock.json /tmp/
RUN cd /tmp && npm install --legacy-peer-deps
ADD dashboard /src/dashboard
RUN cp -a /tmp/node_modules /src/dashboard/
RUN cd /src/dashboard && npm run build

# final stage
FROM alpine
RUN apk add nginx
RUN mkdir -p /var/lib/project/dashboard
RUN mkdir -p /var/log/project
WORKDIR /app
COPY --from=build-env /src/dashboard/build /var/lib/project/dashboard
COPY --from=build-env /src/dashboard/nginx.conf /etc/nginx/http.d/default.conf
CMD ["nginx", "-g", "daemon off;"]
EXPOSE 80

What's not covered is a way to create directories, or set their permissions, for containers that mount from the host filesystem via the compose file. Those steps can't be part of the container build process, so we have to carry around a separate script for them, apart from the docker-compose file 😢

eusouoviana commented 3 months ago

Unbelievable that something SO SIMPLE still doesn't exist in such an essential program as Docker.

s17534 commented 1 week ago

As a temporary workaround, we can use the systemd ExecStartPre= directive in the docker.service unit file.

In my example I used:

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=-/etc/default/docker
ExecStartPre=/usr/bin/mkdir /dev/shm/myDir
ExecStartPre=/usr/bin/chown 1007:1006 /dev/shm/myDir

To edit this file, run systemctl edit --full docker.service as root or with sudo.

Now, every time the Docker host finishes rebooting and the Docker service has started, my needed directory is set up the way I want.