docker / compose

Define and run multi-container applications with Docker
https://docs.docker.com/compose/
Apache License 2.0

Execute a command after run #1809

Closed ahmet2mir closed 8 years ago

ahmet2mir commented 8 years ago

Hi,

It would be very helpful to have something like "onrun" in the YAML, to be able to run commands after the run. Similar to https://github.com/docker/docker/issues/8860

mongodb:
    image: mongo:3.0.2
    hostname: myhostname
    domainname: domain.lan
    volumes:
        - /data/mongodb:/data
    ports:
        - "27017:27017" 
    onrun:
        - mongodump --host db2dump.domain.lan --port 27017 --out /data/mongodb/dumps/latest
        - mongorestore -d database /data/mongodb/dumps/latest/database

After mongodb starts, it will dump db2dump.domain.lan and restore it.

When I stop and then start the container again, the onrun part will not be executed, to preserve idempotency.

EDIT 15 June 2020

Five years later, Compose wants to "standardize" the specification; please check https://github.com/compose-spec/compose-spec/issues/84

omeid commented 6 years ago

@reduardo7 Then you might as well drop docker-compose altogether; that way you have one less dependency.

reduardo7 commented 6 years ago

@omeid, you are right! It's a workaround to perform a similar task, sorry!

omeid commented 6 years ago

@reduardo7 No need to apologize; what you have posted is probably going to be useful to some people. I was just pointing out that the original issue still stands and shouldn't have been closed. :)

jiunbae commented 6 years ago

I understand @dnephin's stance; the functionality requested here can be achieved with existing features, albeit differently.

However, if such patterns are used frequently, how about presenting a guide (or some tests) so that others can easily use them?

There seems to be no disagreement that this pattern is used frequently.

omeid commented 6 years ago

@MaybeS The only disagreement is that @dnephin would rather see his dopey tool promoted instead of helping make docker-compose a better product.

dantebarba commented 6 years ago

@omeid yes indeed.

SvenDowideit commented 6 years ago

today's example of wanting a way for compose to do some form of onrun:

version: "3.3"
services:
  gitlab:
    image: 'gitlab/gitlab-ce:latest'
    restart: always
    hostname: 'gitlab'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        # NOTE: this URL needs to be right both for users, and for the runner to be able to resolve :() - as it's the repo URL that is used for the CI job, and the pull URL for users.
        external_url 'http://gitlab:9090'
        gitlab_rails['gitlab_shell_ssh_port'] = 2224
    ports:
      - '9090:9090'
      - '2224:22'
  gitlab-runner:
    image: gitlab/gitlab-runner:latest
    volumes:
    - /var/run/docker.sock:/var/run/docker.sock

and of course, the runner isn't registered - and to do that, we need to

  1. pull the token out of the database in gitlab
  2. run register in the runner container

so instead of defining the deployment of my multi-container application in just docker-compose, I need to use some secondary means - in this case... docs?

export GL_TOKEN=$(docker-compose exec -u gitlab-psql gitlab sh -c 'psql -h /var/opt/gitlab/postgresql/ -d gitlabhq_production -t -A -c "SELECT runners_registration_token FROM application_settings ORDER BY id DESC LIMIT 1"')
docker-compose exec gitlab-runner gitlab-runner register -n \
  --url http://gitlab:9090/ \
  --registration-token ${GL_TOKEN} \
  --executor docker \
  --description "Docker Runner" \
  --docker-image "docker:latest" \
  --docker-volumes /var/run/docker.sock:/var/run/docker.sock \
  --docker-network-mode  "network-based-on-dirname-ew_default"

mmm, I might be able to hack up something, whereby I have another container that has the docker socket, and docker exec's

what's the bet there is a way ...

for example, I can add:

  gitlab-initializer:
    image: docker/compose:1.18.0
    restart: "no"
    volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - ./gitlab-compose.yml:/docker-compose.yml
    entrypoint: bash
    command: -c "sleep 200 && export GL_TOKEN=$(docker-compose -p sima-austral-deployment exec -T -u gitlab-psql gitlab sh -c 'psql -h /var/opt/gitlab/postgresql/ -d gitlabhq_production -t -A -c \"SELECT runners_registration_token FROM application_settings ORDER BY id DESC LIMIT 1\"') && docker-compose exec gitlab-runner gitlab-runner register -n --url http://gitlab:9090/ --registration-token ${GL_TOKEN} --executor docker --description \"Docker Runner\" --docker-image \"docker:latest\" --docker-volumes /var/run/docker.sock:/var/run/docker.sock --docker-network-mode  \"simaaustraldeployment_default\""

to my compose file - though I need some kind of loop/wait, as gitlab isn't ready straight away - sleep 200 might not be enough.

so - you can hack some kind of pattern like this directly in a docker-compose.yml - but personally, I'd much rather some cleaner support than this :)

dnephin commented 6 years ago

@SvenDowideit onrun already exists, it's entrypoint or cmd.

The entrypoint script for this image even provides a hook for you. $GITLAB_POST_RECONFIGURE_SCRIPT can be set to the path of a script that it will run after all the setup is complete (see /assets/wrapper in the image). Set the env variable to the path of your script that does the psql+register and you're all set.
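A minimal sketch of wiring that up in the Compose file, assuming the hook behaves as described (the script path here is illustrative):

services:
  gitlab:
    image: 'gitlab/gitlab-ce:latest'
    environment:
      # assumption: the image's wrapper runs this script after reconfigure completes
      GITLAB_POST_RECONFIGURE_SCRIPT: /scripts/post-reconfigure.sh
    volumes:
      - ./post-reconfigure.sh:/scripts/post-reconfigure.sh:ro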

Even if the image didn't provide this hook, it is something that can be added fairly easily by extending the image.

though I need some kind of loop/wait, as gitlab isn't ready straight away - sleep 200 might not be enough.

This would be necessary even with an "exec-after-start" option. Since the entrypoint script actually provides a hook I think it's probably not necessary with that solution.

SvenDowideit commented 6 years ago

nope, I (think) you've missed a part of the problem I'm showing:

in my case, I need access into both containers, not just one - so entrypoint / command does not give me this.

GL_TOKEN comes from the gitlab container, and is then used in the gitlab-runner container to register.

so the hack I'm doing, is using the docker/compose image to add a third container - this is not something you can modify one container's config/entrypoint/settings for, and is entirely a (trivial) example of a multi-container co-ordination that needs more.

I've been working on things to make them a little more magical - which basically means my initialisation container has some sleep loops, as it takes some time for gitlab to init itself.

TBH, I'm starting to feel that using a script, running in an init-container that uses the compose file itself and the docker/compose image, is the right way to hide this kind of complexity - for the non-production "try me out, and it'll just work" situations like this.

If I were to consider some weird syntactical sugar to help, perhaps I'd go for something like:

gitlab-initializer:
    image: docker/compose:1.18.0
    restart: "no"
    volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - ./gitlab-compose.yml:/docker-compose.yml
    entrypoint: ['/bin/sh']
    command: ['/init-gitlab.sh']
    file:
      path: /init-gitlab.sh
      content: |
            for i in $(seq 1 10); do
                GL_TOKEN=$(docker-compose -f gitlab-compose.yml -p sima-austral-deployment exec -T -u gitlab-psql gitlab sh -c 'psql -h /var/opt/gitlab/postgresql/ -d gitlabhq_production -t -A -c "SELECT runners_registration_token FROM application_settings ORDER BY id DESC LIMIT 1"')
                ERR=$?
                export GL_TOKEN
                echo "$i: token($ERR) == $GL_TOKEN"

                if [[ "${#GL_TOKEN}" == "20" ]]; then
                    break
                fi
                sleep 10
            done
            echo "GOT IT: token($ERR) == $GL_TOKEN"

            for i in $(seq 1 10); do
                if  docker-compose -f gitlab-compose.yml  -p sima-austral-deployment exec -T gitlab-runner \
                    gitlab-runner register -n \
                    --url http://gitlab:9090/ \
                    --registration-token ${GL_TOKEN} \
                    --executor docker \
                    --description "Docker Runner" \
                    --docker-image "docker:latest" \
                    --docker-volumes '/var/run/docker.sock:/var/run/docker.sock' \
                    --docker-network-mode  "simaaustraldeployment_default" ; then
                        echo "YAY"
                        break
                fi
                sleep 10
            done

i.e., like cloud-init: http://cloudinit.readthedocs.io/en/latest/topics/examples.html#writing-out-arbitrary-files

but when it comes down to it - we have a solution for co-ordinating complicated multi-container things from inside a docker-compose.yml.

dnephin commented 6 years ago

If you're able to set a predefined token, you could do it from an entrypoint script in gitlab-runner. Is there no way to set that ahead of time?

omeid commented 6 years ago

@dnephin The moment you mention script, you're off the mark by a light year and then some.

onrun is not the same as entrypoint or cmd.

The entrypoint/cmd is for configuring the executable that will run as the container's init/PID 1.

The idea mentioned in this and many related issues is about init scripts - not init in the context of booting, but application init scripts, think database setup.
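For contrast, the usual workaround today conflates those two concerns in a wrapper entrypoint - a sketch, with hypothetical script names:

#!/bin/sh
# wrapper-entrypoint.sh: run the application init, then hand off to the real process
/scripts/init-db.sh   # application init script - the thing "onrun" would own
exec "$@"             # exec the original command so it becomes the container's PID 1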

SvenDowideit commented 6 years ago

@dnephin it'd probably be more useful if you focused on the general problem-set, rather than trying to work around a specific container-set's issues.

From what I've seen, no - it's a generated secret. But in reality, this is not the only multi-container co-ordination requirement that even this small play system is likely to have - it's just the fastest one for me to prototype in public.

daqSam commented 6 years ago

How is it possible that we have been able to override entrypoint and command in a compose file since v1 (https://docs.docker.com/compose/compose-file/compose-file-v1/#entrypoint) and still don't have a directive such as onrun to run a command when the containers are up?

SvenDowideit commented 6 years ago

TBH, I don't really think onrun is plausible - Docker, or the orchestrator, doesn't know what "containers are all up" means - in one of my cases, the HEALTHCHECK will fail until after I do some extra "stuff" where I get info from one container and use it to kick off some other things in other containers.

And if I grok right, this means I basically need an Operator container, which contains code that detects when some part of the multi-container system is ready enough for it to do some of the job (rinse and repeat), until it's either completed its job and exits, or perhaps even monitors things and fixes them.

And this feels to me like a job that is best solved (in docker-compose) by a docker-compose container with code.

I'm probably going to play with how to then convert this operator into something that can deal with docker swarm stacks (due to other project needs).

I'm not entirely sure there is much syntactic sugar that could be added to docker-compose, unless it's something like marking a container as "this is an operator, give it magic abilities".

bagermen commented 6 years ago

It's clear that the developers do not want to listen to users. I'll look at some other tool... docker-compose is a big pain. I do not understand why you can't understand that the only useful thing that comes from docker-compose is the build tool... I spent a lot of time searching for HOW I can run a SIMPLE command to add permissions inside a container for the active user.

It seems that docker-compose is simply in a NOT DONE state...

SvenDowideit commented 6 years ago

I too want something that will onrun in my compose file

BUT, neither containers, nor compose have a way to know what onrun means. This is why the operator pattern exists, and why I made the examples in https://github.com/docker/compose/issues/1809#issuecomment-362126930

it is possible to do this today - in essence, you add an onrun service that waits until whatever other services are actually ready to interact with (in gitlab's case, that takes quite a bit of time), and then do whatever you need to do to co-ordinate things.
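A sketch of such an onrun service, along the lines of my earlier example (the app service name and the /ready.sh and /setup.sh scripts are placeholders):

version: "3.3"
services:
  onrun:
    image: docker/compose:1.18.0
    restart: "no"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./docker-compose.yml:/docker-compose.yml
    entrypoint: /bin/sh
    # poll until the app service is ready, then run the one-off co-ordination step
    command: -c "until docker-compose -f /docker-compose.yml exec -T app /ready.sh; do sleep 5; done && docker-compose -f /docker-compose.yml exec -T app /setup.sh"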

If there is something that doesn't work with that, please tell us, and we'll see if we can figure out something!

yosefrow commented 6 years ago

I too want something that will onrun in my compose file

BUT, neither containers, nor compose have a way to know what onrun means.

As I see it, onrun per service means "when the container's first process starts". In a large number of cases the container is only running one process anyway, as this is the recommended way of running containers.

The issue of cross-platform support was solved earlier, as the command can be completely OS agnostic through docker exec, in the same way that RUN does not have to mean a linux command in Dockerfile. https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-docker/manage-windows-dockerfile

MFQ commented 6 years ago

Still waiting for onrun feature

wongjiahau commented 6 years ago

I need this onrun feature too; I thought it was in this tool. Because of this missing feature I now need to maintain 2 scripts, man.

wongjiahau commented 6 years ago

Guys, what if I made a wrapper around docker-compose that allows this onrun feature? Would you guys use it?

reduardo7 commented 6 years ago

@wongjiahau may be something like this? https://github.com/docker/compose/issues/1809#issuecomment-348497289

wongjiahau commented 6 years ago

@reduardo7 Yes, I thought of wrapping it inside a script called docker-composei, with a docker-composei.yml which contains the onrun attribute.
Btw, docker-composei means docker-compose improved.

sscholle commented 6 years ago

The real solution is probably to build an 'Orchestrator' image that runs and manages (via bash scripts) the 'App Images' (possibly using docker) internally. Otherwise we will always be asking for more features from a tool that "isn't meant to do what we want it to do".

So we should even consider Docker within Docker...

usergoodvery commented 6 years ago

just to add my support for this proposed feature. onrun does make sense, but to broaden the potential utility and future-proof it a bit, perhaps someone needs to look at a broader 'onevent' architecture, of which onrun would be one event.

Given the prevailing direction for containers to be self-contained, one service per container, the container must be self-sufficient in terms of its operating-context awareness. What flows from that is that the compose file should be the medium for defining that context, not bolt-on scripts. Hard to argue against that, unless you are some self-absorbed zealot.

In my case, my redis containers load lua scripts after the redis server has started. In a normal, non-container environment I get systemd to run a post-startup script. Simple, and consistent with the systemd architecture. A similar principle should exist for compose, given its role in setting up the context for the containers to run.
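The closest I can get in compose's current vocabulary is an extra one-shot service - a sketch, with the script path as a placeholder:

services:
  redis:
    image: redis:alpine
    volumes:
      - ./scripts:/scripts:ro
  redis-init:
    image: redis:alpine
    restart: "no"
    depends_on:
      - redis
    volumes:
      - ./scripts:/scripts:ro
    entrypoint: /bin/sh
    # wait for the server, then load the lua scripts - exactly what a post-startup hook would do
    command: -c "until redis-cli -h redis ping; do sleep 1; done && redis-cli -h redis --eval /scripts/setup.lua"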

As general advice to the maintainers: please focus on proven operating principles, not personal preferences.

Skull0ne commented 6 years ago

so the solution (after reading all this thread) is to use a bash script to do the job... in that case I'll remove docker-compose (we can do everything with the docker cmd...)

thx devs for listening to the people who are using your things :)

webpolis commented 6 years ago

Seeing the amount of messages containing arguments and counter-arguments fighting simple propositions (such as having an onrun event), my first honest impression is that GitHub Issues has turned into a place where owners (project developers) showcase their egos and smartness by using their knowledge and technical jargon to oppose intelligent contributions from the users.

Please, let's make Open Source truly open.

v0lume commented 6 years ago

any updates on this feature? what is the problem?

dextermb commented 6 years ago

@v0lume I'm guessing you didn't bother to actually read the responses throughout this thread

T-vK commented 5 years ago

There still doesn't seem to be a solution... I'd like to share a hacky workaround though. By specifying version "2.1" in the docker-compose.yml you can abuse the healthcheck test to run additional code inside the image when it is started. Here is an example:

version: '2.1'
services:
    elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:5.4.3
        healthcheck:
            test: |
                curl -X PUT elasticsearch:9200/scheduled_actions -H "Content-Type: application/json" -d '{"settings":{"index":{"number_of_shards":'1',"number_of_replicas":'0'}}}' &&
                curl --silent --fail localhost:9200/_cat/health ||
                exit 1
            interval: 11s 
            timeout: 10s 
            retries: 3
        environment:
            - discovery.type=single-node
            - ES_JAVA_OPTS=-Xms1g -Xmx1g
            - xpack.security.enabled=false
    main:
        image: alpine
        depends_on:
            elasticsearch:
                condition: service_healthy

If the healthcheck test script you write exits with a code >= 1, it might get executed multiple times. The healthcheck of a service will only be executed if another service depends on it and specifies the service_healthy condition, as seen in the example.

Patrick-Ullrich commented 5 years ago

I like @T-vK's approach and have used it successfully before. But I'd like to share another ... hack:

# Run Docker container here

until echo | nc --send-only 127.0.0.1 <PORT_EXPOSED_BY_DOCKER>; do
  echo "Waiting for <YOUR_DOCKER> to start..."
  sleep 1
done

# Do your docker exec stuff here

web-ted commented 5 years ago

+1. I totally agree on this, because the feature is needed, and it is already implemented by other docker orchestrators like kubernetes, which has lifecycle hooks for containers, documented here.
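For reference, a postStart hook in a Kubernetes container spec looks roughly like this (the command is illustrative):

lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "-c", "ln -s /mnt/data/$DIR_NAME /app/data"]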

But let me contribute a use case that you cannot resolve with Dockerfiles.

Let's say you need to mount a volume at runtime and create a symbolic link from your container to the volume without previously knowing the exact name of the directory. I had a case where the dir name was dynamic depending on the environment I was deploying to, and I was passing it as a variable.

Sure, I found a workaround to solve this, and there is more than one. On the other hand, hooks would give me the flexibility and a better approach to dynamically make changes, without the urge to hack things or replace the Dockerfile.

sandor11 commented 5 years ago

I'm glad to have found this issue. I have been toying around with Docker and Docker Compose for a couple of years, and was seriously hoping to use it as a tool to start scaling a system. I will check back every year or two, but based on the attitude of the project maintainers, I will simply get by using either scripts or some other tool. Glad to not have invested much time and to have found this out early on.

Pro tip: if someone who's just starting to move their workflow across to this type of tool is already in need of what's described here, it might be worth re-thinking 'why' you're building this. Yes, you're successful, but it's because people used the thing in the first place, and you were probably super open to giving them what they needed.

All the best.

fescobar commented 5 years ago

I'm able to give you whatever you want (except my girlfriend) if this feature is implemented, and I will be the happiest person in the whole universe :)

damusix commented 5 years ago

just to add my support for this proposed feature. onrun does make sense, but to broaden the potential utility and future-proof it a bit, perhaps someone needs to look at a broader 'onevent' architecture, of which onrun would be one event.

That'd be nice.

To add to this, given the following:

services:
    web:
        image: node:8-alpine
        depends_on:
            - db
    db:
        image: postgres:alpine
        onrun: "echo hi"

would it be too much to add cross-event scripts?

    web:
        events:
            db_onrun: "connectAndMigrate.sh"

sshishov commented 5 years ago

In my opinion, adding this to docker-compose is straightforward and would help not only you, who are using the compose file and compose stack, but also other developers on your team.

We need to install and configure mkcert, for instance, on every environment to have trusted certificates. It is not part of the container or the Dockerfile, as it is not needed on stage/production. What is the proper approach here to install the tool, so that everybody who is using the compose file doesn't even need to know what is going on behind the scenes?

rm-rf-etc commented 5 years ago

Adding another use case:

Needed a wordpress instance. Wrote my docker-compose.yaml. docker-compose up – oops! Need to set the file permissions of the plugins directory... I can't find any other way to make it work; I have to set the permissions after the container is running, because I'm binding some files from the host, and it seems the only way to fix the fs permissions is by doing chown -Rf www-data.www-data /var/www/wp-content from inside the container. Write my own Dockerfile and build, just for this? That seems stupid to me.

Fortunately for me, the healthcheck hack provided above allowed me to implement this. I see other pages on the web talking about the issue of setting permissions on docker volumes, but the suggested solutions didn't work.

Glad to see that these gatekeepers, @dnephin, @aanand, @shin-, are getting a ton of heat for this. It really speaks volumes when an entire community screams as loudly as possible, and the core developers just sit back, hold their ground, and refuse to listen. So typical too. Let us count not just the number of thumbs up, but also the 34 users who replied to say that this is needed: 01) sshishov 02) fescobar 03) sandor11 04) web-ted 05) v0lume 06) webpolis 07) Skull0ne 08) usergoodvery 09) wongjiahau 10) MFQ 11) yosefrow 12) bagermen 13) daqSam 14) omeid 15) dantebarba 16) willyyang 17) SharpEdgeMarshall 18) lost-carrier 19) ghost 20) rodrigorodriguescosta 21) datatypevoid 22) dextermb 23) lekhnath 24) lucile-sticky 25) rav84 26) dopry 27) ahmet2mir 28) montera82 29) discordianfish 30) jasonrhaas 31) fferraris 32) hypergig 33) sunsided 34) sthulb

And the number who said no? A whopping 3: 01) dnephin 02) aanand 03) shin-

Hmmm... 34 to 3...

dopry commented 5 years ago

@rm-rf-etc good analytics... I don't even think @dnephin or @aanand are working on docker-compose anymore. With luck, Docker is planning to deprecate compose in favor of stacks and there won't be a team left here to complain about and we'll start seeing forward progress on the product again.

shin- commented 5 years ago

Adding another use case:

Needed a wordpress instance. Wrote my docker-compose.yaml. docker-compose up – oops! Need to set the file permissions of the plugins directory... I can't find any other way to make it work; I have to set the permissions after the container is running, because I'm binding some files from the host, and it seems the only way to fix the fs permissions is by doing chown -Rf www-data.www-data /var/www/wp-content from inside the container.

In this case, you could also set the user property in your Compose file
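A sketch of what I mean, assuming a stock wordpress service (uid/gid 33 is www-data in Debian-based images):

services:
  wordpress:
    image: wordpress
    # run the container process as www-data so the bind-mounted files are writable
    user: "33:33"
    volumes:
      - ./wp-content:/var/www/wp-content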

Write my own Dockerfile and build, just for this? That seems stupid to me.

Seems like you've formed a strong opinion; but realistically, there'd be nothing "stupid" about writing a Dockerfile to modify a base image to fit your needs. That's the original intent of all base images.

Fortunately for me, the healthcheck hack provided above allowed me to implement this. I see other pages on the web talking about the issue of setting permissions on docker volumes, but the suggested solutions didn't work.

Glad to see that these gatekeepers, @dnephin, @aanand, @shin-, are getting a ton of heat for this.

Yeah, good attitude mate. :D


@rm-rf-etc good analytics... I don't even think @dnephin or @aanand are working on docker-compose anymore.

Yeah, it's been a few years now - no need to keep pinging them on old issues.

With luck, Docker is planning to deprecate compose in favor of stacks and there won't be a team left here to complain about and we'll start seeing forward progress on the product again.

🙄

dextermb commented 5 years ago

@shin- but you just pinged it with that response

T-vK commented 5 years ago

I recently ran into this issue again, and even though it can be done as seen in my workaround, it only works if you specify version 2.1, which stinks imo.

It's just mind-boggling to me that the official stance seems to be that you should create your own docker images for everything.
To me this is literally like saying "If you want to change a setting in any program, you have to modify the source code and recompile it.".
Every time you add a new service or want to upgrade to a newer version of, for example, the MongoDB or MySQL Docker image, you'd have to write a new Dockerfile, build it, and potentially push it to your registry. This is a massive waste of time and resources compared to how it would be if you could just change image: mongo:3.0.2 to image: mongo:3.0.3 in your docker-compose.yml.
I'm not ranting about long build times; I'm ranting about the fact that you have to bother with Dockerfiles and docker build when all you want is to update or change a parameter of a service that is potentially not even meant to be used as a base image.

And the argument that every application should do one thing and one thing only really stinks too. This is not about implementing a completely new feature; this is just about passing another parameter through to docker. It also begs the question why docker run, docker build, docker exec, docker pull etc. are all part of the same application. That argument sounds kind of hypocritical now, doesn't it?

rm-rf-etc commented 5 years ago

@shin-, I followed your link and I don't see how the user property is relevant to setting the owner of a bind mounted directory. Seems to be related to ports.

Re: attitude: looks like people agree with me, so take it as strong feedback. Sorry if you don't like how I'm expressing this, but it really seems that user demands are being ignored, so what else do you expect?

McMatty commented 5 years ago

I came here hoping for functionality such as the onrun: being suggested. I am only two days into using compose, and to me a tool like this should have that functionality.

Going back to my docker files to update each one with a separate script for these features seems redundant. I merely want to inject a token from another container into an environment variable; my Dockerfile, which was flexible before, is now tightly coupled to the docker-compose.yml and to a solution for a single purpose.

fabiomolinar commented 5 years ago

Damn, I read the entire thread hoping to find the answer "ok guys, we finally realized that this is cool and we will implement it". Sad to see this didn't move forward. +1 to onrun!

dopry commented 5 years ago

@fabiomolinar, there is one sort of solution that we use extensively in our production swarms, but it's not quite as nice as having an event.

We use the following anchor

#### configure a service to run only a single instance until success
x-task: &task
  # for docker stack (not supported by compose)
  deploy:
    restart_policy:
      condition: on-failure
    replicas: 1
  # for compose (not supported by stack)
  restart: on-failure

to repeat tasks until they're successful. We create containers for migrations and setup tasks that have idempotent results and run them like this in our local compose and in our stacks.

The service which depends on the task needs to fail somewhat gracefully if the configuration work isn't complete. In most cases as long as you're okay with a few errors banging out to end users, this gives you an eventual consistency that will work well in most environments.

It also assumes your service containers can work with both pre- and post-task-completion states. In use cases like database migrations, dependent services should be able to work with both pre- and post-migration schemas. Obviously some thought must be put into development and deployment coordination, but that is a general fact of life for anyone who is doing rolling updates of services.

dopry commented 5 years ago

@fabiomolinar, here is an example of how we use this approach in our compose services...

#### configure a service to run only a single instance until success
x-task: &task
  # for docker stack (not supported by compose)
  deploy:
    restart_policy:
      condition: on-failure
    replicas: 1
  # for compose (not supported by stack)
  restart: on-failure

#### configure a service to always restart
x-service: &service
  # for docker stack (not supported by compose)
  deploy:
    restart_policy:
      condition: any
  # for compose (not supported by stack)
  restart: always

services: 
  accounts: &accounts
    <<: *service
    image: internal/django
    ports:
      - "9000"
    networks:
      - service
    environment:
      DATABASE_URL: "postgres://postgres-master:5432/accounts"
      REDIS_URL: "hiredis://redis:6379/"

  accounts-migrate:
    <<: *accounts
    <<: *task
    command: ./manage.py migrate --noinput

fabiomolinar commented 5 years ago

Thanks for pointing that out, @dopry. But my case was somewhat simpler: I needed to get my server running and then, only after it was up and running, do some deployment tasks. Today I found a way to do that with some small process management within one single CMD line. Imagine that the server and deploy processes are called server and deploy, respectively. I then used:

CMD set -m; server & deploy && fg %1

The line above turns on bash's monitor mode, starts the server process in the background, runs the deploy process, and finally brings the server process back to the foreground to avoid having Docker kill the container.
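Spelled out as an entrypoint script, the same idea looks like this (server and deploy stand in for the real commands):

#!/bin/sh
set -m      # turn on job control so fg is available
server &    # start the long-running server in the background
deploy      # run the one-off deployment tasks against the live server
fg %1       # bring the server back to the foreground so it remains the container's main process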

jeliasson commented 5 years ago

While we discuss this, does anyone have any tips on how to run a command in a container, or on the host, upon running docker-compose up?

I understand that running any command on the host would compromise the layers of security, but I just would like to rm a directory prior to, or during, the startup of a container. The directory is accessible on both the host and the container. I don't want to make a custom Docker image or have a script that first does the rm and then runs docker-compose.

Thanks!
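Edit: the closest thing I've found so far, in the spirit of the workarounds above, is a one-shot cleanup service (image and paths are placeholders; note that depends_on only orders startup, it doesn't wait for the cleanup to finish):

services:
  cleanup:
    image: alpine
    restart: "no"
    volumes:
      - ./shared:/shared
    command: rm -rf /shared/stale-dir
  app:
    image: myapp
    depends_on:
      - cleanup
    volumes:
      - ./shared:/shared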

dopry commented 5 years ago

@fabiomolinar, the approach you propose violates a few 12-factor app principles. If you're containerizing your infrastructure, I'd strongly recommend adhering closely to them.

Some problems that could arise from your approach:

  1. slow container start-up.
  2. when scaling a service with the container, deploy will run once for every instance, potentially leading to some interesting concurrency problems.
  3. it is harder to sort logs from the 'task' and the service for management and debugging.

I did find the approach I'm recommending counter-intuitive at first, but it has worked well in practice in our local development environments under docker-compose, docker swarms, and mesos/marathon clusters. It also effectively works around the lack of 'onrun'.

fabiomolinar commented 5 years ago

The approach I used was indeed very ugly. I used it for a while just to get my dev environment running, but I have already changed it to use entrypoint scripts and the at command to run scripts after the server is up and running. Now my container is running with the correct process as PID 1 and responding to all signals properly.
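Roughly, the entrypoint now looks like this - a sketch, assuming the image ships atd, with placeholder paths:

#!/bin/sh
atd                                                # start the at scheduler daemon
echo /scripts/after-start.sh | at now + 1 minute   # queue the post-start script
exec server                                        # the server becomes PID 1 and handles signals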

victor-perov commented 5 years ago

We still need this. I can't find a way to execute my database rollups after the container has successfully started, without doing it in a bunch of Makefiles.