Closed — @ahmet2mir closed this issue 9 years ago
I think these should be steps in the Dockerfile:

```dockerfile
FROM mongo:3.0.2
ADD data/mongodb/dumps/latest /data/mongodb/dumps/latest
RUN mongorestore -d database /data/mongodb/dumps/latest/database
```

That way you also get it cached when you rebuild.
Thanks @dnephin. Of course I can write a Dockerfile and use `build` instead of `image`, or I can use `docker exec`. MongoDB is just an example; you could have the same situation with MySQL and account creation, or with RabbitMQ and queue creation, etc.
An `onrun` option would permit flexibility in Compose orchestration: Compose would read the `onrun` list and run `docker exec` on each item.
The point is that putting commands to `docker exec` in `docker-compose.yml` is unnecessary when you can either do it in the Dockerfile or in the container's startup script, both of which will also make your container more useful when not being run with Compose.

Alternatively, start your app with a shell script or Makefile that runs the appropriate `docker` and `docker-compose` commands.
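For what it's worth, the startup-script suggestion above can be sketched with a small run-once guard. Everything here is illustrative: the marker path, the helper name, and the stand-in `echo` are assumptions; a real entrypoint would call the actual restore command and finish with `exec "$@"` to hand off to the image's CMD.

```sh
#!/bin/sh
# Hypothetical run-once guard an image's entrypoint could use: the
# provisioning command executes only if its marker file is absent.
set -e

run_once() {
  marker="$1"; shift
  if [ -f "$marker" ]; then
    return 0          # already provisioned on an earlier start
  fi
  "$@"                # one-time provisioning step (e.g. a mongorestore)
  touch "$marker"
}

# Demo with a harmless command standing in for the real restore:
run_once /tmp/demo.initialized echo "provisioning..."
run_once /tmp/demo.initialized echo "provisioning..."   # no-op this time
rm -f /tmp/demo.initialized
```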
The functionality isn't worth adding to Compose unless it would add significant value over doing either of those, and I don't think it would for the use cases you've cited.
So, to manage my Docker setup, you suggest I use a script or a Makefile. Then why was Compose created? We can already manage, scale, etc. containers with a script or a Dockerfile.
Ok, I'll take this example; it's what I use to deploy my application's testing environment in the CI process.
```yaml
rabbitmq:
  image: rabbitmq:3.5.1-management
  environment:
    RABBITMQ_NODENAME: rabbit
  hostname: rabbitmq
  domainname: domain.lan
  volumes:
    - /data/rabbitmq/db:/var/lib/rabbitmq
  ports:
    - "5672:5672"
    - "15672:15672"
    - "25672:25672"
    - "4369:4369"

mongodb:
  image: mongo:3.0.2
  hostname: mongo
  domainname: domain.lan
  volumes:
    - /data/mongodb:/data
  ports:
    - "27017:27017"

appmaster:
  image: appmaster
  hostname: master
  domainname: domain.lan
  environment:
    ...
  ports:
    - "80:80"
    - "8080:8080"
  links:
    - mongodb
    - rabbitmq

celery:
  image: celery
  hostname: celery
  domainname: domain.lan
  environment:
    ...
  links:
    - rabbitmq
```
After the containers start, I must provision MongoDB and manage queues and accounts in RabbitMQ. What I'm doing today is a script like:
```bash
#!/bin/bash
PROJECT=appmaster
docker-compose -f appmaster.yml -p appmaster up -d
docker exec appmaster_rabbitmq_1 rabbitmqctl add_user user password
docker exec appmaster_rabbitmq_1 rabbitmqctl add_vhost rabbitmq.domain.lan
docker exec appmaster_rabbitmq_1 rabbitmqctl set_permissions -p rabbitmq.domain.lan password ".*" ".*" ".*"
docker exec appmaster_mongodb_1 mongodump --host mongo-prd.domain.lan --port 27017 --out /data/mongodb/dumps/latest
docker exec appmaster_mongodb_1 mongorestore -d database /data/mongodb/dumps/latest/database
```
With an `onrun` instruction I could directly run `docker-compose -f appmaster.yml -p appmaster up -d`, and the YAML file becomes more readable:
```yaml
rabbitmq:
  ...
  onrun:
    - rabbitmqctl add_user user password
    - rabbitmqctl add_vhost rabbitmq.domain.lan
    - rabbitmqctl set_permissions -p rabbitmq.domain.lan password ".*" ".*" ".*"

mongodb:
  ...
  onrun:
    - mongodump --host mongo-prd.domain.lan --port 27017 --out /data/mongodb/dumps/latest
    - mongorestore -d database /data/mongodb/dumps/latest/database
```
This would be rather useful and solves a use case.
:+1:
It will make using `docker-compose` more viable for gated tests as part of a CD pipeline :+1:
This is a duplicate of #877, #1341, #468 (and a few others).
I think the right way to support this is #1510 and allow external tools to perform operations when you hit the event you want.
Closing as a duplicate
This would be very useful. I don't understand the argument of "oh you could do this with a bash script". Of course we could do it with a bash script. I could also do everything that docker-compose does with a bash script. But the point is that there is one single YAML file that controls your test environment, and it can be spun up with a simple `docker-compose up` command.
It is not the remit of Compose to do everything that could be done with a shell script or Makefile - we have to draw a line somewhere to strike a balance between usefulness and avoiding bloat.
Furthermore, one important property of the Compose file is that it's pretty portable across machines - even Mac, Linux and Windows machines. If we enable people to put arbitrary shell commands in the Compose file, they're going to get a lot less portable.
@aanand To be fair, being able to execute a `docker exec` does not automatically imply cross-platform incompatibility.
Apologies - I misread this issue as being about executing commands on the host machine. Still, my first point stands.
I understand your point, @aanand. It doesn't seem out of scope to me, since `docker-compose` already does a lot of the same things the regular `docker` engine does, like `command`, `expose`, `ports`, `build`, etc. Adding the `exec` functionality would add more power to `docker-compose`, making it a true one-stop shop for setting up dev environments.
@aanand The main problem for many devs and CI pipelines is having data very close to the production environment, like a dump from a DB. I created this ticket a year ago and nothing has moved in Docker Compose.
So you suggest a Makefile or a bash script just to run some `exec` commands: https://github.com/docker/compose/issues/1809#issuecomment-128073224

What I originally suggested is `onrun` (or `oncreate`), which keeps idempotency: it runs only at the first start. If the container is stopped or paused, the next start will not run `onrun` (or `oncreate`) again.
In the end, in my git repository I will have a compose file, a Dockerfile, and a Makefile with idempotency management (maybe the Makefile could create a statefile). Genius!
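That statefile idea might look roughly like this on the host side. This is a sketch only: the `docker-compose`/`docker exec` calls from the earlier script are replaced by `echo` stubs so the guard logic itself is runnable, and the statefile path is an assumption.

```sh
#!/bin/sh
# Sketch of a statefile-guarded provisioning wrapper. In a real script,
# the echo stubs in provision() would be the actual docker-compose and
# docker exec calls from the earlier example.
set -e

STATEFILE="${STATEFILE:-/tmp/appmaster.provisioned}"

provision() {
  echo "docker-compose -f appmaster.yml -p appmaster up -d"
  echo "docker exec appmaster_rabbitmq_1 rabbitmqctl add_user user password"
  echo "docker exec appmaster_mongodb_1 mongorestore -d database /data/mongodb/dumps/latest/database"
}

ensure_provisioned() {
  if [ -f "$STATEFILE" ]; then
    echo "already provisioned, skipping"
  else
    provision
    touch "$STATEFILE"
  fi
}

ensure_provisioned   # first run: provisions and records the state
ensure_provisioned   # second run: skips
rm -f "$STATEFILE"   # cleanup so the demo stays repeatable
```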
There's a big difference between `command`, `expose`, etc. and `exec`. The first group are container options; `exec` is a command/API endpoint. It's a separate function, not an option to the create-container function.
There are already a couple of ways to accomplish this with Compose (https://github.com/docker/compose/issues/1809#issuecomment-128059030). `onrun` already exists: it is `command`.
Regarding the specific problem of dumping or loading data from a database, those are more "workflow" or "build automation" type tasks, that are generally done in a Makefile. I've been prototyping a tool for exactly those use-cases called dobi, which runs all tasks in containers. It also integrates very well with Compose. You might be interested in trying it out if you aren't happy with Makefiles. I'm working on an example of a database init/load use case.
@dnephin `onrun` is not a simple `command`, because you miss the idempotency.

Let's imagine: `create` runs on container creation and will never be executed again (dump & restore).
```yaml
exec:
  create:
    - echo baby
  destroy:
    - echo keny
  start:
    - echo start
  stop:
    - echo bye
```
If you need more examples:
Thanks for dobi, but if you need to create a tool to enhance Compose, then Compose is bad and it's better to use a more powerful tool.
> but if you need to create a tool to enhance compose, compose is bad and it's better to use a more powerful tool.
That's like saying "if you need applications to enhance your operating system, your OS is bad". No one tool should do everything. The unix philosophy is do one thing, and do it well. That is what we're doing here. Compose does its one thing "orchestrate containers for running an application". It is not a build automation tool.
> That's like saying "if you need applications to enhance your operating system, your OS is bad". No one tool should do everything. The unix philosophy is do one thing, and do it well. That is what we're doing here.
Wow, I think we've reached peak bad faith.
> Unfortunately, a simple re-usable component is not how things are playing out. Docker now is building tools for launching cloud servers, systems for clustering, and a wide range of functions: building images, running images, uploading, downloading, and eventually even overlay networking, all compiled into one monolithic binary running primarily as root on your server. The standard container manifesto was removed. We should stop talking about Docker containers, and start talking about the Docker Platform. It is not becoming the simple composable building block we had envisioned.
So can you guarantee that we will never see "docker compose" written in Go inside the monolithic Docker binary, to keep the Unix philosophy? https://www.orchardup.com/blog/orchard-is-joining-docker
> To continue towards that original goal, we're joining Docker. Among other things, we're going to keep working on making Docker the best development experience you've ever seen – both with Fig, and by incorporating the best parts of Fig into Docker itself.
So, in short, there is no way to do things like loading fixtures with Compose..? I have to say I'm surprised. The official way is to add fixture loading to my production container? Or to write a shell script around my compose file? In the latter case I could also just execute `docker run` as I did before.
@discordianfish, If, somehow, someone would wake up to the fact that CI/CD engineers need to be able to handle life cycle events and orchestration at least at a very basic level, then who knows docker/docker-compose may actually make its way out of local development pipelines and testing infrastructure and find a place in more production environments. I'm hopeful whoever is working on the stacks will address these issues, but I won't hold my breath.
After all, what needs to be done at build time may differ from what is needed at runtime, and what's needed at runtime often varies by deployment environment...

It is kind of annoying to make my external scripts aware of whether an `up` is going to create or start containers...
And those are things some lifecycle hooks + commands + environment variables could help with.
You see it in service management frameworks and other orchestration tools... why not in docker-compose?
You might be interested in https://github.com/dnephin/dobi , which is a tool I've been working on that was designed for those workflows.
@dnephin Stop spamming this issue with your tools. We've seen your comment before and the answer is the same. Makefile/bash is probably better than the nth "my tool enhances Docker".
Thank you for your constructive comment. I didn't realize that I had already mentioned dobi on this thread 8 months ago.
If you're happy with Makefile/bash that's great! I'm glad your problem has been solved.
Added a comment related to this topic here: https://github.com/docker/compose/issues/1341#issuecomment-295300246
@dnephin for this one, my comment can be applied:
So sad that this issue has been closed because of some resistance to evolution :disappointed:
The greatest value of having Docker Compose is standardization. That's the point. If we could "just" write a .sh file or whatever to do the job without using Docker Compose, why does Docker Compose exist? :confused:
We can understand that it is a big job, as @shin- said:

> it's unfortunately too much of a burden to support at that stage of the project
:heart:
But you can't just say "Make a script", which means "Hey, that's too hard, we're not gonna make it".

If it's hard to do, just say "Your idea is interesting, and it fills some needs, but it's really difficult to do and we don't have the resources to do it at this time... Maybe you could develop it and open a pull request" or something like that :bulb:
In #1341, I "only" see a way to write in `docker-compose.yml` commands like `npm install` that would be run before or after some events (like container creation), as you would do with `docker exec <container id> npm install`, for example.
I have a custom NodeJS image and I want to run `npm install` in the container created from it, with a `docker-compose up --build`. My problem is that the application code is not added to the container; it's mounted into it with a volume, defined in `docker-compose.yml`:
```yaml
custom-node:
  build: ../my_app-node/
  tty: true
  #command: bash -c "npm install && node"
  volumes:
    - /var/www/my_app:/usr/share/nginx/html/my_app
```
so I can't run `npm install` in the Dockerfile, because it needs the application code to check dependencies. I described the behavior here: http://stackoverflow.com/questions/43498098/what-is-the-order-of-events-in-docker-compose
To run `npm install`, I have to use a workaround, the `command` statement:

```yaml
command: bash -c "npm install && node"
```

which is not really clean :disappointed: and which I can't use on Alpine versions (they don't have Bash installed).
I thought that Docker Compose would provide a way to run exec commands on containers, e.g.:
```yaml
custom-node:
  build: ../my_app-node/
  tty: true
  command: node
  volumes:
    - /var/www/my_app:/usr/share/nginx/html/my_app
  exec:
    - npm install
```
But it's not, and I think it's really missing!
I expected Compose to be designed for testing, but I'm probably wrong; it's intended more for local development, etc. I ran into several other rough edges, like orphaned containers, the unclear relation between project name and path and how it's used to identify ownership, what happens if you have multiple compose files in the same directory, and so on. So, all in all, it doesn't seem like a good fit for CI. Instead, I'm planning to reuse my production k8s manifests in CI by running kubelet standalone. This will also require lots of glue, but at least this way I can use the same declarations for dev, test, and prod.
@lucile-sticky You can use `sh -c` in Alpine.

It sounds like what you want is "build automation", which is not the role of docker-compose. Have you looked at dobi?
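Applied to the earlier example, the Alpine-friendly variant of that workaround would just swap the shell (same hypothetical service definition as above, with `sh -c` in place of `bash -c`):

```yaml
custom-node:
  build: ../my_app-node/
  tty: true
  command: sh -c "npm install && node"
  volumes:
    - /var/www/my_app:/usr/share/nginx/html/my_app
```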
Two questions:
This feature is highly needed!
@lucile-sticky

> Why is this not the role of Docker Compose?

Because the role of Compose is clearly defined and does not include those functions.

> Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application's services. Then, using a single command, you create and start all the services from your configuration.
If the point is to have only one tool to rule them all, why would I use another tool to complete a task that Docker Compose is not able to do?
We don't want to be the one tool to rule them all. We follow UNIX philosophy and believe in "mak[ing] each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features." It's okay to disagree with that philosophy, but that's how we at Docker develop software.
I created this issue in August 2015. Every year someone adds a comment, and we loop through the same questions with the same answers (and, for sure, you'll see @dnephin making an ad for his tool).
@shin-

You can't separate "build" and "provision" in orchestration tools. For example, maybe you know some of them:

When you configure a service, you have to provision it. If I deploy a Tomcat, I have to provision it with a WAR; if I create a DB, I have to inject data; etc. This holds no matter how the container must be started (let the image maintainer manage that). The main purpose of a "provisioner" in Compose's case is to avoid misunderstanding between "what starts my container" and "what provisions it".

As your quote from the Compose doc says: "With Compose, you use a Compose file to configure your application's services. Then, using a single command, you create and start all the services from your configuration."
Unix philosophy? Let me laugh. I'll point you to the same answer I gave in this issue: https://github.com/docker/compose/issues/1809#issuecomment-237195021. Let's see how "moby" evolves within the Unix philosophy.
@shin- docker-compose doesn't adhere to the Unix philosophy by any stretch of the imagination. If it did, there would be discrete commands for each of build, up, rm, start, stop, etc., and they would each have a usable stdin, stdout, and stderr that behaved consistently. (Says the Unix sysadmin with over 20 years of experience, including System V, HP-UX, AIX, Solaris, and Linux.)
Let's go back to the overview for Compose:

> Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application's services. Then, using a single command, you create and start all the services from your configuration.
Ultimately, docker-compose is an orchestration tool for managing a group of services based on containers created from Docker images. Its primary functions are to create, start, stop, scale, and remove services defined in a docker-compose.yml file.

Many services require additional commands to be run during each of these lifecycle transitions. Scaling database clusters often requires joining or removing members from a cluster. Scaling web applications often requires notifying a load balancer that you have added or removed a member. Some paranoid sysadmins like to forcibly flush their database logs and create checkpoints when shutting down their databases.

Taking action on state transitions is necessary for most orchestration tools. You'll find it in AWS's tools, Google's tools, Foreman, Chef, etc. Most of the things that live in this orchestration space have some sort of lifecycle hook.

I think this is firmly within the purview of docker-compose, given that it is an orchestration tool and it is aware of the state changes. I don't feel events or external scripts fit the use case: they're not idempotent, and it is much harder to launch a 'second' service next to Compose to follow the events. Whether the hooks run inside the container or outside the container is an implementation detail.
At the end of the day, there is a real need being expressed by users of docker-compose, and @aanand, @dnephin, and @shin- seem to be dismissing it. It would be nice to see this included on a roadmap.

This type of functionality is currently blocking my adoption of Docker in my testing and production deployments. I would really like to see this get addressed in some fashion rather than dismissed.
I think this will be very useful!
For me the problem is: when there is an app container A running service 'a' that depends on a DB container B running service 'b', container A fails unless 'b' is set up. I would prefer to use Docker Hub images instead of rewriting my own Dockerfiles, but this means A fails and no container is created. The only other option is to
I have exactly the same use case as @lucile-sticky.
@lekhnath For my case, I solved it by editing the `command` option in my `docker-compose.yml`:

```yaml
command: bash -c "npm install && node"
```

But it's soooo ugly T-T
@lucile-sticky It should be noted that this overrules any command set in the `Dockerfile` of the container, though. I worked around this by mounting a custom shell script using `volumes`, making the `command` in my Docker Compose file run that script, and including in it the original `CMD` from the `Dockerfile`.
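That mounted-script workaround might be sketched like this (the `startup.sh` name and its mount path are hypothetical; the service definition reuses the earlier NodeJS example):

```yaml
custom-node:
  build: ../my_app-node/
  tty: true
  command: sh /usr/local/bin/startup.sh
  volumes:
    - /var/www/my_app:/usr/share/nginx/html/my_app
    - ./startup.sh:/usr/local/bin/startup.sh
```

Here `startup.sh` would contain something like `npm install && exec node`, with `node` being the `CMD` the `Dockerfile` originally declared.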
Why is this issue closed? "Write a bash script" or "use this tool I wrote" is not a valid reason to close it.

This is a very helpful and important feature, required in a lot of use cases where Compose is used.

@dnephin Do you think running init scripts is outside the scope of container-based application deployments? After all, Compose is about "defin[ing] and run[ning] multi-container applications with Docker".
Has anybody looked at dobi? If you haven't, please do :)
Guessing nothing ever happened with this. I'd love to see some sort of functionality within the `docker-compose` file where we could write out when a command should be executed, such as in the example @ahmet2mir gave.
Very sad to see this feature not being implemented.
Please implement this feature. I need to automatically install files after `docker-compose up`, as the folders the files must be copied into are created after the containers are initialized. Thanks.

It is incredible that this feature has not been implemented yet!
This is very poor form @dnephin. You have inhibited the implementation of such a highly sought after feature, and you're not even willing to continue the conversation.
I am sorry, I couldn't think of milder language to put it: the lack of this feature has added friction to our workflow, as it has for many, many other developers and teams, and you have been a hindrance to solving this problem.
Oh, let's make it the unix-way then. Just (multiplex, then) pipe `docker-compose up`'s stdin to each container's `CMD`?
So that, with a YAML file like this:

```yaml
services:
  node:
    command: sh -
```

this would work: `cat provision.sh | docker-compose up`

Containers are for executing things; I don't see a better use of stdin than passing commands along.
An alternative could be:

```yaml
services:
  node:
    localscript: provision.sh
```

Although a bit shell-centric, that would solve 99% of provisioning use cases.
Even though there are valid use cases, and plenty of upvotes on this... it's still apparently been denied. Shame as I, like many others here, would find this extremely useful.
Adding my +1 to the large stack of existing +1s.
...another +1 here!
I think that if there is such demand for this feature, it should be implemented. Tools are here to help us reach our objectives, and we should mould them to help us, not to make our lives harder. I understand the philosophy someone may adhere to, but adding some kind of "hook commands" should not be a problem.
+1 +1
While I wait for this feature, I use the following script to perform a similar task:

`docker-start.sh`:
```bash
#!/usr/bin/env bash
set -e
set -x

docker-compose up -d
sleep 5

# #Fix1: Fix "iptable service restart" error
echo 'Fix "iptable service restart" error'
echo 'https://github.com/moby/moby/issues/16137#issuecomment-160505686'
for container_id in $(docker ps --filter='ancestor=reduardo7/my-image' -q)
do
  docker exec "$container_id" sh -c 'iptables-save > /etc/sysconfig/iptables'
done
# End #Fix1

echo Done
```
Hi,

It would be very helpful to have something like "onrun" in the YAML, to be able to run commands after the run. Similar to https://github.com/docker/docker/issues/8860

After MongoDB starts, it would dump db2dump.domain.lan and restore it. When I stop and then start the container, the onrun part would not be executed, to preserve idempotency.
EDIT, 15 June 2020: 5 years later, Compose wants to "standardize" specifications; please check https://github.com/compose-spec/compose-spec/issues/84