Closed by ghost 3 years ago
What "anti-pattern" argument? There is no argument made. It's just "no, because anti-pattern" without any logic behind it, just an assertion with nothing backing it up. It's as if the people saying it thought up the worst-case scenario in their heads, decided that scenario was an anti-pattern, and then dismissed everything as such without even writing down their so-called anti-pattern scenario.
It's just elitism. Many comments here have pointed out how ridiculous the reasoning for not adding this is, and they are all ignored.
Common sense and logic don't care about your feelings or elitism. Or your made-up anti-patterns.
Yeah, @robclancy, please keep it civil FFS. I want this feature, but if all you're gonna do is talk shit at the maintainers, go vent on reddit please. @funkyfuture 's earlier correction is completely warranted.
In the end, it's @shin-'s past efforts with docker-py that would allow you to implement an alternative to docker-compose.
I obviously don't want a fork of docker-compose, if that's what you're suggesting, especially for such a minute enhancement. That's the only other way this is going to happen, and that would be bad for the community.
If someone submitted a PR, would it actually be considered? Or is this something the docker-compose team has just firmly decided they won't accept? Would something along the lines of adding a config section that's compatible with docker stack configs be something you would consider?
This has gone off the rails... 'anti-pattern' without explanation turns 'anti-pattern' into a definition so broad that it is impossible to argue against. There is also no clear indication of which side the 'anti-pattern' sits on: docker or docker-compose.
A clear definition of the anti-pattern responses would be fantastic and much appreciated.
The community is going to continue to grow, so an established set of definitions needs to exist.
I want to use it to copy artifacts generated by a Jenkins pipeline running on a docker-compose stack. The container name can be random, so I can't use `docker cp` directly. Today I must use:

```
docker cp $(docker-compose -f docker-compose.development.ci.yml ps -q test):/app/tests_output ./tests_output
```
Is this different from

```yaml
volumes:
  - ./folder_on_host/:/folder_in_container/
```

? I am able to copy files from host to container (the equivalent of COPY) this way in my compose file.
I am trying to do the same. I have a folder with a CSV file and I would like to supply it to Logstash. How can I do that, and which folder in the container should it go to? At the moment I have something like this: `./path/to/storage:/usr/share/logstash/data:ro`
Any suggestions would be helpful.
@shin- This ticket is now 1.5 years old. When 160 people tell you you're wrong - you probably are.
What else do you need to convince you that this should be implemented?
@isapir, the companies that don't listen to their customers tend to go out of business rather soon. So I guess we should see some production-ready docker alternatives in the near future.
> @shin- This ticket is now 1.5 years old. When 160 people tell you you're wrong - you probably are.
😆 🤣 💯 🥇 😲 😮
I'm not a maintainer anymore. Please stop @-ing me on things I no longer have any control over.
@sfuerte There is a little project named Kubernetes that has already replaced Docker-Compose. I wonder if that would have happened had the attitude towards user feedback been more positive.
We need a buzzword to counter their buzzwords. It's all they can deal with.
This feature would totally be pro-pattern. That should do it. The difference is that even though I made that stupid thing up, there are many comments in this issue showing the advantages of this in ways that are clearly common use cases. And there isn't a single demonstrated instance of an anti-pattern.
@shin- you get tagged in this because you started this bullshit antipattern crap with no basis in reality. So stop crying about something that you caused.
k have fun
My case is: I think the easiest way to solve this is to have one compose file for dev and one compose file for production.
The problem is that I can specify `volumes` in the compose file, but I can't specify `copy` in the compose file.
Is anybody in the same situation as me? Am I missing something?
@shin- is this an anti-pattern? How would you go about solving this issue?
@hems, in a perfect world, you want your application to be deployed as a standalone docker image. So if you're writing an application, the source code that you intend to deploy should probably be part of the Dockerfile, so the image contains your entire application. So in the Dockerfile, if you wanted your source in /var/www you would put:

```dockerfile
COPY my-app-src /var/www
```

Your source isn't environment specific, so it just belongs in the docker image. Easy.
Most of us want to include an environment specific config file into the containers that makes an existing image work well with a particular docker-compose configuration. And we want to be able to do this without making a volume for a small file, or rolling a new image.
Can someone from the docker-compose team please just take a serious, impartial look at this and draw a final verdict (hopefully one that ignores all the immature people)? This issue's been open forever. The result is important, but personally I'm tired of getting notifications.
> COPY my-app-src /var/www

That's what I'm saying: in development I want to use my docker-compose file to mount VOLUMES into the images, and during the production build I want to COPY files into the images. Hence why I think we should be able to both COPY and mount VOLUMES using the docker-compose file, so I can have one compose file for dev and one for production builds.
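For what it's worth, the dev/prod split described above can be approximated today with an override file. This is just a sketch; the service name, paths, and file names are made up:

```yaml
# docker-compose.yml -- shared base; the image COPYs the source at build time
services:
  app:
    build: .
---
# docker-compose.dev.yml -- dev-only override that mounts the live source
# over the baked-in copy; combine the two files with:
#   docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
services:
  app:
    volumes:
      - ./my-app-src:/var/www
```

The production deployment uses only the base file, so the baked-in COPY wins there, while dev gets live-mounted source.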
I work on the team that maintains Compose and am happy to jump into this discussion. To start I'll outline how we see the responsibilities of Dockerfiles and Compose files.
Dockerfiles are the recipe for building images and should add all the binaries/other files you need to make your service work. There are a couple of exceptions to this: secrets (i.e.: credentials), configs (i.e.: configuration files), and application state data (e.g.: your database data). Note that secrets and configs are read only.
Compose files are used to describe how a set of services are deployed and interact. The Compose format is used not only for a single engine (i.e.: docker-compose) but also for orchestrated environments like Swarm and Kubernetes. The goal of the Compose format is to make it easy to write an application and test it locally, then deploy it to an orchestrated environment with little or no changes. This goal limits what we can change in the format because of fundamental differences like how each environment handles volumes and data storage.
Cutting up the responsibilities of the Dockerfile and Compose file like this gives us a good separation of concerns: What's in each container image (Dockerfile), how the services are deployed and interact (Compose file).
I'll now run through each of the exceptions to what you store in an image. For secrets, you do not want these baked into images as they could be stolen and because they may change over time. Docker Secrets are used to solve for this. They work slightly differently depending on which environment you deploy to, but essentially the idea is that you can store credentials in a file that will be mounted read-only to a tmpfs directory in the container at runtime. Note that this directory will always be /run/secrets/ and the file will be named after the secret. Secrets are supported on Swarm, engine only (docker-compose), and Kubernetes.
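For reference, file-based secrets look roughly like this in a Compose file (a sketch; the secret name and file path are made up):

```yaml
version: "3.7"
services:
  db:
    image: postgres
    secrets:
      - db_password        # mounted read-only at /run/secrets/db_password
secrets:
  db_password:
    file: ./db_password.txt   # lives on the host, never baked into the image
```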
For configuration files or bootstrapping data, there is Docker Configs. These work similarly to secrets but can be mounted anywhere. They are supported by Swarm and Kubernetes, but not by docker-compose. I believe that we should add support for these, and it would help with some of the use cases listed in this issue.
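In a Swarm stack file, configs look roughly like this (a sketch; names and paths are hypothetical, and as noted, plain docker-compose does not support this yet):

```yaml
version: "3.7"
services:
  web:
    image: nginx
    configs:
      - source: site_conf
        target: /etc/nginx/conf.d/default.conf   # unlike secrets, configs can be mounted anywhere
configs:
  site_conf:
    file: ./nginx.conf
```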
Finally there is application state data which needs to be stored externally. I won't dive into this as it's not related to this issue.
With that framing, I can answer a couple of questions:

- Will we add a `copy` field to the Compose format? No, I don't think we will, as it doesn't make sense in orchestrated environments.
- Will we add `configs` support to docker-compose? Yes, I think that we should.
- Will we add `docker-compose cp`? Maybe, I'm not sure about this yet. It would essentially be an alias for `docker container cp`.

Given that, there are a couple of tools that can be used here:
I think those tools solve all the problems raised in this thread.
This thread is quite heated. Please remember that there is a real live person behind each GitHub handle and that they're probably trying to do their best (even if their frustration is showing). We're all passionate about Compose and want the project to continue thriving.
> Will we add a `docker-compose cp`? Maybe, I'm not sure about this yet.

I'd find that a helpful convenience, like `docker-compose exec`.
@chris-crone Amazing response, thank you!
I know I don't speak for everyone, but I get the impression that `configs` support satisfies the vast majority of the interest in here. Shall an issue be opened for this?
And thanks for offering some alternative approaches. I didn't know about multi-stage builds until now.
> I get the impression that `configs` support satisfies the vast majority of the interest in here.

I doubt this, as I suspect that the majority here is not using Swarm, and afaik the `configs` functionality requires that.
Yes, currently Swarm is required, but from @chris-crone's comment ...
> These are supported by Swarm, and Kubernetes, but not by docker-compose. I believe that we should add support for these and it would help with some of the use cases listed in this issue.
... I'm reading that this can be implemented in docker-compose (sans Swarm)
> The goal of the Compose format is to make it easy to write an application and test it locally, then deploy it to an orchestrated environment with little or no changes.
In complex apps we may have quite a few configuration files that need tweaking on the fly. Right now the most efficient (time- and cost-wise) way of doing that is to fill up the volumes key (because no sane person is going to create a different image while testing multiple configurations... unless they have a boss that just loves spending money on dev hours).
Swarm and configs are not really going to answer several of the use cases listed. "Separation of concerns" is also not applicable, as Compose already does what you can do in docker, but simplifies it. A wrapper isn't separation... we're just asking you to extend it a bit more...
https://github.com/docker/compose/issues/6643
Get hacky with it.. extend volume functionality where every file under the new key is dynamically linked to a singular volume and mapped to their respective internal paths...
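Concretely, the volumes-key workaround being described looks something like this (the paths and service name are made up):

```yaml
services:
  app:
    image: my-app                                  # hypothetical image
    volumes:
      # one bind mount per config file under test -- no rebuild needed,
      # but each file needs its own entry
      - ./conf/app.yml:/etc/app/app.yml:ro
      - ./conf/logging.yml:/etc/app/logging.yml:ro
```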
I think there are two scenarios here that are perfectly valid. One is about development environments: people create flexible environments with source code mounted into their images. The source code evolves as development occurs, and you cannot rebuild the image constantly or you just waste enormous amounts of time. That's my scenario exactly, and I can see that it applies to a lot of other people.
The second one is about production images, where you bake your source code into your image (in case you are working with non-compiled scripts), or you just compile your application and copy it into the final image. At that point, the application becomes extremely portable.
I think everyone understands that! The question is: did the docker-compose devs take the time to read through the cases and understand the needs? There are no anti-patterns here in theory, just devs that have a need and would like to be respected.
We love docker, docker-compose and the whole ecosystem; we use it because we love it, and because we use it, you have jobs (at least some of you are paid for it, I hope).
Something I learned in the last few years that I like to bring up here and there is the following, and it applies very well to this scenario:
It's not what your software does that matters, it's what your users do with it that matters.
Cheers and happy continuity!
I want to spin up a docker Tomcat environment to run my app from a .war which is not named ROOT.war. To do this, I have to copy it to Tomcat's webapps dir and rename it to ROOT.war so that it will run on the currently bound ports 8005/9. Anything else fails due to rebinding issues on the ports, with errors about 'illegal access'. These are ephemeral test builds, so it can't go in the Dockerfile. This is why I want it in docker-compose.
@washtubs
> I know I don't speak for everyone, but I get the impression that configs support satisfies the vast majority of the interest in here. Shall an issue be opened for this?
If there isn't an issue already for this please create one and link it here. I've added something in our private team tracker.
@washtubs @funkyfuture
> ... I'm reading that this can be implemented in docker-compose (sans Swarm)
We already have rudimentary secret support and configs could be implemented in a similar way.
Definitely a missing feature. The only "anti-pattern" is having to work around the fact that this is hard to do by other means, for example by changing the entrypoint script of the Dockerfile or bind-mounting files into the container.
What you want is a container that is built once (preferably officially) and is configurable for the use case at the point of use, i.e. docker-compose.
As far as I can see, what the docker folks fail to realise is that the Dockerfile is the biggest anti-pattern in the whole docker concept, particularly since the whole thing is utterly unreadable and unmaintainable. It really makes me laugh when anyone connected with docker throws out the word "anti-pattern" like they would know!
The Dockerfile actually prevents the normal debugging and tidying up that would be available if you used a build script, or something actually designed for building stuff, like... a package manager, or make.
For myself, I use the same Dockerfile for all use cases (making it a pattern!); suggesting that I go and change my Dockerfile for every different usage really is the anti-pattern.
And no, "configs support" doesn't cut it at all, imposing structure where it just isn't needed.
The fundamental problem is that if you bind mount to, say, /etc/nginx, it has to be rw to allow scripts to run that adjust the configuration (e.g. envsubst). And this then makes changes to the input configuration (which needs to remain immutable)... You don't get much more anti-pattern than a container writing all over its own configuration, so an option for copying files into the container at re-creation time is the necessary solution.
In other words: a directory that is bind-mounted rw in the container, but ro on the host. Seriously, would it kill you to allow this?
Something like this:

```yaml
# if file then overwrite
# if directory then overwrite/append contents of destination
#   with contents from source to maintain original destination structure
# source:file:permission:owner:group
svc:
  copy:
    - './source/filename:/path/filename:ro:www-data'
    - './source/dir:/path/dir:ro:www-data'

# or

svc:
  copy:
    - source: './source/file'
      destination: '/destination'
      permission: ro
      owner: owner
      group: group
    - source: './source/directory'
      destination: '/destination'
      permission: ro
      owner: owner
      group: group
```
Use case: We have an unorchestrated container solution where we keep our application's docker-compose files, SSL certs, etc. inside a Git repository and pull it onto a VM. Then we spin up the service and want to move e.g. the SSL certs and config files into the container's volume. This is currently not possible without an accompanying Dockerfile with a COPY command. We don't want to mess around with the files inside the cloned git repo; if the application altered the files, we would have to clean up the repo every time.
@MartinMajewski then you can mount the directory with certificates as a volume and point to it in your application config.
Use case (and how-to question at once):
I have the postgres image, with one single environment variable to be set at start: POSTGRES_PASSWORD. I want to set it via a Docker Secret. All I need is to put in my own entrypoint.sh that exports the attached secret into an env var of the running container. I need to add this entrypoint somehow into my container at launch. Without a two-line Dockerfile, I cannot. Copying one single file cannot be done.
PS: postgres is just an example; assume it doesn't support _FILE env vars.
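As a sketch of the workaround this forces today, assuming a local entrypoint.sh that reads /run/secrets/postgres_password and exports it (all file names here are hypothetical):

```yaml
services:
  db:
    image: postgres
    entrypoint: /custom-entrypoint.sh          # wrapper that exports POSTGRES_PASSWORD
    volumes:
      - ./entrypoint.sh:/custom-entrypoint.sh:ro   # the bind mount a one-file copy would replace
    secrets:
      - postgres_password                      # appears at /run/secrets/postgres_password
secrets:
  postgres_password:
    file: ./postgres_password.txt
```

The read-only bind mount here stands in for exactly the one-file copy being asked for.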
Internal tracking issue https://docker.atlassian.net/browse/COMPOSE-89
Use case: Karaf. Using a karaf base image that I do not want to rebuild every time I build my project, I want to be able to deploy my app quickly and rebuild the container for every build. However, I need to copy a features.xml and a jar into the deploy directory when starting up the container.
My solution until now was to use the karaf image as a base image in yet another Dockerfile (relying on overlayfs, which runs out of overlays eventually, forcing a manual deletion of the image) and avast/gradle-docker-compose-plugin. While the init commands can surely be passed as an environment variable, the contents of the features.xml cannot. It must be stored as a file in a specific location in the container. Right now, I can only use a volume bind mount to do this. How do I get stuff into that volume on a remote machine? I need yet more logic in my build script (e.g. org.hidetake.groovy.ssh, which also complicates the build script with secret password/key logic). If a docker-compose cp were available, I could just add the necessary copy command to the docker-compose.yml, and avast/gradle-docker-compose-plugin would handle building the container and copying the files from my build output directly into the container without any extra remote-filesystem access logic.
This Dockerfile is added to the build portion of my docker-compose.yml. If anything, this is the anti-pattern, because it just adds overlays to the upstream docker image with each build (until I am forced to manually delete the image, which makes builds much slower).
```dockerfile
FROM myregistry:443/docker/image/karaf-el7:latest
COPY karafinitcommands /usr/local/karaf/etc/
COPY features.xml \
     *.jar \
     /usr/local/karaf/deploy/
```
I find it frustrating that docker cp works fine for runtime copying, but docker-compose has no equivalent mechanism.
I thought that the idea is to bind mount a local directory to /usr/local/karaf/deploy and drop your files in there. I would not expect to have to rebuild the image or use a Dockerfile to achieve this.
> I thought that the idea is to bind mount a local directory to /usr/local/karaf/deploy and drop your files in there. I would not expect to have to rebuild the image or use a Dockerfile to achieve this.
It is certainly achievable that way. Reread it and notice that this is purely a convenience issue: the container gets rebuilt by the gradle build, so the next logical step is: how do I move the new build files into the "local directory" mounted at /usr/local/karaf/deploy? In my case, a "local directory" is more accurately a "host directory", where the host is a remote host. So I have to add rsync or something else to my build script just to get the files there and make sure old ones are replaced and extra ones are removed. It would be unnecessary if docker-compose cp were available; I could utilize my existing docker-client-to-docker-daemon connection, which I have set up over port forwarding.
Docker volumes can be removed with each build; bind-mount volumes cannot. They will be repopulated only if they are empty (a persistence-protection mechanism). Of course, emptying a bind mount on a remote machine requires certain permissions and access logic that could all be avoided with a docker-compose cp.
Again, a copy into a runtime environment can be achieved with docker cp. That is the frustrating part.
Ah, ok, I'm too used to my own setup. I use http://github.com/keithy/groan, a bash script that self-deploys the bits and pieces to remote servers; then we invoke docker.
Use case: Google Cloud Build and building artifacts.
Artifact needed: web client (auto-generated) React GraphQL bindings. You need the server running to create the files needed for client compilation. The client image has the tools to create the bindings, given a server address. So you start the server image in the background, and now need to run the client container pointing to the server. Now how do you get the generated files out of the container and into the "workspace" host directory? Mounting directories is not allowed, since you're already in a mounted directory in a docker container. Being able to docker-compose cp would alleviate the extra painful step of getting the container id.
Relying on $(docker-compose ps -q SERVICE) to target the right container makes it possible to use the plain docker CLI for such container-centric operations. Introducing a new command would for sure make it simpler for the few use cases that ask for it, but it is not required. To avoid more code duplication between Compose and the docker CLI, I think this issue should be closed.
There is an open issue where the build cache between Compose and plain docker differs, due to the version of the docker daemon Compose is using, meaning that you need to use pure Compose to not break caches in CI environments (https://github.com/docker/compose/issues/883). So until those issues are resolved, mixing plain docker commands with Compose commands breaks caches. The Compose config specifies all kinds of baked-in configuration, alleviating the need to manually specify duplicate configuration with plain docker commands.
> Relying on $(docker-compose ps -q SERVICE) to target the right container makes it possible to use the plain docker CLI for such container-centric operations. Introducing a new command would for sure make it simpler for the few use cases that ask for it, but it is not required. To avoid more code duplication between Compose and the docker CLI, I think this issue should be closed.
This goes much deeper than the "few use cases" mentioned, because those scenarios are fairly common, and the modify, build image, modify again, build image, etc. cycle is a time sink versus being able to handle those things through docker-compose. The argument "you can do it in the docker CLI, so just do it there" pretty much nullifies numerous other things that have been added to docker-compose.
This issue has been open for almost a year, and there are numerous other discussions about it outside of this issue. It most definitely should not be closed unless it's actually resolved.
@dionjwa #883 really needs to be investigated (if still relevant), as docker-compose should be aligned with the docker CLI.
@jadon1979 I'm not trying to block this feature request; I just noticed it was opened more than a year ago, and none of the core maintainers considered it important enough to introduce a new command, nor did a contributor propose a PR for it.
I'm just saying that, given the feedback on this feature request and the lack of development effort to offer a "better way", the proposed workaround of combining docker-compose with the docker CLI, which you can easily alias in your environment to keep it simple to use, is a reasonable workaround.
Now, if someone opens a PR to offer a new `cp` command, I'd be happy to review it.
No one contributed because everyone was told that every use case was an anti-pattern. And every few days we have new use cases posted, none of them anti-patterns.
My use case isn't copying things into a container, it's copying them out of the container after it has run. This can be done from the CLI using a clunky workaround that produces arguably degraded functionality. Full details below.
I'm a DevOps engineer, and I rely heavily on containers as an alternative to the dependency hell of bare-metal build agents. When my CI system tests a repo, it starts by building from a Dockerfile within that same repo and running all the checks (`bundle exec rspec`, `npm test`, etc.) inside the container. If there are build artifacts created, like documentation or test results, I simply copy them out of the container with `docker cp`.
For integration tests, we've started to use docker-compose to provide service dependencies (e.g. a database server) to the container running the tests. Unfortunately, the "docker CLI workaround" is less useful in this case for copying files out.
Consider this config, docker-compose-minimal.yml:

```yaml
version: "3"
services:
  artifact-generator:
    image: busybox
```
I'm going to create the container, run a command in that container, get the container ID, and try to extract the file using `docker cp`:

```
$ # Prepare the images and (stopped) containers. In this case there is only one.
$ docker-compose --file docker-compose-minimal.yml up --no-start
Creating network "docker-compose-cp-test_default" with the default driver
Creating docker-compose-cp-test_artifact-generator_1 ... done
$ # Determine the ID of the container we will want to extract the file from
$ docker-compose --file docker-compose-minimal.yml ps -q artifact-generator
050753da4b0a4007d2bd3514a3b56a08235921880a2274dd6fa0ee1ed315ff88
$ # Generate the artifact in the container
$ docker-compose --file docker-compose-minimal.yml run artifact-generator touch hello.txt
$ # Check that container ID again, just to be sure
$ docker-compose --file docker-compose-minimal.yml ps -q artifact-generator
050753da4b0a4007d2bd3514a3b56a08235921880a2274dd6fa0ee1ed315ff88
$ # OK, that looks like the only answer we're going to get. Can we use that to copy files?
$ docker cp $(docker-compose --file docker-compose-minimal.yml ps -q artifact-generator):hello.txt ./hello-artifact.txt
Error: No such container:path: 050753da4b0a4007d2bd3514a3b56a08235921880a2274dd6fa0ee1ed315ff88:hello.txt
$ # Nope. Let's take a look at why this is
$ docker container ls -a
CONTAINER ID  IMAGE    COMMAND            CREATED             STATUS                         PORTS  NAMES
9e2cb5d38ba0  busybox  "touch hello.txt"  About a minute ago  Exited (0) About a minute ago         docker-compose-cp-test_artifact-generator_run_dd548ee686eb
050753da4b0a  busybox  "sh"               2 minutes ago       Created                               docker-compose-cp-test_artifact-generator_1
```
As you can see, `docker-compose ps` really has no knowledge of the updated container ID. This is unfortunate. It wouldn't be so bad if there were a way for me to know that `run_dd548ee686eb` was somehow related to the `docker-compose run` I executed, but I see no way to achieve that.
There is a clunky workaround for this workaround, which is to add `--name` to the run command:

```
$ docker-compose --file docker-compose-minimal.yml run --name blarg artifact-generator touch hello.txt
$ docker cp blarg:hello.txt ./hello-artifact.txt
$ ls
docker-compose-minimal.yml  hello-artifact.txt
```
Success! ...kinda
The problem here is that if I have multiple builds running in parallel, I need to go to the trouble of making the --name
s globally unique. Otherwise, I'll get noisy collisions in the best case and corrupted results (no error, but wrong file extracted) in the worst case. So this is clunky because I now have to reinvent container ID generation rather than just using the one that Docker already created.
At a bare minimum, I'd like some way to know the ID of the container that results from the docker-compose run command.
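The --name collision problem above can be mitigated by generating a unique name per build. This is a sketch, not an official Compose feature: the suffix scheme is my own invention, and CI systems usually provide a better token (e.g. Jenkins' $BUILD_TAG):

```shell
#!/bin/sh
# Build a per-invocation container name; timestamp plus PID ($$) keeps
# parallel builds on the same host from colliding.
NAME="artifact-generator-$(date +%s)-$$"

docker-compose --file docker-compose-minimal.yml \
  run --name "$NAME" artifact-generator touch hello.txt

# Copy out the artifact, then remove the one-off container.
docker cp "$NAME:hello.txt" "./hello-artifact-$NAME.txt"
docker rm "$NAME"
```

This is exactly the "reinvent container ID generation" chore complained about above, just made explicit.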
@ndeloof
Relying on $(docker-compose ps -q SERVICE) to target the right container make it possible to use plain docker cli for such container-centric operations.
False, see demonstration in previous comment.
We will have new use cases in here for years. Wait, I mean new anti-patterns...
Who can we mention to get the maintainers' attention? This issue is pointless until they start talking to us. The answer might simply be "it cannot be done with the current software architecture", whatever. But leaving an issue like this inert isn't something you'd expect from a solution as popular as Docker...
Our deployment builds the Docker image with bazel, uploads it to our Docker Registry, then uses Terraform docker_container resources with upload stanzas to copy config files to the container. I need to migrate this deployment process to use docker-compose instead of Terraform. I am surprised that docker-compose provides no function for per-container configuration.
This issue has been open for 2 years. Is this why Kubernetes is outpacing Docker in popularity? Kubernetes provides config and secrets functions. Docker Team, please at least add config functionality.
tbf docker-compose isn't exactly comparable to k8s, and not recommended for production use. It's meant for development and quick testing. docker swarm is the thing to compare to k8s and although it is also very simplistic, it does have features like configs and secrets.
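For reference, a sketch of what those swarm-level configs look like in a stack file (the service and file names here are illustrative, and this requires docker stack deploy against a swarm, not docker-compose up):

```yaml
# Compose file format 3.3+ supports top-level `configs` for swarm stacks.
version: "3.3"
services:
  app:
    image: busybox
    configs:
      - source: app_config
        target: /etc/app/config.yml
configs:
  app_config:
    file: ./config.yml
```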
If it's meant just for development, then that's even more reason this feature should exist. The crappy "anti-pattern" rules shouldn't even matter that much (I say crappy because the abundance of normal use cases makes it clear this isn't anything resembling an anti-pattern).
Another "anti-pattern":
I use docker-compose for container orchestration during local development, and k8s for production.
Per Docker's own advice, I've implemented the wait-for-it.sh script in order to manage service startup / shutdown order.
As it stands, unless I want to mount a volume in each service for just this one file, this requires a copy of the script in each service's Dockerfile-containing directory.
Instead, I'd like to maintain a single copy of the wait-for-it script in a top-level directory that docker-compose then copies into each container when running locally. Such concerns are otherwise managed in k8s, and I don't want them polluting my services' Dockerfiles.
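The volume-based alternative mentioned above can be expressed as a single-file bind mount rather than a full volume per service. A sketch, with illustrative paths and service names (and assuming the images have a shell capable of running the script):

```yaml
version: "3"
services:
  api:
    build: ./api
    # Bind-mount one shared, top-level copy of the script instead of
    # baking it into each image; read-only so containers can't alter it.
    volumes:
      - ./scripts/wait-for-it.sh:/usr/local/bin/wait-for-it.sh:ro
    entrypoint: ["wait-for-it.sh", "db:5432", "--", "/start.sh"]
  db:
    image: postgres:12
```

This avoids touching the Dockerfiles, at the cost of a mount that only makes sense in local development.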
As Emerson once wrote: "A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines."
Perhaps it's time to listen to your users...
@Phylodome can't you use container health checks and docker-compose depends_on? That's how I ensure healthy container startup dependencies.
My understanding is that wait-for-it.sh is really a hack, since your services themselves should be resilient to dependencies coming and going. Startup is just an individual case of that.
We're missing the ability to copy a file or directory using docker-compose. I'd find this really useful. Please check the many +1s in the prematurely closed https://github.com/docker/compose/issues/2105