What's the use case? Most of the suggested usages I've seen were antipatterns.
You can see some of the many use cases by clicking the link provided. As you can see, many subscribers consider it a really useful feature rather than an "antipattern".
Oops, now I see "something" happened to issue #2105, as there are no comments at all anymore... Perhaps I provided the wrong link...
So, I find it really useful to copy some configuration/initialization files to the container. For example, some *.sql stuff for DB containers, some HTML/JS/CSS content for Apache/nginx containers, or even a JAR file for a Java container. This will make it available/runnable "globally", not only on the machine where it was composed, as is the case when mounting volume(s). Mainly this will be some combination of host-local and container-contained files. In fact, any container can be considered useless without any configuration or initialization.
This is the correct link: https://github.com/docker/compose/issues/1664
+1
This will make it available/runnable "globally", not only on the machine where it was composed, as is the case when mounting volume(s)
The problem with this is that it is incredibly short-sighted (hence the term "anti-pattern"), as it will force you to repeat the operation every time the containers need to be recreated. Not to mention the fact that it scales very poorly (what if you have 10 containers? 20? 100?)
The actual solution to your issue is to include those necessary files in your build (Dockerfile) and rebuild when an update is needed.
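A minimal sketch of that approach (image name and paths are illustrative):

# Dockerfile: bake the shared content into the image at build time
FROM nginx:1.25
COPY html/ /usr/share/nginx/html/
COPY nginx.conf /etc/nginx/nginx.conf

# docker-compose.yml: build instead of pulling the stock image
services:
  web:
    build: .

Recreating the container then always starts from an image that already contains the files.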
Of course, if the image is built with all "shared" content included in the container, scaling to 10-20-100 containers becomes much easier. All you need is to pull it from the repository and mount (yes, in this case mount) only the node-specific config. Even better, you don't need to run docker-compose on each node. Sure, we can use docker-compose in combination with build: and a Dockerfile, but things become a little more complex, and the YAML configuration in docker-compose is much more "elegant" :o)
I'm running into an issue where copy would come in handy (at least as an override). I mostly develop on a Mac, so I almost never see an issue with commands running as root in the container and exporting to a mounted volume. However, recently using the same workflow on CentOS has caused some major pain, because files owned by the root user are being added to the host via the mounted volume. In these cases I would like to just be able to copy the host files to the container instead of mounting them.
The related issue: #1532
I think in my case I can get away with using COPY in the Dockerfile and having multiple docker-compose files, one of which uses a volume mount.
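For reference, a sketch of that layout (service name and paths are illustrative):

# docker-compose.yml - the default: files are baked in via the Dockerfile's COPY
services:
  app:
    build: .

# docker-compose.dev.yml - the override: bind-mount the host files instead
services:
  app:
    volumes:
      - ./config:/app/config

# pick the variant at run time:
#   docker-compose -f docker-compose.yml -f docker-compose.dev.yml up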
Use case: I want to use a directory from a read-only filesystem inside the container. The application creates new files in that directory, but because the filesystem is read-only this causes errors.
I can't use a rw volume, because the filesystem is read-only. I can't use a ro volume, because the effect will be the same.
It would be awesome to allow writes that persist only while the container runs. I can make a wrapper image (https://stackoverflow.com/questions/36362233/can-a-dockerfile-extend-another-one) to only COPY the files, but doing this in compose, similar to volumes, would be better.
Use case: starting multiple Docker containers simultaneously from .gitlab-ci.yml which need to write into the git repository's directory.
If the process inside a container fails, or if the CI job is cancelled before the container has cleaned up after itself, the remaining files can't be deleted by gitlab-runner due to lack of permissions. Now I could copy the files within the container out of the volume into another directory, but that would be an antipattern, wouldn't it?
Is this different from volumes: - ./folder_on_host/:/folder_in_container/ ?
I am able to copy files from host to container (equivalent of COPY) this way in my compose file
@harpratap you are right, but the drawback is that /folder_in_container must not exist, or must be empty, or else it will be overwritten. If you have a bash script as your entrypoint, you can circumvent this by creating the volume at /some_empty_location and symlinking your files into the originally intended directory.
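A minimal sketch of that entrypoint trick (all paths are illustrative):

#!/bin/sh
# entrypoint.sh: the volume is mounted at the empty path /staged,
# then its files are symlinked into the directory the app actually reads
set -e
mkdir -p /etc/app/conf.d
for f in /staged/*; do
  ln -sf "$f" "/etc/app/conf.d/$(basename "$f")"
done
exec "$@"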
+1 for having COPY functionality. Our use case is rapidly standing up local development environments and copying in configs for the dev settings.
+1 for COPY. This would really be a helpful feature.
Use case: in swarm mode, I have a service using the mysql image. I need to copy my initialization scripts into /docker-entrypoint-initdb.d/ so that MySQL can execute them.
Though it is possible to create an image on top of mysql, copy the files, and use that, or to connect to the mysql task in the swarm and run the scripts manually, it's kind of unnecessary in my opinion.
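For anyone who needs it in the meantime, that image-on-top workaround is a two-line Dockerfile (tag and path are illustrative):

FROM mysql:8.0
# the image's entrypoint runs anything in this directory on first start
COPY initdb/ /docker-entrypoint-initdb.d/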
+1 for COPY/ADD.
Use case: Fluentd requires its configuration files to be moved into the container at run time. These config files are created at run time by our Jenkins engine, and without a COPY/ADD in docker-compose it simply fails.
+1 for COPY
Suppose one has a config file shared across a number of Docker machines, with their Dockerfiles in respective subdirectories under the docker-compose directory. How do you copy that shared config into each image? I can't symbolically link to ../ from the Dockerfile context without getting COPY failed: Forbidden path outside the build context.
In this instance when running docker-compose build, I'd like to copy the config files from the docker-compose context prior to running the docker build steps.
I'm happy if someone can suggest a clean workaround of course.
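One possible workaround (a sketch; paths are illustrative) is to widen the build context to the docker-compose directory so the shared config sits inside it, and point dockerfile at the subdirectory:

services:
  service_a:
    build:
      context: .
      dockerfile: service_a/Dockerfile

# service_a/Dockerfile can then reference the shared file:
#   COPY shared/config.yml /etc/app/config.yml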
This would be a nice feature to have!!
Please don't comment with just +1 - it's a waste of everyone's time. If you have additional information to provide, please do so; otherwise, just add a thumbs up to the original issue.
What is the use of dogmatically insisting it is an antipattern, just because in some cases it could eventually cause problems? This definitely has a good use, as you could add one line to an existing file instead of having to create an extra folder and file, then move the file to be added there. This pointless, bureaucratic creation of tiny files is the real antipattern, preventing users from creating simple and easy-to-maintain docker-compose files.
If users want to do harmful things with Docker, they will find a way no matter what you do. Refusing to add legitimate features just because someone may misuse them one day is foolish.
I think what you are doing is actually the right way to go about it, in this instance.
The issue that was raised here was more like: suppose the mongo.conf file is shared between three Docker images which are orchestrated by one docker-compose file. How do you ensure that it is the same in each docker build subdirectory?
If you use symbolic links, for instance, docker complains that the file is external to the build environment; i.e. the docker build lacks a sense of reproducibility, as modifications outside that directory could alter the build.
So the only way to orchestrate this is with a file copy, which one currently needs to do with a Makefile or shell script prior to running docker-compose. So it seemed worth discussing whether this is a feature docker-compose could provide, as surely it's a common use case.
The issue you are raising seems to be more about runtime (launch-time) injection of a local file modification.
I think you're actually fine in what you're doing; what you've said above is just how it's done. A Docker image can always be constructed to accept environment variables to answer questions such as where the config directory is, and that config directory can be "injected" using a volume at runtime - but that is up to the design of the Docker image, leveraging environment variables and volume mappings (which are the features Docker supports for runtime config modification), as sketched below.
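A sketch of that pattern in compose terms (the image and variable names are hypothetical):

services:
  app:
    image: example/app        # hypothetical image that reads CONFIG_DIR
    environment:
      CONFIG_DIR: /config
    volumes:
      - ./config:/config:ro   # inject the config directory at runtime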
I hope I haven't misinterpreted your comment, and that my reply is helpful.
@jpz - I somehow deleted my original comment - yikes - sorry! Thank you - yes, that's helpful.
My original comment was along the lines of:
My use case is that I want to declare a service using mongo without having to create my own custom image just to copy over a configuration file like /etc/mongod.conf.
UPDATE: I used volumes. A year or two ago I thought I had tried this with a bad experience... but it seems fine.
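For reference, the volume-based version looks roughly like this (a sketch; the official mongo image passes leading-dash commands through to mongod):

services:
  mongo:
    image: mongo
    command: --config /etc/mongod.conf
    volumes:
      - ./mongod.conf:/etc/mongod.conf:ro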
+1 for COPY
I created a quick gist for this. It assumes the docker-compose service is named phpfpm, however you can change this to whatever you wish. Feel free to modify.
https://gist.github.com/markoshust/15efb29aa5eebf8adae402af18b2e674
Hello, I would like to know what the progress is on this issue. I'm currently using Windows 10 Home with Docker Toolbox. I mostly get errors when I try to bind-mount a file as a volume into a container. It would be nice to have COPY capabilities in docker-compose.
COPY/ADD would definitely be a welcome feature.
A use case: running a Graylog instance in Docker for dev purposes. In order to launch an input automatically, a JSON spec has to be put in /usr/share/graylog/data/contentpacks. With the COPY/ADD feature, it would be as easy as a single line in the YML.
To get it working now (as of Oct 16, 2018), you need to mount a volume to that point AND copy the original content of that folder to the persistent volume. Which is quite inconvenient.
I would benefit from this. I have a set of tools that import a database seed into a container, and then I run the devtools database importer based on that file. I don't want to have to do:
docker cp "${seed_file}" $(docker-compose ps -q devtools):/tmp/seed_file
to be able to import my seed. And no, I will not build my dev images with a fixed schema; that goes against web development patterns at the very least. Containers should be for app portability, not data.
It would make way more sense to do:
docker-compose cp "${seed_file}" devtools:/tmp/seed_file
All in all, it is just a shorthand that basically does the same thing, but it looks better to leverage docker-compose everywhere than to mix stuff...
1) this seems to be a duplicate of #3593
2) I agree with @shin- that the elaborated use cases follow an anti-pattern
3) but wrapping up Docker's cp command makes sense, imo
@funkyfuture If you think that these use-cases follow an antipattern, then please suggest a solution that does not.
What about a k8s-like "data" section? For example:
services:
  service1:
    image: image.name
    data:
      filename1.ini: |
        [foo]
        var1=val1
        [bar]
        var2=val2
      filename2.yml: |
        foo:
          bar: val1
or perhaps the same but for the volumes: section:
volumes:
  service_config:
    data:
      filename1.ini: |
        [foo]
        var1=val1
        [bar]
        var2=val2

services:
  service1:
    image: image.name
    volumes:
      - service_config:/service/config
@shin-
The problem with this is that it is incredibly short-sighted (hence the term "anti-pattern"), as it will force you to repeat the operation every time the containers need to be recreated. Not to mention the fact that it scales very poorly (what if you have 10 containers? 20? 100?)
The actual problem here is that some people are too quick to diss requested features because they conflict with their limited vision of actual use-case scenarios.
Here I am, looking for a way to copy my configuration file into a container which I just got from Docker Hub. I don't have access to the original Dockerfile, and it would be a great convenience to have this feature (instead of trying to build another layer on top, which would work, but is inconvenient - I don't want to rebuild when I change something).
Use case:
I run a database in an integration test environment and want the data to be reset on each iteration, when the containers are started. Embedding the data in a custom image would work, but mounting a volume is cumbersome, because the data on the host must be reset.
We maintain the data independently, and it would be most convenient to just use the standard database image, copying data to it before it starts running. Currently this does not seem to be possible with docker-compose.
I have a use case in mind. I want to base my image on an off-the-shelf image, such as a generic Apache server, and copy my HTML in during image creation. That way I can update my base image whenever I want, and the copy directive will ensure my content is included in the new image.
BTW, I currently use Dockerfiles and a build directive in my docker-compose.yaml to do this, as sketched below. It would be nice if I didn't need the Dockerfiles.
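A sketch of that setup (paths are illustrative):

# docker-compose.yaml
services:
  web:
    build: ./web

# web/Dockerfile
#   FROM httpd:2.4
#   COPY html/ /usr/local/apache2/htdocs/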
@tvedtorama -
Use case:
I run a database in an integration test environment and want the data to be reset on each iteration, when the containers are started. Embedding the data in a custom image would work, but mounting a volume is cumbersome, because the data on the host must be reset.
We maintain the data independently, and it would be most convenient to just use the standard database image, copying data to it before it starts running. Currently this does not seem to be possible with docker-compose.
This issue discusses the desire to copy files at image build time, not at runtime. I would suggest raising a separate ticket to discuss the merits of that; it may confuse this discussion to digress into runtime file injection (which is what I interpret you to be talking about).
@c0ze -
What about a k8s-like "data" section? For example:
...
I'm not fully up to speed with what that config does, but yes, that looks like it would be a solution. Fundamentally, when you have secrets (e.g. the login username/password/port for the database), how do I inject them into my Docker images - clients and servers - without writing a load of code?
Something like the kubernetes data section could work, as it would be a single source of truth. Otherwise one may find the same secrets maintained multiple times across multiple Docker images.
There's also prior art there, which helps to move the conversation along to whether this is actually a good idea worth adopting or not.
For me, this all started with wanting to share an invariant config file across containers, and realising there was no way to do it without scripting externally to docker-compose, writing the config from a single source of truth into each of the Docker folders beneath the docker-compose folder. Of course I get the immutability argument for Docker (i.e. the Dockerfile directory fully and completely describes how to build the image), so asking for automation to copy things into that directory looks like it slightly flies in the face of those principles.
I guess the discussion is: how intrusive is docker-compose allowed to be? Is this a common enough use case to justify such automation? If not, then we appear to burden the environment-variable-passing mechanisms with the responsibility for injecting secrets from a single source of truth outside, late (i.e. at runtime). I hope my points are coherent enough here.
This is not of great import to me, but I think the use case is worth discussing.
It would be extremely useful to me. At work, the antivirus software blocks Windows 10 from sharing volumes with containers. It is a huge org, and getting them to change a policy set on another continent is a non-starter.
Hello, my use case: I'm using an open-source Prometheus docker-compose setup (the repo is maintained by other people). It has configs that are mounted into containers. ISSUE: I can't do docker-compose up on a remote machine (like an AWS docker-machine, or inside a CI/CD runner) because it can't mount the configs properly. In this case I'd like to copy/embed them. For RW data there are volumes; for RO - ?
Having RO volumes with the possibility to set initial data would be the other option.
Current solution: connect to the Docker host via ssh, clone/update the repo, and run docker-compose up. This works for the manual case, but it's a pain for automation :(
+1
Use case: I have a development Docker machine that runs a database, and whenever I set it up I need a recent dump of the database to be installed. Effectively that means:
Now the big problem is that step 2 will always be different for each developer, because there are many different dump versions of that database. So the easiest would be if each developer had their own compose file with their specific dump location/version, and then have Docker assemble the image with that specific file location while composing; that could then also be changed on the fly when a different version is required.
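One way to approximate this today (a sketch, assuming a build-time ARG; names are illustrative):

# each developer's compose file sets their own dump
services:
  db:
    build:
      context: .
      args:
        DUMP_FILE: dumps/2023-06-01.sql

# Dockerfile
#   FROM postgres:15
#   ARG DUMP_FILE
#   COPY ${DUMP_FILE} /docker-entrypoint-initdb.d/seed.sql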
My use case is simple. I don't want volumes nor do I want to roll my own image. I just want to put a simple defensive copy of a config file in a container after it's created and before it's started.
Is this still an issue? I have a Django application with a very long settings file. For me it would be just way easier to create a Docker image and copy a single configuration file to each container. Passing all the settings as ENV is, for me, the antipattern: it takes a lot of code, is difficult to maintain, and could be solved with a single copy command.
I opened #6643 and would love feedback on how it would be considered an anti-pattern. Especially, in an environment where numerous configuration files could have a need to be added/modified on-the-fly.
@shin-
The problem with this is that it is incredibly short-sighted (hence the term "anti-pattern"), as it will force you to repeat the operation every time the containers need to be recreated. Not to mention the fact that it scales very poorly (what if you have 10 containers? 20? 100?)
How does docker-compose exec work with multiple containers?
--index=index    index of the container if there are multiple instances of a service [default: 1]
Shouldn't we try to get the same behavior with cp?
IMHO exec is just as ephemeral as cp would be. But I consider both of them "development" commands anyway; development environments must be ephemeral, shouldn't they?
I hadn't seen the comment saying that a lot of devs here are short-sighted for trying to fix this too quickly by requesting this feature. I think this is a little harsh and condescending. If there is one thing I've learned from my years of development, it is the following:
It's not what your software does, it's what the user does with it that counts
Obviously, I understand that you have a role in preventing things from going crazy, but it's not because someone uses a tool incorrectly according to your vision that everyone will start to do it that way and all hell will break loose.
All of the special cases I've seen here are very appropriate most of the time. And most of these special cases shouldn't and wouldn't happen on a production system; they are, like the case I explained a while ago, about customizing a development environment and running special files in a container that cannot use a volume mapping. Most examples say clearly that they don't want to bake in schemas, data, or config files, and cannot use volume mapping, so I don't see why this is so much of an inconvenience as to warrant the term "short-sighted".
I think you should carefully weigh your words when saying things like that...
Let's bring it back. Honest technical question here: with docker stack we have a "configs" option. That's a native Docker feature, but it's for services, not containers. What's the viability of getting something like that working at the container level rather than the service level? How does docker stack implement config provisioning? Can that implementation be replicated for docker-compose specifically?
At least half the use cases mentioned here are about configs, so many people would be satisfied if just that itch were scratched.
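For reference, the stack-level feature looks like this, though it is only honored by docker stack deploy in swarm mode, not by plain docker-compose up (names are illustrative):

version: "3.3"
services:
  app:
    image: nginx
    configs:
      - source: app_conf
        target: /etc/nginx/conf.d/default.conf
configs:
  app_conf:
    file: ./default.conf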
Another simple use case is things like Google's domain validation. If you use the wordpress image, you can't add a file that Google will check for; you need to make a whole new image to do it.
Also, these comments saying things are "anti-pattern" barely make sense; it reeks of elitism.
EDIT: yikes, read more; thank god he isn't the maintainer anymore
So you're telling me that if I want to copy a tiny config file into a prebuilt image (say, nginx or mariadb), I now need to manage my own image build setup and duplicate the disk space used (original image and configured image)?
This ought to be a feature.
duplicate the disk space used
you're not duplicating it when you're using Docker - the configured image shares the original image's layers, so only the added layer takes extra space.
I like how you nitpick the one thing out of what he said that matters least. This should be a feature. This issue will just grow and grow as Docker grows, with people arriving here because it's a common use case; they will just expect it to exist out of common sense, something the maintainers here, ex and current, seem to lack.
I like how you nitpick the one thing out of what he said that matters least.
an invalid argument should be noted as such.
I think the thing here is that the "anti-pattern" argument can be valid given a certain business strategy (see @washtubs' point). We may not agree with this strategy, but that doesn't justify personal attacks. In the end, it's @shin-'s past efforts with docker-py that would allow you to implement an alternative to docker-compose.
We are missing the ability to copy a file or directory using docker-compose. I find this really useful. Please check the many +1s in the prematurely closed https://github.com/docker/compose/issues/2105