docker / compose

Define and run multi-container applications with Docker
https://docs.docker.com/compose/
Apache License 2.0
33.97k stars 5.22k forks

docker-compose copy file or directory to container #5523

Closed: ghost closed this issue 3 years ago

ghost commented 6 years ago

We're missing the ability to copy a file or directory into a container using docker-compose. I would find this really useful. Please check the many +1s in the prematurely closed https://github.com/docker/compose/issues/2105

jihu commented 4 years ago

@ianfixes Is "your services" meant to refer to the docker-compose services themselves, or "our" services, as in the services written by those of us who use docker-compose? I don't know if you are writing in the role of a "user" or of a docker-compose developer.

ianfixes commented 4 years ago

Is "your services" meant to refer to the docker-compose services themselves, or "our" services, as in the services written by those of us who use docker-compose?

The services you build as a developer should be resilient, according to these docs: https://docs.docker.com/compose/startup-order/

The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.

To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.

The best solution is to perform this check in your application code, both at startup and whenever a connection is lost for any reason. However, if you don’t need this level of resilience, you can work around the problem with a wrapper script:

And it goes on to mention various wait-for scripts.
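Such a wrapper can be as small as the sketch below (the host, port, and command are placeholders, not anything prescribed by the docs):

#!/bin/sh
# wait-for-db.sh: block until the database accepts TCP connections,
# then exec the real command
until nc -z db 5432; do
  echo "waiting for db..."
  sleep 1
done
exec "$@"

The service's command then becomes ./wait-for-db.sh followed by the real command.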

Phylodome commented 4 years ago

I could do a number of things. But because this is just for local development, and because I have other strategies for handling production service checks in k8s, I would prefer the simplest and least obtrusive local implementation, not generic advice from people who don't know the details of why I'd like to do this (e.g., issues with volume-mounting in order to do UI dev via Webpack's dev server).

In any case, it's just another in the long list of use cases for this would-be-feature that should be left to the user's discretion.

ianfixes commented 4 years ago

I'm hearing anger directed toward me, and I understand why it would be frustrating to hear unsolicited "advice" for your approach. But I'm not even sure how to apologize; I quoted the text from the URL that you yourself referred to as "Docker's own advice", which says explicitly that the wait-for script is a way to "work around the problem". For what it's worth, I'm sorry anyway.

Phylodome commented 4 years ago

You're not hearing anger. You're hearing the exasperated tone of someone who, upon googling for what should be a fairly obvious feature, stumbled upon a hundred-comment thread in which a set of maintainers continuously patronized and rejected the community's pleas for an entirely valid feature.

I didn't share my experience here b/c I wanted an apology. I shared it simply to add to the long list of evidence that Docker users would like additional flexibility when using compose.

Of course, like any tool, that flexibility comes with the potential for abuse. But the same potential, if not a worse one, exists when your users must find workarounds for specific use cases that could be solved far more simply by just adding this feature.

Phylodome commented 4 years ago

Additionally, apologetically gaslighting your users is a bad look.

ianfixes commented 4 years ago

I am neither a maintainer of nor a contributor to this project, and I apologize for any confusion there. It sounds like what little assistance I thought I could offer was unwanted and unhelpful, and I'm sorry for wasting your time.

naeem-gitonga commented 4 years ago

I want this feature for a Go container that is part of my distributed application. Since the .env file needs to be included in the root of the Go application, I'd need to create a separate .env for it... whereas if I had this feature, I could keep my top-level .env file and copy it into the Go container when I build. It would mean less stuff to keep track of...

My workaround could be to create this file via the Go container's Dockerfile, or just to make an .env file for that container. But still, any time I add a new env var, I'll need to update it in possibly two places. Good use case here. Or I could just use a shell script to cp the file for me...
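That script could be as small as this sketch (the go-service directory name is hypothetical):

# sync the top-level .env into the Go service's build context, then build
cp .env go-service/.env
docker-compose build go-service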

Myhael76 commented 4 years ago

+1 for COPY feature

We already achieve this in Kubernetes with sidecars, and there are MANY use cases. This is NOT an anti-pattern; its absence is just one of the things holding docker-compose back.

tvld commented 4 years ago

Maybe I am missing something, but right now, while we build our app for 5 minutes, the build folder is "in flux" the whole time and the app will not start because of the inconsistency. I would prefer to copy the build folder into the container, so that when it is time to start the container it takes over from the internal one. That way the app is only offline for a second or so when we stop/start the container.

Pithikos commented 4 years ago

How is this an anti-pattern when docker already supports it? It would make sense for docker-compose to follow docker's usability as closely as possible; not doing so is in itself an anti-pattern.

The problem with this is that it is incredibly short-sighted (hence the term "anti-pattern"), as it will force you to repeat the operation every time the containers are recreated. Not to mention the fact that it scales very poorly (what if you have 10 containers? 20? 100?)

I think that is up to the developer. Simply copying a single local configuration file has insignificant overhead. Don't blame the knife.


P.S. My use case: I want to add a config to an Nginx container in a project without Dockerfiles.

robclancy commented 4 years ago

Who even knows anymore.
I needed to set up a new project and looked for new tools; Lando is so much better than this it's crazy. Wish I had used it sooner. It's faster, easier to understand, has better out-of-the-box support, and doesn't have condescending (ex-)maintainers/contributors.

washtubs commented 4 years ago

@chris-crone regarding your comment...

For configuration files or bootstrapping data, there is Docker Configs. These work similarly to secrets but can be mounted anywhere. They are supported by Swarm and Kubernetes, but not by docker-compose. I believe that we should add support for these, and it would help with some of the use cases listed in this issue.

Is docker-compose interested in implementing config support for parity with swarm configs?

If there is a ticket for this (or if I need to make one that's fine too), I would like to subscribe to that and unsubscribe from this trash fire. Personally I would close this and link to that, but that's just me.

frozenjim commented 4 years ago

@harpratap you are right, but the drawback is that /folder_in_container must not exist, or must be empty, or else it will be overwritten. If you have a bash script as your entrypoint, you could circumvent this by symlinking your files into the originally intended directory after you create a volume at /some_empty_location.
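A sketch of that entrypoint (every path here is hypothetical):

#!/bin/sh
# entrypoint.sh: the volume is mounted at an empty location (/seed);
# link its contents into the directory the app originally expected
mkdir -p /app/config
for f in /seed/*; do
  ln -sf "$f" "/app/config/$(basename "$f")"
done
exec "$@"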

+1 for having a COPY functionality. Our use case is for rapidly standing up local development environments and copying in configs for the dev settings.

Exactly. We don't all scale the same way. My company uses SALT to build the required .conf files for a variety of apps: one build with the basics, then docker-compose to create the individual instances based on their unique parts (MAC address, IP, port, licenses, modules, etc.). It COULD be done from the command line, but it's much easier and less error-prone from docker-compose.

shaunryan commented 4 years ago

I have a use case. We have a test build that requires SSL to be set up. The certs are generated by a service in the docker-compose file... I then need to add those certs to the client containers... if I mount, I lose the existing certs, and I can't put them in the docker build because they don't exist yet.

Consequently I have to run docker-compose twice: once to fire up the services that create the certs, and again to build the services and run the tests. Messy.

grvm commented 4 years ago

I've seen a lot of issues here where users have suggested many use cases for a feature, but they're shot down because a maintainer thinks it's an anti-pattern, or that people would not use it, or some other story.

While it might seem like an anti-pattern to one person, I'm sure the 1000 people requesting the feature, who think otherwise, need to be heard as well. If some help is needed developing the feature, I think many people can lend a hand.

My use case: in addition to the configs, I have some libraries (RPMs) that I need installed in 5 of my Rails application containers (Debian). They use different Ruby/Rails versions, so they can't share the same base image; I should be able to store the files in a single location and copy them into each container when building, because I don't want to download 1.5GB of data on every build.

chris-crone commented 4 years ago

@gauravmanchanda

My use case: in addition to the configs, I have some libraries (RPMs) that I need installed in 5 of my Rails application containers (Debian). They use different Ruby/Rails versions, so they can't share the same base image; I should be able to store the files in a single location and copy them into each container when building, because I don't want to download 1.5GB of data on every build.

Have you looked at multistage builds for this? I think it would be a more robust solution.

Multistage builds allow you to use the same Dockerfile for multiple images, which lets you factor out common parts and include only the bits that each image needs.

A good example is the one we use to build Docker Compose: it builds on either Debian or Alpine but lets us share the common code.
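As an illustration only (the stage names and paths below are made up, not the actual Compose Dockerfile), the pattern looks like:

FROM debian:bullseye AS common
COPY libs/ /opt/libs/            # shared files live in one place

FROM common AS app1
COPY app1/ /usr/src/app          # only app1's bits land in this image

FROM common AS app2
COPY app2/ /usr/src/app          # likewise for app2

Each image is then built with docker build --target app1 -t app1 ., or via a target: key under build: in the compose file.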

megaeater commented 4 years ago

In our setup, we ramp up about a dozen simulators with docker-compose. The simulators are otherwise the same, but one init file is different for each target, and this file is consumed on startup (it gets deleted when the server is up and running). Are you really suggesting that we should create about a dozen almost identical images just because one file differs? That does not make sense IMO.

With docker, the --copy-service flag can be used to achieve this. Are there any alternatives we can use with docker-compose?

chris-crone commented 4 years ago

Hi @megaeater,

we ramp up about a dozen simulators with docker-compose. The simulators are otherwise the same, but one init file is different for each target, and this file is consumed on startup (it gets deleted when the server is up and running).

This is an interesting use case; some questions: are these simulators (or parts of them) ever run in production (i.e., not on a developer's machine or in CI)? If the code (or a similar system) is open, could you please link me to it so that I can take a look?

It would also be interesting to know why you would want a copy instead of bind mounting or a volume for these files?

Are you really suggesting that we should create about a dozen almost identical images just because one file differs? That does not make sense IMO.

Images are layer based for exactly this reason: all the images will reference the same layers except for the layer that includes the different files.

The issue with things like a copy on container create is that it makes taking the same code and running it in production difficult (i.e., requiring a major logic rewrite), because the pattern will be fragile or impossible in orchestrated environments.

This is not to say that we should never implement something like this in Compose. Rather, when a change means that users will not be able to reuse something that works locally in production, we like to pause and see if there is a more robust way of achieving the same goal.

megaeater commented 4 years ago

Thank you for the comment @chris-crone

We are not running docker in production; it is just for development purposes. The problem with using a volume (if I understand it correctly) is that the simulator (3rd party) has a startup script which deletes the file on startup. Script execution stops if the deletion fails, so we would need to mount it as rw. And if the file deletion is allowed, we would need a mechanism to create a temporary directory for supplying these files so that the originals would not get deleted. So we would need some kind of extraneous scripts to ramp up the composition on top of docker-compose.

grvm commented 4 years ago

@chris-crone Thank you for the links. I'll take a look and see if it works for us 👍

grvm commented 4 years ago

Hey @chris-crone, I did try using multistage builds, and it did help us keep the libraries/config in one location and copy them around, but now there are issues with .dockerignore being ignored, no matter where I place it.

It works when I'm just using Docker with the new DOCKER_BUILDKIT option, but not when using docker-compose; I tried COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose build, but it still didn't work. Any ideas?

I was wondering whether there was an option to specify where compose should look for the .dockerignore when I stumbled upon https://github.com/docker/compose/issues/6022, which, again, was closed because one contributor thinks it is not useful.

This is pretty frustrating, if I'm being honest!!

TrentonAdams commented 4 years ago

This is critical on macOS, because getting your development cycle as close to production as possible is of paramount importance for proper Continuous Delivery practices. E.g., build the container, but then bind mount the new version of the software you're currently working on into the container, to save on build-cycle times. Unfortunately, bind mounts are extremely costly, being 3 to 5 times slower.

As an example, the startup time of Tomcat is about 3s for my app in a container. Add a bind mount of ~/.bash_history and it's 4s. Add a bind mount of my app and it's usually about 18-20s. On Linux, bind-mount performance is like that of a local file system, but not on macOS. Scale that to 100 times per day and it's significant.

That's not including the slowness that continues when accessing the app for the first time, until the code files are cached. For me, that means 3 minutes, including lag over the internet connecting to the monolithic Oracle DB, just to change a small phrase and see if it still looks alright. Damn covid-19, lol.

Ideally, I'd like to be able to just run docker-compose again to "update" my app in the running container and ask Tomcat to reload. I could use the Tomcat manager to upload the change, but we also have a back-end app that doesn't run in a managed container of any kind, so we'd need a different solution for that.

It'd be nice if docker-compose were geared towards development too, not just production deploys.

ianfixes commented 4 years ago

This use case is relevant to the discussion: https://github.com/docker/compose/issues/3593#issuecomment-637634435

Marandil commented 4 years ago

@chris-crone

@gauravmanchanda

My use case: in addition to the configs, I have some libraries (RPMs) that I need installed in 5 of my Rails application containers (Debian). They use different Ruby/Rails versions, so they can't share the same base image; I should be able to store the files in a single location and copy them into each container when building, because I don't want to download 1.5GB of data on every build.

Have you looked at multistage builds for this? I think it would be a more robust solution.

Multistage builds allow you to use the same Dockerfile for multiple images, which lets you factor out common parts and include only the bits that each image needs.

A good example is the one we use to build Docker Compose: it builds on either Debian or Alpine but lets us share the common code.

Multistage builds are cool, but they suffer from their own issues; for one, you have to run all stages within the same context, which is not always possible. Also, as far as I know, you cannot easily use COPY --from with images defined in another Dockerfile and built with docker-compose build (I assume you could do so by building and tagging them manually).

COPY in itself is very limited in that you can only import from your build context. docker cp can copy from anywhere to anywhere, except that it cannot copy between an image and a container (sort of like COPY --from).

My own use case is a bit different (apart from copying read-only config files; local volume mounts are not the best idea when you deploy to another machine), and I would say that what I'm doing right now is an anti-pattern. I have potentially several different images that on build generate compiled and minified JS + HTML + asset bundles (think small Angular apps), and a single nginx server that serves all of them (n.b. built from a custom image because of plugins).

Currently, what I have to do is copy the "deploy" packages from the "build" images on startup. Ideally, this should be done either on container create or on build, but the latter would require creating yet another image on top of the "modded nginx".

Imagine the following project layout (subprojects may live in separate repositories and not know about each other):

app1/
  src/
    ...
  Dockerfile
app2/
  src/
    ...
  Dockerfile
app3/
  src/
    ...
  Dockerfile
nginx/
  ...
  Dockerfile
docker-compose.yml

Each of the files app{1,2,3}/Dockerfile contains a build target/stage that builds the app into /usr/src/app/dist. nginx/Dockerfile has only one stage and builds an image similar to nginx/nginx, but with all required plugins (no configs).

docker-compose.yml:

version: '3.8'
services:
  app1-build:
    build:
      context: app1/
      target: build
    image: app1-build
    entrypoint: ["/bin/sh", "-c"]
    command:
      - |
        rm -vfr /dist-volume/app1 \
        && cp -vr /usr/src/app/dist /dist-volume/app1 \
        && echo "Publishing successful"
    volumes:
      - 'dist:/dist-volume'

  app2-build:
    build:
      context: app2/
      target: build
    image: app2-build
    entrypoint: ["/bin/sh", "-c"]
    command:
      - |
        rm -vfr /dist-volume/app2 \
        && cp -vr /usr/src/app/dist /dist-volume/app2 \
        && echo "Publishing successful"
    volumes:
      - 'dist:/dist-volume'

  #... same thing for app3-build

  nginx:
    build:
      context: nginx/
    image: my-nginx
    volumes:
      - 'dist:/var/www'
      - # ... (config files etc)

volumes:
  dist:

Now, this is obviously non-ideal: each app-building image is unnecessarily run (it finishes quickly), and the deployed bundles reside on a shared volume (I'm assuming this has a negative performance impact, but I couldn't verify it yet). If a copy or copy_from were a docker-compose option, the same could be written as:

version: '3.8'
services:
  # assuming the images have default entrypoint and cmd combination that immediately returns with success.
  app1-build:
    build:
      context: app1/
      target: build
    image: app1-build

  #... same thing for app2-build app3-build

  nginx:
    build:
      context: nginx/
    image: my-nginx
    copy:
      - from: app1-build  # as image or service; both have their pros and cons, service would mean the image associated with this service
        source: /usr/src/app/dist
        destination: /var/www/app1
      - from: app2-build
        source: /usr/src/app/dist
        destination: /var/www/app2
      - from: app3-build
        source: /usr/src/app/dist
        destination: /var/www/app3
    volumes:
      - # ... (config files etc)

itscaro commented 4 years ago

My use case is not in the build step or the startup step. I'm fetching files generated inside a container (or all containers of a service), and these containers run on a remote Docker Engine. So far I find myself doing something like docker-compose ps -qa <service> | xargs -i docker cp {}:<there> <here>. I just wish I could stick to docker-compose alone in my script.

TrentonAdams commented 4 years ago

@chris-crone

It would also be interesting to know why you would want a copy instead of bind mounting or a volume for these files?

Do you enjoy self-flagellation? If so, I recommend running an application using a bind mount on macOS. 🤣 See my previous post for the details.

TrentonAdams commented 4 years ago

This is not to say that we should never implement something like this in Compose. Rather, when a change means that users will not be able to reuse something that works locally in production, we like to pause and see if there is a more robust way of achieving the same goal.

@chris-crone I think this is a great sentiment, because all too often people get into implementing anti-patterns for docker, such as not managing configuration and data in an ephemeral way.

I wonder if we could somehow get Docker and Apple to work together on fixing the performance problems with bind mounts. For me at least, I'd then have no more need for a docker-compose cp option, because I'd be using bind mounts for development. Right now, though, it's just waaaay too painful to use bind mounts. I may switch to a virtual machine with Linux, cause my Mac just bytes.

chris-crone commented 4 years ago

@megaeater

We are not running docker in production; it is just for development purposes. The problem with using a volume (if I understand it correctly) is that the simulator (3rd party) has a startup script which deletes the file on startup. Script execution stops if the deletion fails, so we would need to mount it as rw. And if the file deletion is allowed, we would need a mechanism to create a temporary directory for supplying these files so that the originals would not get deleted. So we would need some kind of extraneous scripts to ramp up the composition on top of docker-compose.

Hmm... if you can engage with the simulator vendor, I think that is the best way of fixing this. Failing that, you could work around it with an entrypoint script for the simulator that moves the files as required; granted, this would be messy.
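A sketch of such an entrypoint (the mount point and file names here are hypothetical):

#!/bin/sh
# copy the read-only seed file (bind mounted at /seed) to where the
# simulator expects it, so the simulator is free to delete its copy
cp /seed/init.conf /simulator/init.conf
exec /simulator/start.sh "$@"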

@gauravmanchanda

it did help us keep the libraries/config in one location and copy them around, but now there are issues with .dockerignore being ignored, no matter where I place it. It works when I'm just using Docker with the new DOCKER_BUILDKIT option, but not when using docker-compose; I tried COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose build, but it still didn't work. Any ideas?

Glad multistage builds helped! What versions of Docker and docker-compose are you using? I would try with the latest and see if the issue is still there. It should respect the .dockerignore file.

@Marandil, it sounds like docker build isn't handling your project structure (i.e., directory structure), which is the issue. You might be able to use something like docker buildx bake (https://github.com/docker/buildx) to solve this use case. Note that buildx is still being worked on, so it isn't super stable yet, but it aims to solve some of what you're hitting.

@itscaro, thanks for your input! What we do internally to generate things in containers is use docker build to output the result from a FROM scratch image. This only works in cases where you need a single container's output.
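In Dockerfile terms that looks roughly like the sketch below, assuming a builder stage named build that leaves its artifacts in /usr/src/app/dist:

# ...earlier stages build the artifacts...
FROM scratch AS export
COPY --from=build /usr/src/app/dist /

DOCKER_BUILDKIT=1 docker build --target export --output ./dist . then writes those files to the host.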

@TrentonAdams we have been working on improving filesystem performance for Docker Desktop but it is tricky. The underlying issue is traversing the VM boundary. The file sharing bits have recently been rewritten (you can enable the new experience using the "Use gRPC FUSE for file sharing" toggle in preferences) and this should solve some of the high CPU usage issues that people had been seeing. We have some documentation on performance tuning here and here.

Marandil commented 4 years ago

@chris-crone

@Marandil, it sounds like docker build isn't handling your project structure (i.e., directory structure), which is the issue. You might be able to use something like docker buildx bake (https://github.com/docker/buildx) to solve this use case. Note that buildx is still being worked on, so it isn't super stable yet, but it aims to solve some of what you're hitting.

Thanks, I'll look into docker buildx bake. It looks promising, but I couldn't find any good reference or documentation for it, and the pages on docs.docker.com are rather bare (cf. https://docs.docker.com/engine/reference/commandline/buildx_bake/). So far I've found https://twitter.com/tonistiigi/status/1290379204194758657 referencing a couple of examples (https://github.com/tonistiigi/fsutil/blob/master/docker-bake.hcl, https://github.com/tonistiigi/binfmt/blob/master/docker-bake.hcl), which may be a good starting point, but hardly a good reference.
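Going by those examples, a minimal docker-bake.hcl for the project layout above might look something like this (an untested sketch):

group "default" {
  targets = ["app1-build", "nginx"]
}

target "app1-build" {
  context = "app1/"
  target  = "build"          # the build stage from app1/Dockerfile
  tags    = ["app1-build"]
}

target "nginx" {
  context = "nginx/"
  tags    = ["my-nginx"]
}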

TrentonAdams commented 4 years ago

@TrentonAdams we have been working on improving filesystem performance for Docker Desktop but it is tricky. The underlying issue is traversing the VM boundary. The file sharing bits have recently been rewritten (you can enable the new experience using the "Use gRPC FUSE for file sharing" toggle in preferences) and this should solve some of the high CPU usage issues that people had been seeing. We have some documentation on performance tuning here and here.

@chris-crone Hell yes, thanks so much! There is a 3-4s improvement with the new option, and using "cached" gives me the same performance as running outside of the container, so this is HUGE for me. I'm seeing startup times as low as 2800ms for our app, so it's not 11-18s anymore. YAY! I don't need anything other than cached, because I'm just re-creating the containers every time anyhow.
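For anyone else reading: the cached consistency mode is just a suffix on the bind mount in the compose file; a minimal fragment (the paths are hypothetical):

services:
  app:
    volumes:
      - ./target/app.war:/usr/local/tomcat/webapps/app.war:cached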

TrentonAdams commented 4 years ago

@chris-crone Is there somewhere I should post performance numbers to help with the macOS performance-tuning work? I'm wondering why a freshly started container with a bind mount is slow even when not using cached. There must be some weird thing where it goes back and forth checking whether every file is in sync on startup, even when the container is brand new?

alicederyn commented 4 years ago

Use-case: I run a container and it modifies a file (specifically, Keycloak modifies its configuration file based on environment variables, etc.). I want a copy of that file on my local disk so I can check the outcome of that modification and track my progress over time as I modify the container scripts. Currently, I need to find the new container ID each time so that I can use docker cp.
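A stop-gap that avoids hunting for the ID is to inline the lookup (a sketch; the service name and file path here are illustrative):

docker cp "$(docker-compose ps -q keycloak)":/opt/jboss/keycloak/standalone/configuration/standalone.xml ./standalone.xml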

agreenspan commented 4 years ago

Use-case: developing in Docker. I need to propagate my lock file back to the host machine, or it gets overwritten when the container mounts the project folder.

soulseekah commented 3 years ago

Use case: I need to copy a file containing a secret key. The app that runs inside the container reads that file into memory and deletes it from disk.

ghost commented 3 years ago

Use case: I am running C++ unit tests in a docker container. I want to simply copy the code over to an existing image on each run.

1) Doing this with a separate Dockerfile COPY means the code gets written to a new, unnecessary image, and I need to delete that image to ensure the next run creates a new image with the latest code.

2) Doing this with a docker-compose volumes YAML config means Docker chowns the source code as root:root (totally blocking my IDE from making edits until I chown it back!)

@shin- am I following an anti-pattern by running unit tests in a container? What's the non-anti-pattern way you would solve this?

... I am sticking with option 1, as it is the least pain. But I see docker-compose supporting a copy config as such an awesome enhancement, at least for this workflow!

ChristophorusReyhan commented 3 years ago

@soulseekah Isn't using secrets in compose better for that use case?
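For reference, a file-based secret in a compose file looks like the sketch below (the image and file names are hypothetical); as far as I know, only external secrets need swarm, while file-backed ones are bind mounted by docker-compose itself:

version: "3.7"
services:
  app:
    image: myapp
    secrets:
      - app_key            # appears in the container at /run/secrets/app_key
secrets:
  app_key:
    file: ./app_key.txt    # read from the host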

ChristophorusReyhan commented 3 years ago

I found a workaround that works for me:

  1. Create the Dockerfile with COPY a_filename .
  2. Build the image using the Dockerfile: docker build -t myproject:1.0 .
  3. Edit the docker-compose file to use the image you just built:

version: "3.7"
services:
  app:
    image: myproject:1.0
    ports:
      - 3000:3000
    networks:
      - mynetwork
      - internal
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: not_so_secret_password # don't do this
      # https://diogomonica.com/2017/03/27/why-you-shouldnt-use-env-variables-for-secret-data/
      MYSQL_DB: appdb
    deploy:
      resources:
        limits:
          cpus: '0.75'
          memory: 100M

Not a perfect workaround, but it works in my use case.

soulseekah commented 3 years ago

@soulseekah Isn't using secrets in compose better for that use case?

Unfortunately that requires swarm last time I tried :(

ChristophorusReyhan commented 3 years ago

@soulseekah Isn't using secrets in compose better for that use case?

Unfortunately that requires swarm last time I tried :(

@soulseekah Maybe use the workaround that I posted (just above)?

evbo commented 3 years ago

@ChristophorusReyhan the problem with that workaround is indicated in @zoombinis' comment:

Doing this with a separate Dockerfile COPY means the code gets written to a new, unnecessary image, and I need to delete that image to ensure the next run creates a new image with the latest code.

While a working solution, it can lead to some unwanted maintenance. For instance, to clean up the unwanted image while preserving any images you care about:

docker-compose up && docker-compose down --rmi local

But make sure all the images you care about have a custom tag and the test/dummy image does not.

Kreyren commented 3 years ago

Any update on this?

I currently have a project that would benefit from this being implemented, as I have to use an ugly workaround to get directories from one container to another.

peter-hartmann-emrsn commented 3 years ago

Use case:

  1. Run the backup task within the container: docker-compose exec influxdb influx backup --database weather "/backup/weather.backup"
  2. Get the backup file off the container, e.g.: docker-compose cp influxdb:/backup/weather.backup ./weather.backup
  3. Move the backup to another container, e.g.: docker-compose cp ./weather.backup influxdb:/backup/weather.backup

This would then be part of an admin script that helps move backups between compose deployments; such a script would look very clean. Mainly used during development and evaluation.
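A sketch of such an admin script, using the cp subcommand proposed in the steps above (service name and paths as given there):

#!/bin/sh
# back up the weather database and pull the file out of the container
docker-compose exec influxdb influx backup --database weather "/backup/weather.backup"
docker-compose cp influxdb:/backup/weather.backup ./weather.backup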

hoemich commented 3 years ago

I still wonder why this should be an anti-pattern. To me it is just a solution to a choice between two evils:

  1. Building new images, which clutter your disk, for simple changes to files that should be a "working copy".
  2. Mounting a volume and having a cp as the first command in your entrypoint, to create a working copy at the real target from the mounted volume.

While the first option is totally unusable, as it requires a docker image prune multiple times a day, the second works and is fine, but having it in docker-compose would just #suckless.

Just like the wait-for-it.sh script is fine, but having that possibility built into docker-compose would still be a great improvement in expressiveness.

viafcccy commented 3 years ago

Please add copy! When the container starts, it could copy some files to the host machine!

ndeloof commented 3 years ago

Closing this as implemented in Compose v2

daniel-shuy commented 3 years ago

Yay! Any example of what the syntax looks like?

ndeloof commented 3 years ago

same as docker cp but targeting services:

 docker compose cp SERVICE:SRC_PATH  DEST_PATH

daniel-shuy commented 3 years ago

Nice! Will there also be a way to define this in the Docker Compose file?

robclancy commented 3 years ago

compose officially an anti-pattern enabler, thank god.