peebles opened this issue 7 years ago
Can you provide an example for this?
I am running the docker daemon (version 17.03.0-ce) on AWS, with docker client 1.9.1 on my laptop. I am using docker-compose 1.5.1 with the "version 1" syntax, and rancher/convoy as a volume plugin, which creates persistent volumes backed by AWS EBS.
In my docker-compose, I have something like:
volume_driver: convoy
volumes:
  - data:/opt/graphite/storage
At build time, that volume does not exist. Only on "up -d" is the volume created and mounted on the server. In addition, four other EBS volumes are created, one for each directory listed in the image's VOLUME directive, and are mounted into the container. These have random-looking strings as volume names. They are mounted over the data that is already there, so that /opt/graphite/conf and /etc/nginx, for example, are empty.
And now that I think about it, even though it appears to work in "ordinary" circumstances, I am not sure what it even means to do something like:
COPY nginx.conf /etc/nginx/nginx.conf
VOLUME ["/etc/nginx"]
Wouldn't the image have a file in /etc/nginx? What does it mean then at runtime when a consumer decides to do something like "volumes_from"? Wouldn't the content of /etc/nginx always get overlaid with something else?
When using a docker volume plugin, the volumes are created at run time and mounted over the directories listed in VOLUME, so none of the installed files are there anymore!
All I did to fix it was remove the VOLUME directive from the Dockerfile.
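For anyone hitting the same thing, here is a minimal sketch of that fix (the file names and paths are from my setup; adjust to yours). In the Dockerfile, drop the VOLUME line so the files baked into the image stay visible at runtime:

COPY nginx.conf /etc/nginx/nginx.conf
# VOLUME ["/etc/nginx"]  <- removed; a plugin-backed volume mounted here would shadow the copied file

Then declare only the data you actually want persisted in docker-compose (version 1 syntax), so the single named volume is the only thing the plugin creates:

volume_driver: convoy
volumes:
  - data:/opt/graphite/storage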