vfarcic / docker-flow

Docker Flow: Walkthrough
https://technologyconversations.com/2016/04/18/docker-flow/
MIT License

Support for multiple Docker compose files #7

Open wilkko opened 8 years ago

wilkko commented 8 years ago

Sometimes multiple Docker compose files form a hierarchy to avoid duplication as described here https://docs.docker.com/compose/extends/

For example, we have:

The first two are used when running the application locally. These files contain sensible default configuration (for example, ports or application properties) and no variables to substitute. The last two files serve special purposes in the CD pipeline and either add to, override, or use the configuration from the first files.

The application is deployed to Swarm with the command: `docker-compose -f docker-compose.yml -f docker-compose-deploy.yml up -d`

Locally, you can just run: `docker-compose up -d`
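A minimal sketch of the layering described above (the two file names come from the command; the service name, image, and settings are hypothetical):

```yaml
# docker-compose.yml -- hypothetical base with sensible local defaults
version: '2'
services:
  app:
    image: example/app
    ports:
      - "8080:8080"
```

```yaml
# docker-compose-deploy.yml -- hypothetical CD-pipeline additions;
# these values are merged over the base when both files are passed with -f
version: '2'
services:
  app:
    environment:
      - SPRING_DATASOURCE_URL=${DB_URL}  # hypothetical variable substituted in the pipeline
```

When both files are passed with `-f`, docker-compose merges the deploy file's settings over the base before starting the services.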

vfarcic commented 8 years ago

I had a bad experience using multiple compose files in the docker-compose up command. Instead, in cases when it was appropriate to have multiple compose files, I tend to specify only one as part of the docker-compose up command and include targets from other files inside it (e.g. extends > file).

So, following your example, the application would be deployed to Swarm with:

docker-compose -f docker-compose-deploy.yml up -d

The docker-compose-deploy.yml would extend/modify the services it needs with `extends:` followed by `file:`.
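A sketch of what that could look like (the service name and the override are hypothetical; the `extends`/`file` syntax is from Compose file format v1/v2):

```yaml
# docker-compose-deploy.yml -- hypothetical; reuses the base service
# definition from docker-compose.yml and overrides only what the
# deployment needs
version: '2'
services:
  app:
    extends:
      file: docker-compose.yml
      service: app
    environment:
      - SPRING_PROFILES_ACTIVE=swarm  # hypothetical deploy-only setting
```

Only this one file is then passed to docker-compose, as in the command above.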

Before we start adding this feature, I'd appreciate a bit more info on your use case and the motivation behind using multiple compose files specified with the -f argument instead of using extends inside the compose file. Can you share the compose files you're using?

wilkko commented 8 years ago

The motivation was to avoid duplication. We seem to have some Spring Boot application properties in the command sections. Maybe due to bad design elsewhere, but they are environment-specific (datasource settings, API keys, etc.), so they belong in the compose files.

I'll see if I can clean up the compose files for sharing. I can also see how multiple files add complexity, and we should probably consider moving back to a single file.

vfarcic commented 8 years ago

I also often have multiple compose files. The difference is that I tend to use only one of them when running docker-compose. If that file needs targets from another, I include them through extends.

Please note that I am not against adding support for your use case. I'd just like to understand the use case better. Feel free to replace sensitive data in your compose files with random values before sending them.

wilkko commented 8 years ago

I guess the worst part of the multiple-files-per-docker-compose-up approach is that you cannot always easily see how Compose is going to merge those services.

vfarcic commented 8 years ago

That was one of my problems, besides errors caused by the incorrect order of files specified in docker-compose up. I also had situations where people added or modified a target inside one file without understanding that it would affect the others. Many of those (and other) problems exist with one file per docker-compose up command, but, at least, it is easier to open that file and follow the logic.
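To illustrate the ordering pitfall (file names and the variable are hypothetical): when the same key appears in several files, the file listed later in the `-f` sequence wins, so reversing the order silently changes the result.

```yaml
# base.yml (hypothetical)
version: '2'
services:
  app:
    environment:
      - LOG_LEVEL=debug
```

```yaml
# override.yml (hypothetical)
version: '2'
services:
  app:
    environment:
      - LOG_LEVEL=info
```

`docker-compose -f base.yml -f override.yml config` resolves LOG_LEVEL to info; swapping the two `-f` arguments resolves it back to debug, with no warning from docker-compose.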

wilkko commented 8 years ago

I don't have access to the code during the weekend, so I cannot post those examples yet. I wanted to try out blue-green deployment this week, but the multiple compose file issue prevented me.

Off topic, but I did not find any installation instructions for Docker Flow. Is the recommendation just to wget it to the build slave, make it executable, and fire? Can the binary be used within a container?

vfarcic commented 8 years ago

Docker Flow is a single binary (just like docker-compose). As long as it's executable, there's nothing else to do. I could put it inside an Alpine container, but I'm not sure there is a benefit in doing that.

However, you might benefit from making a container with docker-flow that is specific to your use case. It could, for example, contain environment variables (e.g. the proxy IP), docker-flow.yml, and so on. Besides holding the single binary, its purpose would be to store configuration.

wilkko commented 8 years ago

OK, an additional benefit of managing the binary within a container is that Docker takes care of downloading, upgrading versions, cleaning up, etc., and the build slaves are kept simple. I would also avoid changing our Puppet-managed build slaves until they are migrated to Ansible.

vfarcic commented 8 years ago

I'll make it soon (#8)