tiangolo closed this issue 4 years ago.
I think it adds mental complexity; just trying to grasp what the options and environments are is already difficult.
I agree :smile: Running the Docker environment was relatively easy because there is good documentation, but figuring out how the Docker environment was configured, in order to customize it further, was quite complicated.
FWIW I started with this project template when building the farmOS Aggregator. We ended up consolidating the Docker Compose files similar to what you're describing (we also switched to an NGINX proxy & dropped some containers like celery & pgadmin):
docker-compose.deploy.yml
docker-compose.dev.yml
docker-compose.test.yml
docker-compose.shared.yml
This is working well so far. If you look closely I think they could be cleaned up a bit further (notably the few extra .deploy.*.yml files), but it's OK for now. We are doing some trickery so that the same AGGREGATOR_ENV variable can configure both the backend and the frontend; modifying the Docker Compose structure made this much easier to accomplish.
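As a sketch of how such a consolidated layout gets invoked, one environment-specific file is layered on top of the shared file via repeated -f flags (file names follow the list above; the ENV variable and the command shown are illustrative, not the Aggregator's actual scripts):

```shell
# Hypothetical: build the docker-compose invocation for one environment
# by stacking an env-specific file on top of the shared one.
ENV=dev
COMPOSE_ARGS="-f docker-compose.shared.yml -f docker-compose.${ENV}.yml"
echo "docker-compose ${COMPOSE_ARGS} up -d"
```

Later -f files override matching keys from earlier ones, which is what makes the shared/base file pattern work.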
Hi @tiangolo ,
I can first confirm that there is a mental hurdle to clear before the trick of the composition sinks in. I have two juniors on my team, and they still haven't fully mastered all its threads, mostly because of the complexity of the Docker Compose world itself: think of all the layers you have to go through, from the orchestrator to the proxy to the backend, plus the scripts to build / push / deploy appropriately given environment variables.
Hence my first piece of feedback: it is not necessarily the number of docker-compose files that is the issue. My team actually feels they have a better grasp of the inner mechanisms with the current split layout (when they must dive into it).
In order to simplify things (and help them), my choice has been to use a Makefile, which creates (yet another) abstraction layer, but one that exposes the commands they need for the release process: make pull up logs, make push-dev push-qa ..., make deploy-dev deploy-qa ..., make create-release. At the end of the day, that is why we are using the composition: to serve the purpose of an efficient and robust CI/CD. (Though I acknowledge it has not really helped with understanding what goes on under the hood.)
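A minimal sketch of such a Makefile (the targets, file names, and flags here are illustrative guesses, not the team's actual Makefile):

```makefile
# Hypothetical Makefile wrapper around docker-compose.
# ENV selects which compose and .env files are used; all names are
# invented for illustration.
ENV ?= dev
COMPOSE = docker-compose \
	-f docker-compose.shared.yml \
	-f docker-compose.$(ENV).yml \
	--env-file .env-$(ENV)

pull:
	$(COMPOSE) pull

up:
	$(COMPOSE) up -d

logs:
	$(COMPOSE) logs -f

# Pattern rule: "make deploy-qa" re-invokes make with ENV=qa.
deploy-%:
	$(MAKE) ENV=$* pull up
```

The pattern rule is what makes commands like make deploy-dev deploy-qa possible without duplicating targets per environment (note that --env-file requires a reasonably recent docker-compose).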
The Makefile takes care of picking up the appropriate docker-compose files. We therefore have .env-dev, .env-qa, and their companions, which are the only files we need to maintain / adapt. I personally like it this way, but there are even more files than before :smile: , probably because of the Makefile and .env files. We could have a Python configuration along with Python commands that would take care of this.
I do not have a solution right now, but here is an idea for brainstorming: use some tooling other than docker-compose config and bash scripts to arrange / wrap the configurations. We use Click in our CLI projects, for instance. The guiding principle would be to keep the advantage of deduplication while providing smoother control over the configurations.
@tiangolo
For me, the deciding factor is that a range of people come to this project and some might be intimidated by the complexity. Anything that simplifies onboarding and makes the project more intuitive, without making things less useful, allows people to use these resources to do amazing things.
At first, I liked the idea of keeping the compose files separate for the same reasons you mentioned, but onboarding to the project that way was definitely difficult, even with all the great documentation and support here.
I ended up bringing the compose files together by environment and pipeline stage to get everything up and running the way I needed:
docker-compose.local.test.yml
docker-compose.local.build.yml
docker-compose.local.deploy.yml
docker-compose.remote.test.yml
docker-compose.remote.build.yml
docker-compose.remote.deploy.yml
I could add a set of shared files too, but managing the duplication hasn't been a huge headache. That said, I want to manage frontend vs. backend deployment separately and will need to split the files again. Who knows where the splitting ends, but it's definitely a lot easier to break the files out into separate ones than to put them all together.
I also wanted to comment on @ebreton's suggestions, because there are some excellent ideas there. I think a Makefile or a CLI that leverages the flexibility of separate or combined compose files would be amazing. This seems like a great place to field-test something like Click, or better yet, @tiangolo's project Typer.
A template file could auto-fill with the cookiecutter fields and allow people to run something like my-app deploy local or my-app deploy staging --frontend right out of the gate. People could work with a set of starter commands or configure their own custom commands and options to organize builds across environments. That would also put people really close to making a CLI for their own app's functionality.
Maybe a Makefile is the easiest way to do that for now, but a CLI or Makefile template, as part of this project or as a demo for Typer, would be a useful opportunity regardless of the docker-compose file situation.
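Sketching how commands like my-app deploy staging --frontend could resolve to compose files (all file names are hypothetical, following the per-environment/per-stage naming above; a real CLI would wrap this in Typer or Click):

```python
# Hypothetical resolver behind a command such as
# "my-app deploy staging --frontend": maps an environment plus an
# optional component to the compose files to stack.

def compose_files_for(env, component=None):
    """Return the compose files for deploying `env`, optionally
    restricted to a single component such as "frontend"."""
    files = [f"docker-compose.{env}.deploy.yml"]
    if component is not None:
        # Component-specific overrides layer on top of the env file.
        files.append(f"docker-compose.{env}.deploy.{component}.yml")
    return files
```

Keeping the resolution logic in one small function like this is what would let starter commands and custom commands share the same file-selection rules.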
Hi @tiangolo! Great job with FastAPI, thanks for that! I've just bootstrapped a project from this Cookiecutter and can't agree more that there are too many docker-compose files. How about using YAML anchors to avoid some of the duplication?
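For reference, YAML anchors and merge keys let several services share one block of settings within a single file. A generic sketch (service names and values are illustrative, not the generator's actual config):

```yaml
# Generic sketch of YAML anchors in a compose file: the x- prefix
# marks an extension field that Compose ignores, and <<: merges the
# anchored mapping into each service.
x-app-defaults: &app-defaults
  restart: unless-stopped
  env_file: .env

services:
  backend:
    <<: *app-defaults
    image: myapp-backend
  frontend:
    <<: *app-defaults
    image: myapp-frontend
```

This deduplicates within one file, whereas multiple -f files deduplicate across environments; the two techniques can be combined.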
Thanks for the feedback everyone! :cake:
Very interesting points here :nerd_face: :coffee:
I think I found a nice balance between simplicity, deduplication, flexibility, etc.
There are now just 2 files:
docker-compose.yml, with all the main configs, used by default by Docker Compose. It's the base of everything.
docker-compose.override.yml, also used by default by Docker Compose, to add override settings to docker-compose.yml for local development.
The combination of these 2 files is the "default"/"standard" setup in Docker Compose, so there's no need for extra tricks to define which Docker Compose files to use, or for a separator in the .env. It should also be more familiar to developers already working with Docker Compose.
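In practice that means a plain docker-compose up merges the two files automatically. A simplified sketch of the pair (contents are illustrative, not the generator's actual files):

```yaml
# docker-compose.yml (base, simplified sketch): shared by deployment,
# local development, and tests.
services:
  backend:
    image: myapp-backend
    env_file: .env

# docker-compose.override.yml (picked up automatically by
# "docker-compose up" for local development): mounts the source code
# and switches to an auto-reload command.
services:
  backend:
    volumes:
      - ./backend/app:/app
    command: /start-reload.sh
```

Deployment tooling that passes only -f docker-compose.yml skips the override file, so the same base serves every case.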
All the environment files are now consolidated into a single .env file, which is used internally by docker-compose.yml to inject those configs.
Some configs also read extra overrides or defaults from environment variables. Notably, the traefik-public external network used for deployment is overridden by docker-compose.override.yml for local development and by the tests, to make it non-external. That allows using the same docker-compose.yml as the base for everything, including deployment and local development, without having to add extra Docker Compose files to enable or disable depending on the case.
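One way that override can look (a hedged sketch; the generator's actual mechanism may differ, e.g. it could use an environment variable with a default instead):

```yaml
# docker-compose.yml (base): in deployment the network already exists,
# created by the Traefik proxy stack.
networks:
  traefik-public:
    external: true

# docker-compose.override.yml: for local development and tests,
# redefine it as an ordinary network that Compose creates itself.
networks:
  traefik-public:
    external: false
```

Because later files win during merging, the override flips the single setting without touching anything else in the base file.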
The tests are now run by the same backend container, so the backend-tests container, Dockerfile, etc. are no longer needed :fire: This is the first step before using the TestingClient and adding coverage to the backend :rocket:
The generated README is also updated according to all the changes, including the setup with Poetry, etc.
Of course, the idea is just to keep it as simple and understandable as possible, so that it's easy to get started with, even for non-experts, but it should be easy to customize and add all the extra tools on top that each project might need.
There's a new CONTRIBUTING.md guide explaining the utility scripts for contributing to this project generator itself :memo: :nerd_face:
I finally found a way to simplify the local development of the project generator itself, so it should be easier to iterate on it.
As a side note, I'll stop trying to keep the Full Stack FastAPI Couchbase project in sync and updated (more details on that project), which will allow me to focus the little time I have available on this generator for now. :tada:
Waaaoh. I didn't expect such an impact from the docker-compose optimisation. Dropping the backend-tests container is great news. I can't wait to use the coverage tests again :smile:
Thanks again @tiangolo, you do a tremendous job maintaining FastAPI and all its siblings. Bravo!
Thanks @ebreton ! :rocket: :smile:
I think my original concern is solved now, and there was no need for any duplication in the end :tada:
So I'm gonna close this issue now. :coffee:
Currently, there are several (I think too many) Docker Compose files.
I separated them that way to avoid any possible duplication of configs across environments (e.g. local development, testing, deployment to staging, deployment to production).
By reducing the config duplication that way, it's possible to update a setting in a single file and have it apply everywhere else.
That avoided having to keep all the configs in sync by hand, and it avoided spending time debugging "why it works in this environment but not in the other" just because one config was set in one file but not in the others.
But there are a lot of files.
I'm now thinking that there are just too many Docker Compose files.
I think it adds mental complexity; just trying to grasp what the options and environments are is already difficult.
It also means there's no single file one can look at to get a general idea of the whole stack, as there would be if each environment had its own single Docker Compose file with all the configurations.
Right now I'm considering merging all those Docker Compose files into a few, something like 3 or 4, at the expense of having duplication of configs in several of them and having to keep those configs in sync by hand.
What do you think?
@ebreton , @dmontagu , anyone else that sees this?