Open dustymabe opened 8 years ago
I have some thoughts on the 3rd scenario. We sort of talked about this yesterday. Hopefully I'm not bike-shedding.
I agree with the specification of the 3rd scenario as you stated it. I would say that the following should also result in an ansible build:

webui:
  image: mywebui:latest
  build:
    from: centos:7

If we see a build directive that includes a from, then the default strategy will be ansible, and the playbook will default to main.yml.
We also talked about adding a build configuration section to container.yml. It would make the build command a bit more convenient and a project more sharable. If I want to share my project with someone, how will they construct the build and run commands? I can document them, of course, but having a config section that comes with reasonable defaults I can tweak would be more convenient.
I propose we add the following top-level section for the ansible strategy:

ansible_build:
  playbook:     # playbook override
  image:        # custom build image
  volumes:      # list any volumes
  command:      # command override
  working_dir:  # working dir override
  environment:  # list any environment vars
The properties in this section will match the ansible-container service properties we use today, and anything found in this section will override the defaults we have coded into the current compose template for the build command.
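As a rough illustration of the proposal, here is a hypothetical filled-in ansible_build section; every value below is an example I've made up for this sketch, not a default from the project:

```yaml
# Hypothetical filled-in version of the proposed top-level section.
# All values are illustrative, not actual ansible-container defaults.
ansible_build:
  playbook: build.yml            # use build.yml instead of main.yml
  image: my-custom-builder:1.0   # build inside a custom image
  volumes:
    - .:/src                     # mount the project into the build container
  command: ansible-playbook build.yml
  working_dir: /src
  environment:
    - DEBUG=1
```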
I agree with the specification of the 3rd scenario as you stated it. I would say that the following should also result in an ansible build:
webui:
  image: mywebui:latest
  build:
    from: centos:7
If we see a build directive that includes a from, then the default strategy will be ansible, and the playbook will default to main.yml.
The problem with this is that it doesn't consider any other possible build strategies. What if a "chef" build strategy is developed that someone wants to use? My personal preference is that we be more explicit here. The only reason we wouldn't require "strategy" to be provided for dockerfile is that we want to maintain compatibility for people bringing a docker-compose file from another project and having it just work.
Well, this is Ansible Container after all, so I don't think it's wrong to assume a default build strahteegery of Ansible.
Well, this is Ansible Container after all, so I don't think it's wrong to assume a default build strahteegery of Ansible.
Absolutely. Not wrong, but it could be nice to avoid causing problems down the line. I'm just trying to highlight that; we can choose to make it the default if that is what we prefer.
I'm always happy to reconsider the container.yml format, and I think you raise some really good concerns and opportunities for how we can improve it. However, I'm also very strongly guided by the desire to keep things as simple as possible, and as such I've got some issues with the direction you're suggesting.
1) In your specification, the image directive has an ambiguous meaning. In the present spec, image means "start from here". With what you've suggested, in different circumstances, it either keeps the current meaning or means "save the image here." I dislike that ambiguity.
2) I'm very -1 on playbook name override, and I've noted as much on #207 and #216. I want to encourage the best practice of keeping your Ansible playbook and container.yml in VCS alongside your application's code. I don't want to encourage storing them separately.
3) I'm entirely in favor of being able to override the default naming and tagging of built images, as being tracked in #125. I'd be fine with there being a per-service container.yml directive that contained an overridden name. I'm not sure the best way to override the version tagging, and I'm very open to suggestions.
As for the other suggestions, they can presently be accommodated through many other means besides altering the schema. Environment variables during build can be specified in the main.yml, in command-line arguments to ansible-container build, in include files, etc. You can build from a Dockerfile or an Ansible Playbook or both without having to explicitly state what "build strategy" you're using. I like the simplicity of a single playbook being used during the build process, versus multiple, and if you're finding yourself wanting to use multiple playbooks, you should probably try reorganizing that code into roles.
Happy to continue this discussion. Thanks, Dusty!
1) In your specification, the image directive has an ambiguous meaning. In the present spec, image means "start from here".
only if you are doing a build using "ansible". What does it mean in the case where you aren't doing a build at all, or the case where you are using a Dockerfile and context dir?
With what you've suggested, in different circumstances, it means the current meaning or it means "save the image here." I dislike that ambiguity.
I don't think making the top level image attribute represent "start from here" for a build is very intuitive for the end user. They are used to that being the image that is used during run time, not build time. Re-using it as the equivalent of a "FROM" line in a Dockerfile is confusing.
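The two readings can be put side by side. These snippets are illustrative only (the service and image names are made up), but they show why reusing the same key is confusing:

```yaml
# Reading 1 (ansible build): `image` acts like a Dockerfile FROM line
webui:
  image: centos:7        # "start building from centos:7"

# Reading 2 (plain docker-compose): `image` names the image to run or tag
webui:
  image: mywebui:latest  # "run this image" / "save the build as this"
```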
2) I'm very -1 on playbook name override, and I've noted as much on #207 and #216. I want to encourage the best practice of keeping your Ansible playbook and container.yml in VCS alongside your application's code. I don't want to encourage storing them separately.
I don't really think allowing the playbook to be specified differently would necessarily mean that people don't store it in VCS. Either way, it would be nice to give them the flexibility to store things in the files they want to, i.e. with this approach they can name different playbooks for different containers within the application.
3) I'm entirely in favor of being able to override the default naming and tagging of built images, as being tracked in #125. I'd be fine with there being a per-service container.yml directive that contained an overridden name. I'm not sure the best way to override the version tagging, and I'm very open to suggestions.
Any example container.yml contents you could share to illustrate your point?
As for the other suggestions, they can presently be accommodated through many other means besides altering the schema. Environment variables during build can be specified in the main.yml, in command-line arguments to ansible-container build, in include files, etc. You can build from a Dockerfile or an Ansible Playbook or both without having to explicitly state what "build strategy" you're using. I like the simplicity of a single playbook being used during the build process, versus multiple, and if you're finding yourself wanting to use multiple playbooks, you should probably try reorganizing that code into roles.
I see. Yeah, I was discussing this with Chris and we both liked not having to either make the CLI more complicated with a bunch of arguments or use env vars for everything. I guess this is a matter of opinion.
Hey @j00bar,
1) In your specification, the image directive has an ambiguous meaning. In the present spec, image means "start from here". With what you've suggested, in different circumstances, it means the current meaning or it means "save the image here." I dislike that ambiguity.
This is how the build directive currently works with Docker Compose v2, so if we're to be compatible with that, it should work the same way.
2) I'm very -1 on playbook name override, and I've noted as much on #207 and #216. I want to encourage the best practice of keeping your Ansible playbook and container.yml in VCS alongside your application's code. I don't want to encourage storing them separately.
+1 on keeping the playbook in the VCS, but maybe I have a directory with multiple playbooks and I just want to use one of those; making a directory structure like ansible/main.yml mandatory is not very convenient, since I'd have to copy and rename each one. This is also a much-requested feature in #161 and #76.
3) I'm entirely in favor of being able to override the default naming and tagging of built images, as being tracked in #125. I'd be fine with there being a per-service container.yml directive that contained an overridden name. I'm not sure the best way to override the version tagging, and I'm very open to suggestions.
Well, the same: if we're going to have to support Compose v2, image and build come for free.
You can build from a Dockerfile or an Ansible Playbook or both without having to explicitly state what "build strategy" you're using.
Yep, makes sense at the moment since we have only 2 build strategies, but if we're planning to support alternate container runtimes (acbuild - #228) or some other way of building container images, we need something like a build strategy in the container.yml with intelligent defaults.
Thoughts?
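For concreteness, a strategy key as suggested above might look roughly like this. This is a hypothetical sketch based on the discussion, not an implemented schema; the strategy names besides dockerfile and ansible are speculative:

```yaml
# Hypothetical per-service build strategy with an intelligent default:
# omitted strategy would be inferred (dockerfile if a context path is given).
webui:
  image: mywebui:latest
  build:
    strategy: ansible   # could one day be dockerfile, acbuild, chef, ...
    from: centos:7
```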
@dustymabe @containscafeine @chouseknecht Thank you for the rich discussion and brainstorming here. This has been tremendously helpful and I hope that continues. Here's what I'm sold on and what I'm resistant to...
1) I'm now +1 on the build: key.
2) I'm now +1 on the from: key inside of build:.
3) I'm now +1 on the image: key inside of build: as the image name to save as. The version tag can be specified as a variable and processed using Jinja2 syntax if it needs to be dynamic.
4) I'm -1 on dockerfile:. I'm perfectly fine with it being a happy accident that Dockerfile can be used with the Ansible-Container docker engine implementation. I think it's counterproductive to actively code around either supporting Dockerfile or disallowing it. Additionally, Ansible Container was never meant to be Docker specific.
5) I'm -1 on strategy:. This is why Ansible Container has the --engine argument, to allow for different possible engine implementations.
6) I'm -1 on the context: and args: keys from the Docker Compose v2 spec inside of build:.
Additional notes on the above:
1) Absent the build: key but present an image: key, the image: key will be treated as build: from:, as is presently supported.
2) The presence of both image: and build: from: will throw an error.
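Putting the agreed-upon points together, a service definition under this scheme might look like the following. The key names come from the thread; the service and image values are invented for illustration:

```yaml
# Sketch of the agreed-upon keys; values are hypothetical examples.
webui:
  build:
    from: centos:7          # base image to build from
    image: mywebui:latest   # name to save the built image as
db:
  image: postgres           # no build: key, so treated as build: from:
# Invalid per note 2 above: top-level image: together with build: from:
# would throw an error.
```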
Based on this: https://github.com/ansible/ansible-container/issues/143#issuecomment-246092730 I'm intrigued by the idea of multiple playbooks as multiple layers, wanting to run later playbooks while keeping the results from earlier ones. I'm still against supporting running playbooks not kept in the project's VCS. I still wish to support roles as layers, but this use case also strikes me as reasonable. However, there's a UX/design detail I haven't come up with a good idea for yet. Let's say...
1) I run ansible-container build playbook1.yml playbook2.yml playbook3.yml, which results in an image with 3 new layers.
2) I want to re-run only playbook3.yml.
How would ansible-container build playbook3.yml know what hash resulted from the commit after playbook2.yml from before?
Ideas welcome. Thanks!
I'm +1 on building a schema validator for container.yml. And just like Docker Compose, if it encounters a directive or attribute that is not whitelisted, it should throw an exception. There shouldn't be ambiguity around what's supported and what's not.
Hey @j00bar. We were hoping to reach a wider audience by having ansible-container support different build strategies, but we understand wanting to focus on and meet the needs of the ansible community.
Thanks for your reply.
Any updates?
BTW: using a Dockerfile as an alternative is a really good idea, since otherwise you will have vendor lock-in.
https://github.com/ansible/ansible-container/issues/906#issuecomment-377247341
Indeed, I'm using a Dockerfile to build my own custom conductors, as the official documentation says here:
https://docs.ansible.com/ansible-container/conductor.html#baking-your-own-conductor-base
There are a few issues I'd like to discuss.
I'd like to introduce the concept of a "build strategy". Currently there would exist two build strategies: dockerfile and ansible. The default build strategy would be dockerfile if a build context path was given.

Here are 3 scenarios:
1st

For the db service in the 1st scenario, no build would occur and the postgres image would be used only at runtime (i.e. no image needs to be created).

2nd

For the api service in the 2nd scenario, we are using the dockerfile build strategy because we passed in the path to a directory where the Dockerfile lives. The build would execute and the resulting container image would be tagged with myapi:latest. A user could optionally specify strategy: dockerfile here if they wanted, but it is implied so the software should take care of setting that. NOTE: this case is completely compatible with docker-compose.

3rd
For the webui service we set a build strategy of ansible and specify that we want to use the centos:7 image as the starting image. We also state that we want to run the playbook in ./mywebui.yml against the container before saving it off under the mywebui:latest tag.

Thoughts?
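The original scenario snippets are not preserved in this copy of the thread, but the 3rd scenario as described above could be sketched roughly like this. The field names are reconstructed from the prose, so treat them as illustrative rather than the original proposal verbatim:

```yaml
# Reconstructed sketch of the 3rd scenario (field names inferred from the text)
webui:
  image: mywebui:latest       # tag for the resulting image
  build:
    strategy: ansible         # explicit ansible build strategy
    from: centos:7            # starting image
    playbook: ./mywebui.yml   # playbook run against the container before commit
```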