makeusabrew / decking

A Docker helper to create, manage and run clusters of containers
http://decking.io

Specify custom Tags #53

Closed robodude666 closed 9 years ago

robodude666 commented 10 years ago

Currently it seems all images are built with the tag "latest." It would be nice to be able to specify tags within the decking.json file, and even as command line overrides.

For example, being able to tag a "dev" image separately from a "test" build without having to name the image differently. This would produce:

my-image:dev
my-image:test

vs

my-image-dev:latest
my-image-test:latest

Perhaps decking could even just derive the tag from the cluster/group name?

The ability to override tags via the command line could be used in a CI environment to name the tag after the SHA-1 hash of a git commit or a branch name, for example.
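As a sketch of that CI override (the image name, the fallback tag, and the build command are illustrative assumptions; the docker invocation is only echoed here, not run):

```shell
# Derive a tag from the current commit; fall back to "dev" outside a git repo.
sha="$(git rev-parse --short HEAD 2>/dev/null || echo dev)"
tag="my-image:${sha}"

# The eventual build command a CI script might run (echoed for illustration).
echo "docker build -t ${tag} ."
```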

makeusabrew commented 10 years ago

Needs a bit of thought this one. Agreed that passing in a tag name via the CLI is definitely doable, since it's explicit and won't cause any surprises.

However, the automagical tag name is more complex; the building of images is entirely unrelated to cluster creation & management so there's no sane way to automatically tag image builds.

We could definitely support explicitly building tags within the decking.json manifest though; in fact I suspect we already do, since from memory (which is hazy, admittedly) the image name is passed verbatim as the -t parameter when decking build invokes docker build. Thus an image key of "my-image:foo" should build the foo tag correctly.
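If that's right, a manifest entry along these lines ought to work (the key/path layout here is an assumption, based on the usual images section mapping an image name to its build directory):

```json
{
    "images": {
        "my-image:foo": "./docker/my-image"
    }
}
```

decking build would then pass "my-image:foo" straight through to docker, producing the foo tag rather than latest.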

@robodude666; interested in the per-group / cluster tag motivation. What's your use case out of interest?

robodude666 commented 10 years ago

Unless I'm blatantly misunderstanding how decking works, you'd be able to automatically tag each build based on the cluster it was built from - in the docs' example, dev, test, etc. Combined with #52, you'd be able to automatically tag and push a successful test build in your CI environment.

If you have a CI pipeline, you could then grab the test image, push it to staging, tag it as staging, and so on through to production.

I know the above pipeline isn't directly supported by decking, but you could technically keep a separate decking.json file for each pipeline phase.

makeusabrew commented 10 years ago

I guess, a bit like my concerns with #52, this oversteps the mark of what I'd like decking to do; it's a use case which will vary too much from user to user and thus is best served as part of their own CI workflow, in my opinion.

The other issue remains: you can't couple images with groups, because the relationship doesn't work that way. Groups know about containers, which know about images - but images don't know anything about containers. Put another way: images are abstract, containers are concrete. Thus, conceptually (to my mind at least), the desire to "automatically tag each build based on the cluster it was built from" doesn't make sense - it suggests that images are defined and built as part of cluster definitions, which they aren't.

Conscious I might be missing something or we might be talking at cross purposes. CCing @stephenmelrose in to see if he has any thoughts (appreciate he's away at the moment!).

stephenmelrose commented 10 years ago

I agree with @makeusabrew. Although containers are built from images, they are not explicitly linked, as multiple containers can be created from a single image.

If anything, I would argue tagging is a process that should be done at image build time only via an argument passed to decking build, e.g. decking build --tag dev.

makeusabrew commented 10 years ago

Yeah, that I'm up for.

robodude666 commented 10 years ago

It doesn't make sense to me to have everything automatically tagged as latest when an image is being built with one group's overrides (dev) rather than another's (staging).

I should be able to do decking build all and see docker images show:

myapp:dev
myapp:staging
myapp:prod
myworker:dev
myworker:staging
myworker:prod

Frankly, I haven't experimented with multiple groups yet, so I'm not sure how decking currently handles decking build all when multiple groups are defined. I just know that with one group called dev, docker images showed myapp:latest, which simply feels wrong to me since I used dev overrides for environment variables.

stephenmelrose commented 10 years ago

The groups have nothing to do with images. Groups apply overrides to containers. Containers are created from images. No matter how many containers or group overrides you have specified in your decking.json, the images will be built the same. You may choose to use a different image for a group, e.g. a different image for dev, but ultimately that image will be built the same regardless.

So the tags in your example are pretty much redundant, as dev, staging, and prod would all be exactly the same image for myapp. What differs per environment/container is how you use that image.

I hope this helps clear things up.

makeusabrew commented 10 years ago

This is perhaps something which needs clarifying in the docs, but as @stephenmelrose points out, images and clusters / groups / containers are totally unrelated.

decking build all builds all images. Images know nothing about how they're going to be containerized. An image may be used zero, one, or multiple times in any given cluster, but there are no strict requirements here. Similarly, an image not declared in the "images" list can quite happily be used in a cluster. For example, I use numerous public images across my clusters, such as "tutum/rabbitmq" and the official "redis" image, in tandem with my own locally built images.

decking create <cluster> is what creates actual concrete implementations (containers) of these abstract blueprints (images). It is perhaps these you'd like to snapshot, which you can do (again as part of a wider workflow) by using docker commit on your actual containers.
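A snapshot step along those lines might look like this (the container and repository names are hypothetical, and the docker commands are echoed for illustration rather than executed):

```shell
# Name of a concrete container created by `decking create` (hypothetical).
container="myapp_dev_1"
# Tag to record the tested state under (hypothetical).
snapshot="myapp:tested"

echo "docker commit ${container} ${snapshot}"
echo "docker push ${snapshot}"
```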

I think, if anything, implicitly trying to build a different tag for every cluster would be more confusing; most use cases will simply want to share the same basic blueprint. Take redis as an example - even if I built my own redis image locally, why would I want a different tag for each of my clusters? They'd all point to exactly the same image anyway, since any variation would be introduced (if at all) at the cluster/group level.

The other thing to bear in mind is that although a common use case for groups and clusters is per-environment overrides, this isn't a pattern decking imposes on users. How you define clusters, how you share containers between them, and how you override any group-level configuration is totally up to you. In those cases, again, building tagged images doesn't make sense, since the cluster name could be "webapp" and you'd end up building images tagged as redis:webapp etc.

Hope this helps clarify things a bit. I think there is something here (and again I'm happy to be able to specify a tag when building images via the CLI), but we might just be talking at slight cross purposes.

makeusabrew commented 9 years ago

Fixed in 0.4.0 with the --tag parameter :)
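For anyone landing here later, usage is along these lines (the image names and tag values are illustrative; check decking --help on your version for the exact syntax):

```shell
# Build images with an explicit tag instead of the default "latest"
decking build all --tag dev

# In CI, tag the build after the current commit
decking build all --tag "$(git rev-parse --short HEAD)"
```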