CloudBrewery / docrane

Docker container manager that relies on etcd to provide relevant configuration details. It watches for changes in configuration and automatically stops, removes, recreates, and starts your Docker containers.
MIT License

Allow target host configuration to be set per container #6

Open thurloat opened 9 years ago

thurloat commented 9 years ago

For docrane running across multiple Docker hosts, each host has no way of knowing whether it should run a given container or leave it to another host. You would end up with every host running all containers.

I propose adding another configuration to store in etcd per container configuration that specifies a hostname on which it should be run. This way, docrane can ignore containers it does not need to run.

If the hostname is not set, we should continue to see the same behaviour: it will run anything it finds, and assume it's the only process running.
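The filtering rule proposed above could be sketched like this (the `hostname` key name and the helper function are illustrative, not docrane's actual API):

```python
import socket

def should_run(container_config, local_hostname=None):
    """Decide whether this docrane instance should manage a container.

    `container_config` is the dict of etcd keys for one container;
    `hostname` is the proposed per-container target-host key.
    """
    local_hostname = local_hostname or socket.gethostname()
    target = container_config.get('hostname')
    # No hostname key set: keep the current behaviour and run everything.
    if target is None:
        return True
    return target == local_hostname
```

With this, an unset key preserves today's behaviour, and a set key makes every other host skip the container.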

Ideas?

swat30 commented 9 years ago

I like the idea, it would definitely be useful for distributed etcd clusters.

What about an etcd key that contains a list, i.e. ['host1', 'host2', ..., 'hostX']?

swat30 commented 9 years ago

Or even combine the idea of running multiple containers of one "type" on a single host and where those should run. Could use a dict: {'host1': 3, 'host2': 1, ..., 'hostX': Y}

thurloat commented 9 years ago

I like the second idea, it's more explicit in definition about how many you want to run -- and where.

Another completely separate direction could be to create namespaces in etcd for each node you want to run on, and maintain separate configurations.

What if you're running multiple containers on a single host, and each binds a local port -- how is that defined? I can see a few different ways.

Should there be a notification scheme? a.k.a docrane publishes a list of running port/containers to etcd in a different namespace so something like HAProxy can automate it? I could see this being useful for our API servers.

swat30 commented 9 years ago

Ports are generally configured in a dict, key being the host port and value being the container's port. This is passed directly to docker-py for parsing.

I think we definitely need to take your last point into consideration. A lot of people use etcd to pull data into a LB, so it would be important that we keep that in mind. That being said, is it within the scope of docrane to provide those details directly, or up to the user / a separate project? I'm not leaning one way or another, just throwing that out there :).

What about a bit of a different port configuration schema, with a string value for the container port, and a list which is used, in order, to set up host listeners? So, if you have X containers slotted for host A and X-2 containers slotted for host B, you need a list of len(X) ports. X will always be the maximum number of containers to be spun up on any of the hosts.
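The sizing rule in that schema is simple to state in code: the host-port list must be at least as long as the largest replica count slotted for any single host. A minimal sketch (function name is illustrative):

```python
def required_host_ports(hosts):
    """Minimum length of the host-port list for the proposed schema:
    the largest replica count assigned to any single host."""
    return max(hosts.values(), default=0)
```

For example, `{'hostA': 4, 'hostB': 2}` needs a list of four host ports, since host A consumes the first four entries.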

swat30 commented 9 years ago

Here's what I'm thinking for structure:

thurloat commented 9 years ago

The data structure feels complex / overly private / internal. First suggestion is to remove the _ so you feel comfortable using it as a public API.

Is it required to have the hosts registered in etcd? It seems unnecessary for individual docrane clients to be registered globally.

I'm not sure I follow _multi_container_ports and _multi_host_ports -- aren't the container ports configured already? Can we re-use the ports data structure to do this for us, and accept multiple formats?

/docrane/container_name/ports "{'1111': 1111}" => default behaviour
/docrane/container_name/ports "{'1111': [1111, 1112, 8888]}" => list means consume as above
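Accepting both formats could look roughly like this. It assumes the list-value reading from the schema above (key is the container port, the list supplies host ports in order, one per replica); the function name and the `count` parameter are hypothetical:

```python
import ast

def parse_ports(raw, count=1):
    """Normalize the two proposed `ports` formats into a list of
    (host_port, container_port) pairs, one list per local replica.

    raw:   the etcd value, a stringified dict such as
           "{'1111': 1111}" or "{'1111': [1111, 1112, 8888]}"
    count: how many replicas this host runs.
    """
    ports = ast.literal_eval(raw)
    bindings = []
    for i in range(count):
        pairs = []
        for container_port, host_ports in ports.items():
            if isinstance(host_ports, list):
                # List form: consume host ports in order, one per replica.
                pairs.append((host_ports[i], int(container_port)))
            else:
                # Scalar form: default behaviour, single mapping.
                pairs.append((host_ports, int(container_port)))
            bindings.append(pairs) if False else None
        bindings.append(pairs)
    return bindings
```

A scalar value keeps the default one-to-one mapping; a list hands out one host port per replica, so host A's third container binds the third port in the list.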

swat30 commented 9 years ago

Cool, I like this.

I was contemplating it a bit last night.. I'm thinking that we should introduce a ContainerGroup object that manages these, since management will be slightly different than one-off containers.
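A ContainerGroup along those lines might look like this. Everything here is a sketch of the idea, not docrane's implementation; the class shape, the `hosts` key, and replica naming are all assumptions:

```python
class ContainerGroup:
    """One object per etcd record root, managing N replicas of a
    container on the local host."""

    def __init__(self, name, config, local_hostname):
        self.name = name
        self.config = config
        self.local_hostname = local_hostname
        self.containers = []  # handles for currently running replicas

    def desired_count(self):
        hosts = self.config.get('hosts', {})
        # No hosts key: behave like a one-off single container.
        if not hosts:
            return 1
        return hosts.get(self.local_hostname, 0)

    def reconcile(self):
        """Start or stop replicas until the running count matches."""
        want = self.desired_count()
        have = len(self.containers)
        if have < want:
            for i in range(have, want):
                self.containers.append('%s-%d' % (self.name, i))
        elif have > want:
            del self.containers[want:]
        return self.containers
```

A one-off container is just a group whose desired count resolves to 1, which is what makes the conversion swat30 describes below straightforward.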

thurloat commented 9 years ago

I like this!

Do you see each etcd record root getting its own local ContainerGroup object, even if it's a one-off (single host, single proc)? I'd like to see how you refactor to include the new layer. :watch:

swat30 commented 9 years ago

Yea I think that'd be the way to go. A single container could be easily converted into a group this way as well.

thurloat commented 9 years ago

Also think we should add a flag to indicate that we don't want it running anywhere (for when a service is shutdown for maintenance, etc.)

/docrane/container_name/hosts => {"*": 0}

?

swat30 commented 9 years ago

Agreed. So, even if defined explicitly, a wildcard will always override per host settings? I think that's the most logical way to make that work ^

thurloat commented 9 years ago

I would actually venture to assume that the wildcard fills initial values, and per-host numbers override those.

/docrane/web_fe/hosts => {"*": 2, "dbserver": 0}
/docrane/db_backend/hosts => {"dbserver": 1}
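That precedence rule (wildcard fills the initial value, per-host entries override it) is a one-liner to resolve; the function name is illustrative:

```python
def resolve_count(hosts, local_hostname):
    """Resolve a hosts dict where '*' supplies the default replica
    count and an explicit per-host entry overrides it."""
    if local_hostname in hosts:
        return hosts[local_hostname]
    return hosts.get('*', 0)
```

So with `{"*": 2, "dbserver": 0}`, every host runs two replicas except dbserver, which runs none; with `{"dbserver": 1}` and no wildcard, only dbserver runs anything.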

swat30 commented 9 years ago

The only problem with that is that it makes shutting everything off a lot more painful. Maybe we have a special value for the wildcard (-1?) that puts it in maintenance mode.

thurloat commented 9 years ago

Or a separate value altogether. I think both are a little non-intuitive.

/docrane/container/disabled => true | false # false / ignore by default

swat30 commented 9 years ago

I think that's best. That way they can leave their host config intact when shutting down.
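Putting the two pieces together, the effective replica count would check the proposed `disabled` flag first and fall back to the hosts resolution, leaving the hosts dict untouched. A sketch with illustrative names:

```python
def effective_count(config, local_hostname):
    """Combine the proposed `disabled` flag with the hosts dict:
    the flag wins outright, and the hosts config stays intact
    for when the service comes back out of maintenance."""
    if config.get('disabled', False):
        return 0
    hosts = config.get('hosts', {})
    if local_hostname in hosts:
        return hosts[local_hostname]
    return hosts.get('*', 0)
```

Flipping `/docrane/container/disabled` back to false restores whatever the hosts dict said before maintenance.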