docker-archive / classicswarm

Swarm Classic: a container clustering system. Not to be confused with Docker Swarm, which is at https://github.com/docker/swarmkit
Apache License 2.0

Questions #1420

Closed: alexec closed this issue 8 years ago

alexec commented 9 years ago

I'm trying to compare various solutions in the container cluster management space. I wondered if anyone could be so kind as to help me with a few questions:

Your help would be really useful -- thank you!

abronan commented 9 years ago

Hi @alexec:

Basically, Swarm is the base clustering layer: it manages the Docker daemons. You can use any other service on top of it and integrate with pretty much all of the existing service discovery and orchestration tools.

To answer all the questions:

1 - Yes, you can use file/node discovery (see the docs on node discovery), or Consul, registrator, etcd, or any other DNS-based mechanism for service discovery (wagl, for example) on top of Swarm.
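
For instance, a rough sketch of wiring the cluster to Consul for discovery (the Consul address and key path below are made up; see the Swarm discovery docs for the exact flags):

```sh
# on each node: register the local engine with the discovery backend
swarm join --advertise=<node_ip>:2375 consul://consul.example.com:8500/swarm

# on the manager: build the cluster from the same backend
swarm manage -H tcp://0.0.0.0:4000 consul://consul.example.com:8500/swarm
```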

2 - Monitoring of containers is not baked in; the reason is that we support only the limited subset that is the Docker Remote API. You can, however, integrate monitoring directly on top of Swarm with tools like Sysdig, etc.
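
That said, because the Swarm endpoint speaks the same API as a single engine, the stock CLI already gives you basic cluster-wide visibility. A minimal sketch (the manager address is hypothetical):

```sh
export DOCKER_HOST=tcp://swarm-manager:4000

docker events                    # cluster-wide event stream
docker stats $(docker ps -q)     # live CPU/memory usage of running containers
```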

3 - No, it can't by itself, but it's fairly easy to listen to the events and cluster-wide information from docker info, and to spin up another machine with docker-machine when no more resources are available or when the existing machines are running a high number of containers. In the future, docker-machine could provide a high-level API for us to spin up more machines and auto-scale (up or down).
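
To illustrate, a deliberately naive scale-up loop along those lines (the manager address, machine driver, thresholds, and discovery URL are all hypothetical; this is a sketch, not a supported feature):

```sh
#!/bin/sh
export DOCKER_HOST=tcp://swarm-manager:4000

while true; do
  # classic Swarm's `docker info` reports cluster-wide totals
  containers=$(docker info | awk '/^Containers:/ {print $2}')
  nodes=$(docker info | awk '/^Nodes:/ {print $2}')

  # if each node averages more than 20 containers, add a machine to the cluster
  if [ "$containers" -gt $((nodes * 20)) ]; then
    docker-machine create -d virtualbox \
      --swarm --swarm-discovery consul://consul.example.com:8500/swarm \
      "swarm-node-$(date +%s)"
  fi
  sleep 60
done
```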

4 - Yes it can: for example, docker run -c 1 -m 2GB will schedule the container on a machine with 1 CPU and 2GB of RAM available. Swarm still does not support lower/upper limits for memory (the Docker Engine supports that, though), but we can improve the scheduler based on soft memory limits.
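
Concretely (the manager address and image are placeholders):

```sh
export DOCKER_HOST=tcp://swarm-manager:4000

# Swarm picks a node with 1 CPU and 2 GB of memory still available
docker run -d -c 1 -m 2g --name db redis

# the NAMES column of `docker ps` shows which node was selected
docker ps --filter name=db
```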

5 - No, it doesn't, and the reason is actually pretty simple: we don't want to infer information that could lead to a wrong decision cluster-wide. If Swarm made the default assumption that an image is no longer used, a lot of images would be deleted wrongfully (just before they are used again globally; if the image is 2GB in size, that can be painful), also leading to useless pulls that consume a lot of network bandwidth. Image deletion policies should be defined by the user/admin, for example by providing a description that explicitly states when an image of a certain type (using labels, say) should be deleted and after how long. This can definitely be an improvement, but Swarm should make the decision based on a model/pattern rather than on its own default assumptions (or, if it uses default assumptions, they should be very conservative).
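
As a sketch of what such a user-defined policy could look like (the label name is made up, and this would run per node, e.g. from cron; it is not a Swarm feature):

```sh
# delete only images that were explicitly labeled as disposable at build time,
# e.g. with `LABEL cleanup=true` in the Dockerfile
docker images -q --filter "label=cleanup=true" | xargs -r docker rmi
```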

6 - You can use the Volume Plugin API, supported since Docker 1.9. This way you can create ZFS, GlusterFS, or Ceph distributed volumes and not care about your containers moving around, as the data is distributed across hosts rather than localized on a single node.
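
A sketch with a hypothetical driver and image name (check your plugin's docs for the real driver and its options):

```sh
# create a distributed volume through the plugin (Docker 1.9 syntax)
docker volume create --driver glusterfs --name shared-data

# any container, on any node, mounts the same data
docker run -d -v shared-data:/var/lib/app --name app myorg/app
```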

7 - It does not by default, but you can use Swarm with other components like Keywhiz or Vault to manage secrets. The same goes for load balancing: you can use HAProxy/nginx or the excellent Interlock.
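
For example, a bare-bones HAProxy in front of your containers, using the official image (the config file contents are up to you and omitted here):

```sh
# haproxy.cfg would list the published ports of your web containers as backends
docker run -d -p 80:80 \
  -v "$(pwd)/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" \
  --name lb haproxy:1.6
```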

We want to keep Swarm simple to deploy and maintain. You begin with the exact same Docker API, and if you want more services you can just deploy them on top to add more capabilities (load balancing, monitoring, service discovery, secrets, storage, etc.). There are many excellent tools out there that can be either plugged in as is (for example, volumes) or deployed on top. Having them by default would hinder the deployment scenario, which is extremely simple with Swarm for regular usage.

Hope this helps! Let us know if you have more questions :)

alexec commented 9 years ago

Wow! Thanks for such detailed answers to my questions. They have been incredibly useful!

abronan commented 8 years ago

Closing this one, feel free to open another issue if you run into any trouble using Swarm :)

pikeszfish commented 7 years ago

Same question about "Can Swarm schedule based on CPU or memory requirements?":

https://github.com/docker/swarm/blob/master/scheduler/node/node.go#L58 https://github.com/docker/swarm/blob/v1.2.0/scheduler/strategy/weighted_node.go#L59

It looks like Swarm still takes the Docker memory limit as node.UsedMemory. So if I have a container with Memory: 10GB on a machine with 2GB of memory, I'm no longer able to create containers with Memory > 0.
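
A hypothetical repro of what I mean (the addresses are made up):

```sh
# the engine itself happily accepts a limit larger than physical RAM
docker -H tcp://node1:2375 run -d -m 10g --name big busybox sleep 3600

# but through the Swarm manager, node1 now reports UsedMemory = 10 GB,
# which exceeds its TotalMemory, so a request like this one is refused
# ("no resources available to schedule container")
docker -H tcp://swarm-manager:4000 run -d -m 512m busybox sleep 3600
```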

Any improvements since this is a 2015 issue?