So it looks like, to establish load balancing, I first need to set up individual hosts (aka nodes) - machines with a public IP - either virtual machines or other physical machines.
Once I have set up multiple nodes I must designate one of them as the manager node - from there I will set up the swarm using the 'docker swarm' command:
docker swarm init --advertise-addr [MANAGER_IP]
I will then have to add all of the other worker/manager nodes to the swarm using the 'docker swarm join' command; the --token flag determines the role of the joining node (worker or manager). The length and uniqueness of the worker --token are unknown to me at this time.
docker swarm join \
--token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
192.168.99.100:2377
Compare that with the simpler command for obtaining the --token that registers a new manager:
docker swarm join-token manager
Aka (may be related to this piece of info, just obtained): 'docker swarm join-token worker' and its equivalent for managers.
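For reference, running 'docker swarm join-token worker' on the manager prints the full join command, token included, something like:
$ docker swarm join-token worker
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c 192.168.99.100:2377
So the token is generated by the manager at init time; there's no need to construct it yourself.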
Running 'docker node ls' on the manager node will display the current state of the swarm and the nodes inside it.
$ docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
dxn1zf6l61qsb1josjja83ngz *   manager1   Ready    Active         Leader
There is some confusion between running a service and running a container. Are they equivalent, except that services give the ability to distribute and scale? In one post here they reference services and containers interchangeably, but with no definition of the difference.
Reading around various articles, it seems that 'docker service' is the new 'docker run' with cool extra features that allow you to distribute and scale replicas accordingly. 'docker run' is probably more viable for dev, but for long-term sustainability you should use 'docker service'.
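To illustrate the difference with a rough sketch (reusing the image and port mapping from my setup below):
docker run -d -p 8081:8080 myapp
docker service create --detach --replicas 3 --name app1 -p 8081:8080 myapp
The first starts one container on the local host; the second asks the swarm to keep 3 replicas of the same image running somewhere in the cluster, rescheduling them if a node dies.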
It does not specify that you need more than one node in a swarm to run 'docker service', so you could just set up your physical machine as a swarm with itself as the sole manager to get this working initially. My thinking is that it will work fine, except that instead of being distributed the containers will all sit on the same physical machine - this just means there is a slight risk to the application (if this one node goes down then the application is inaccessible). So is it not better to make all worker nodes manager nodes by default, so that if your one manager node goes down the other N manager nodes will be able to take over management? (Reading around, the usual advice seems to be a small odd number of managers rather than making every node a manager, since the managers have to maintain a Raft quorum between them.)
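For what it's worth, roles don't seem to be fixed at join time either - a worker can be promoted to a manager later. E.g. for a hypothetical node named worker1, run on a manager:
docker node promote worker1
and 'docker node demote worker1' reverses it.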
Will pick this up further another time.... Link to continue from: https://docs.docker.com/engine/swarm/swarm-tutorial/deploy-service/
I couldn't see an -i flag equivalent for 'docker service create'; --net is substituted by --network. However, swarms can't use bridge networks, so I need to go back and revise this network to create an overlay.
docker service create --replicas 3 --name app1 --network=testbridge -e ENVIRONMENT=docker -e ROLE=READ -p 8081:8080 -d -t myapp
So I created a service as above. The service kept falling over due to the database connection being invalid. I may need to reconsider the user-defined overlay and how I can resolve the database IP by container name.
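From what I've read, swarm's built-in DNS should let containers on the same overlay network resolve a service by its service name, so the fix is probably to create an overlay and point the app at the database by name rather than IP. A rough sketch (DB_HOST is a hypothetical variable my app would read):
docker network create -d overlay testoverlay
docker service create --replicas 3 --name app1 --network=testoverlay -e DB_HOST=mysql -p 8081:8080 -d myapp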
I tried adding a mysql service to the node but that didn't have any effect. Maybe some further config is needed.
This is working fine now; I think it was just an issue with the initial loading of MySQL taking so long, as previously identified.
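A rough workaround sketch for that startup race, assuming the mysql task landed on this host and the root password from earlier - poll until MySQL actually answers before starting the app services:
until docker exec "$(docker ps -q -f name=mysql)" mysqladmin ping -uroot -ppass --silent; do sleep 2; done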
Set up the docker swarm using the following command. This will make the host you run it on the manager. Having just one host (e.g. your main computer) is fine.
docker swarm init
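The output should confirm the host became the manager, something like:
Swarm initialized: current node (dxn1zf6l61qsb1josjja83ngz) is now a manager.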
Create a docker overlay network. Swarm services do not support bridge type networks.
docker network create -d overlay --subnet=10.10.10.0/24 testoverlay
Create a docker service; this is swarm's equivalent of 'docker run'. You can't use bridges for services; you must create an overlay network.
docker service create --name mysql --replicas 1 --detach --network=testoverlay -p 3306:3306 -e MYSQL_ROOT_PASSWORD=pass mysql
docker service create --replicas 3 --name app1 --network=testoverlay -e ENVIRONMENT=docker -e ROLE=READ -p 8081:8080 -d -t myapp
docker service create --replicas 3 --name app2 --network=testoverlay -e ENVIRONMENT=docker -e ROLE=WRITE -p 8082:8080 -d -t myapp
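At this point 'docker service ls' should list all three services with their replica counts, roughly:
$ docker service ls
ID            NAME   MODE        REPLICAS  IMAGE
...           mysql  replicated  1/1       mysql
...           app1   replicated  3/3       myapp
...           app2   replicated  3/3       myapp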
Using '**docker ps**' shows all containers running on this host, including those started by services whose tasks are assigned to it.
Using '**docker service ps SERVICE_NAME**' shows the tasks (running/terminated/terminating) for a given service.
Using '**docker service logs SERVICE_NAME**' shows, I think, the combined logs of all the service's replicas in one stream.
Using '**docker service inspect SERVICE_NAME**' shows detailed info about the service.
Using '**docker service scale SERVICE_NAME=NUMBER**' changes the number of replicas of this service to the given number.
Using '**docker service rm SERVICE_NAME**' deletes the service.
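E.g. to go from 3 replicas of app1 up to 5:
docker service scale app1=5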
Yesss it works fully; as edited in previous post.....
Interesting read - identifying the differences between docker swarm vs Kubernetes. https://www.upcloud.com/blog/docker-swarm-vs-kubernetes/
It seems that docker swarm is quick and easy to set up and configure, but limits you in terms of configuration and API.
Kubernetes allows this extra freedom but it comes with complexity of setup and configuration.
The general consensus is to stick with docker swarm for development, and investigate Kubernetes once you're happily settled with docker swarm and want to expand - e.g. for production use.
Go through the swarm mode tutorial; it seems I need to wrap an instance of my container inside a node and then assign it to a swarm. The swarm will load balance incoming requests between the nodes and forward them on to the container.
How will a single MySQL container instance be accessed from different nodes? In theory, if they are all on my user-defined network, I'd assume they should manage communication perfectly fine.
This is all theoretical judgement pre-delving into the tutorial. Will follow up with answers afterwards.
https://docs.docker.com/engine/swarm/swarm-tutorial/