SirIle opened this issue 8 years ago (Open)
Hello and thanks for the post,
Did you find a good way to do service registration in Consul (or another KV store) with the overlay network? I would like to update my HAProxy config automatically with the virtual IPs of my frontends, but I haven't found a way to register them so far. I tried registrator without success...
@MBuffenoir Yeah, I got registrator to work, more details here: http://sirile.github.io/2015/12/14/automatic-scaling-with-docker-19-and-overlay-network-locally-and-in-aws.html. It requires a version of registrator with a certain PR merged, but after that it works.
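The run command itself is the standard registrator invocation (just a sketch: the image name below is a placeholder for whichever build has that PR merged, and the Consul address assumes a local agent listening on port 8500; the overlay-address handling comes from the PR itself, not from any extra flag):

```bash
# Placeholder image name: substitute a registrator build that includes the
# overlay-network PR.
REGISTRATOR_IMAGE=example/registrator-with-overlay-pr

# Run on every node. Mounting the Docker socket lets registrator watch
# container start/stop events; the last argument points it at the local
# Consul agent's HTTP API.
docker run -d --name registrator \
  --net host \
  -v /var/run/docker.sock:/tmp/docker.sock \
  "$REGISTRATOR_IMAGE" \
  consul://localhost:8500
```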
Thanks! Is it kidibox/registrator? BTW, it seems that progrium is asking whether someone has done it yet here: https://github.com/gliderlabs/registrator/pull/303
I'm doing the same thing as you, though I don't use logspout anymore, but the Docker syslog log driver... Is there still an advantage to using logspout?
I did a "docker search registrator" and just happened on kidibox/registrator as it had that PR in the comment and it seems to work. For demonstration purposes I like logspout as my logging is completely pluggable and removable without needing to restart the daemon. I haven't yet looked at the syslog driver all that much, it's on the todo-list. How does the syslog driver behave if the Logstash (or something else) isn't actively listening yet when the daemon starts?
I used the syslog driver with UDP, so it doesn't really matter if your ELK stack hasn't started yet... Haven't tried with TCP. The pluggability of logspout is definitely a plus in a testing environment. I think we're heading in the right direction with this PR, though supporting multiple overlay networks might be needed at some point. Swarm is getting closer and closer to production ready :-)
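For reference, this is the kind of per-container setup I mean (the Logstash address is a placeholder; with UDP the container starts fine even if nothing is listening yet):

```bash
# Per-container syslog logging over UDP; replace the address with your
# Logstash/syslog input. UDP is fire-and-forget, so a missing listener
# does not block container start.
docker run -d \
  --log-driver syslog \
  --log-opt syslog-address=udp://logstash.example.local:5000 \
  nginx
```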
Is there any benefit to running one Consul per host in the swarm? I've made it work with one instance on my "management" infra node (I guess this should be a cluster in a prod environment). kidibox/registrator worked perfectly for me... Thank you
The idea is to use the local Consul instance as the DNS server for that node and tie it to the local Docker bridge IP.
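Roughly along these lines; this is only a sketch rather than my exact setup, and the official consul image, -dev mode and the 172.17.0.1 bridge address are assumptions for illustration:

```bash
# Run a local agent and publish its DNS interface (default port 8600) on the
# Docker bridge IP so containers on this node can resolve *.service.consul.
# 172.17.0.1 is the default docker0 address; a real agent would join the
# Consul cluster instead of running -dev.
docker run -d --name consul-agent \
  -p 172.17.0.1:53:8600/udp \
  consul agent -dev -client=0.0.0.0

# Then point this node's Docker daemon at it, e.g.:
#   dockerd --dns 172.17.0.1 --dns-search service.consul
```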
I am using an on-premise machine to do the docker-machine work and successfully launched the swarm in AWS using a CentOS AMI.

```
[root@test232 elastic]# docker-machine ls
Error attempting call to get driver name: connection is shut down
NAME          ACTIVE   DRIVER      STATE     URL                         SWARM                  DOCKER    ERRORS
infra-aws     -                    Error                                                        Unknown   unexpected EOF
infras-aws    -        amazonec2   Running   tcp://52.24.50.186:2376                            v1.13.0
swarm-0-aws   -        amazonec2   Running   tcp://35.163.53.38:2376     swarm-0-aws (master)   v1.13.0
swarm-2-aws   -        amazonec2   Running   tcp://35.167.172.143:2376   swarm-0-aws            v1.13.0
swarm-3-aws   *        amazonec2   Running   tcp://35.162.172.118:2376   swarm-0-aws            v1.13.0
swarm-6-aws   -        amazonec2   Running   tcp://35.166.82.33:2376     swarm-0-aws            v1.13.1
```

However, when I run:

```
[root@swarm-0-aws ~]# docker service ls
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
```

swarm-0-aws is the master, so is a "manager" a different concept? Please advise.
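If I understand the error correctly, the mismatch is between what I set up and what `docker service` expects. This is only my guess, not something I've verified (machine names taken from my listing above):

```bash
# What docker-machine's --swarm flag gave me: a standalone Swarm that the
# client reaches through the manager's endpoint.
eval "$(docker-machine env --swarm swarm-0-aws)"
docker info          # lists the standalone Swarm nodes

# What the error message seems to expect: swarm mode, the newer built-in
# orchestrator, which has to be initialised before `docker service` works.
docker swarm init
docker service ls
```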