SirIle opened this issue 9 years ago
Hello,
Thanks for sharing your workflow. I was wondering, how would you handle a case where you have a Consul service, say "mysql.service.consul", where you register all your MySQL master and slave servers? As Consul runs in a container, how would you connect to your MySQL service from an app that also runs in a container, since Consul is not available in the app container? The goal would be to have this in my app settings for the MySQL connection:
host: "mysql.service.consul"
and in my HAProxy I would have an ACL for mysql.service.consul (so I don't have to hardcode the IP, which may be subject to change).
Thanks
Hi, I'm using Consul as the DNS server so that all containers can use it for service discovery. Basically the set-up you describe should work straight out of the box with the examples in the post, as HAProxy shares the same DNS server for resolving the DB servers which registrator has added to Consul. HAProxy would also help with port conflicts in case you want to run several MySQL servers in the set-up. This is a use case I've actually been meaning to try and write about, thanks for the comment!
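For example, something along these lines should work; the bridge IP 172.17.42.1 matches the post's examples, myorg/myapp is just a placeholder for your application image and DB_HOST is whatever setting your app reads the database host from:

# Point the app container at Consul for DNS; the app can then connect to
# the service name instead of a hardcoded IP
docker run -d --dns 172.17.42.1 -e DB_HOST=mysql.service.consul myorg/myapp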
This post needs some updates as the sirile/haproxy image has changed.
The example command for starting the service should now be: docker run --dns 172.17.42.1 --rm sirile/haproxy -consul=consul.service.consul:8500 -dry -once
since "consul-template -config=/tmp/haproxy.json" is now the entrypoint (and is therefore prepended to any command you run the container with), and -consul=consul.service.consul:8500
is now the default command instead of, I presume, being set in the haproxy.json file.
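If you want to verify this on your own copy of the image, docker inspect shows both the entrypoint and the default command:

# Prints the image's ENTRYPOINT and CMD
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' sirile/haproxy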
Thanks for pointing that out, I updated the command. You were spot on in your diagnosis.
Hi @SirIle, thank you for your work! I'm trying your project, and when I run the sirile/haproxy container and look at the generated config file, the backend section has "server host0 172.17.0.3:32769 check", but that is the IP of the registrator container, while the hello1 container has IP 172.17.0.4. I can't solve my problem. Thank you!
Please check the examples in the new post with overlay networking, as things have changed considerably and the system is a bit easier now.
Turns out that the whole ecosystem is rather in flux, e.g. boot2docker becoming Docker Toolbox etc. I had the same issue as @greatbn and found a solution at https://github.com/gliderlabs/registrator/issues/320
You MUST specify the Docker host IP address (e.g. the one on eth0) on the registrator command line with -ip,
and it has to appear before the consul://host option.
Also, the docker bridge network sits on a different IP on my Windows machine running Docker Toolbox. What worked for me was the following:
DOCKER_IP=$(docker-machine ip $DOCKER_MACHINE_NAME)
docker run --name consul -d -h dev -p $DOCKER_IP:8300:8300 -p $DOCKER_IP:8301:8301 -p $DOCKER_IP:8301:8301/udp -p $DOCKER_IP:8302:8302 -p $DOCKER_IP:8302:8302/udp -p $DOCKER_IP:8400:8400 -p $DOCKER_IP:8500:8500 progrium/consul -server -advertise $DOCKER_IP -bootstrap-expect 1
CONSUL_DNS_IP=$(docker-machine ssh $DOCKER_MACHINE_NAME "docker inspect --format '{{ .NetworkSettings.IPAddress }}' consul")
docker-machine ssh $DOCKER_MACHINE_NAME "docker run -d -v /var/run/docker.sock:/tmp/docker.sock -h registrator --name registrator gliderlabs/registrator -ip $DOCKER_IP consul://$DOCKER_IP:8500"
docker run -d -e SERVICE_NAME=hello/v1 -e SERVICE_TAGS=rest -h hello1 --name hello1 -p :80 sirile/scala-boot-test
docker run -d -e SERVICE_NAME=rest --name=rest --dns $CONSUL_DNS_IP -p 80:80 -p 1936:1936 sirile/haproxy
After that I wondered if an easier, cleaner setup was possible, and there is one:
docker run --name consul -d -h dev progrium/consul -server -bootstrap-expect 1
CONSUL_IP=$(docker-machine ssh $DOCKER_MACHINE_NAME "docker inspect --format '{{ .NetworkSettings.IPAddress }}' consul")
docker-machine ssh $DOCKER_MACHINE_NAME "docker run -d -v /var/run/docker.sock:/tmp/docker.sock -h registrator --name registrator gliderlabs/registrator -internal consul://$CONSUL_IP:8500"
docker run -d -e SERVICE_NAME=hello/v1 -e SERVICE_TAGS=rest -h hello1 --name hello1 sirile/scala-boot-test
docker run -d -e SERVICE_NAME=rest --name=rest -p 80:80 -p 1936:1936 --dns $CONSUL_IP sirile/haproxy
This will keep all the internals inside the docker bridge network.
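As a quick sanity check you can verify that Consul's DNS answers inside the bridge network; the consul service itself is always registered, so this resolves even before your own services are up:

# Throwaway container that uses the Consul container as its DNS server
docker run --rm --dns $CONSUL_IP busybox nslookup consul.service.consul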
If you want to keep an eye on consul as well, start an extra agent:
docker run --name consulweb -d -h web -p 8500:8500 progrium/consul -ui-dir=/ui -join=$CONSUL_IP
This will enable the Consul web UI on http://$DOCKER_IP:8500/ui, which you can open in a browser.
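With the extra agent publishing port 8500 you can also query Consul's HTTP API from the host to see what registrator has registered ("rest" is the SERVICE_NAME used above):

# List all registered services, then the instances of the rest service
curl http://$DOCKER_IP:8500/v1/catalog/services
curl http://$DOCKER_IP:8500/v1/catalog/service/rest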
Also, the template refers to keys that are not present in Consul:
2016/03/12 17:58:07 [WARN] ("key(service/haproxy/maxconn)") Consul returned no data (does the path exist?)
Are these supposed to be set by the agent, or should the keys be set in Consul's KV store?
Cheers, Hans
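The key() function in consul-template reads from Consul's KV store, so the warning should go away once the key is set there, for example (the value 256 is only an example, and adjust the address to wherever Consul's HTTP API is reachable in your setup):

# Write the KV key that the template reads via key(service/haproxy/maxconn)
curl -X PUT -d '256' http://$DOCKER_IP:8500/v1/kv/service/haproxy/maxconn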
There is a typo in haproxy.ctmpl
line 19:
bind \÷*:80
@kklepper Thanks for pointing that out, corrected; that slipped in at some stage.
Great post! Congrats & thanks for sharing! Regarding load balancing, I've published a post: https://promesante.github.io/2.... I've taken this post, and other similar ones, as guidance; all of them are listed there as references. Instead of managing the cluster manually, there it is scheduled by means of HashiCorp's Nomad. Hope it helps. Cheers!
Hi, I followed the example tutorial that you provided, but when I try it with my own micro-service example (which runs on port 7875 on an AWS server) it doesn't work; it requires a port to run. How do I configure the sirile/haproxy image for my port? P.S.: I tried to configure it via /etc/haproxy/haproxy.cfg, but that didn't work. My server-side OS is CentOS. @SirIle, hopefully you can help me solve this.
I followed your tutorial but without Docker, i.e. I set up dummy service nodes on some ports and set up Consul and consul-template. It is working up to the point where consul-template automatically updates the destination file. However, the command = "haproxy -f /etc/haproxy/haproxy.cfg -sf $(pidof haproxy) &"
is the problem. It seems it can't execute the command, returning the error `failed to execute command "haproxy -f /root/Documents/consulProto/haproxy.conf -sf $(pidof haproxy)" from "haproxy.conf.ctmpl" => "haproxy.conf": child: command exited with a non-zero exit status:
haproxy -f /root/Documents/consulProto/haproxy.conf -sf $(pidof haproxy)
This is assumed to be a failure. Please ensure the command exits with a zero exit status.`
Any thoughts on how to fix?
My first guess would be an error in the haproxy template. Check the haproxy logs.
There is a bug in consul-template where the pidof command is not handled correctly: on the first run pidof returns '' and consul-template misinterprets it (source: https://github.com/hashicorp/consul-template/issues/950). As suggested on that link, you can circumvent this problem by wrapping the call in a shell call, e.g.:
command = "/bin/sh -c '/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -sf $(pidof haproxy) &'"
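If you want to test that wrapper without touching your config file, the same thing can be passed on the command line and run once (the paths are the ones from your error message; on newer consul-template versions the flag is -consul-addr instead of -consul):

# Render the template once and run the wrapped reload command
consul-template -consul=127.0.0.1:8500 -once \
  -template "/root/Documents/consulProto/haproxy.conf.ctmpl:/root/Documents/consulProto/haproxy.conf:/bin/sh -c '/usr/sbin/haproxy -f /root/Documents/consulProto/haproxy.conf -sf \$(pidof haproxy) &'"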
My services don't have anything mapped at the root of the service, i.e. the API lives at 127.0.0.1:8080/api/v1 and 127.0.0.1:8080/ returns a 404. This makes the health check fail with the current Consul template. Is there any way to avoid this failure, since a 404 does not mean the service is down in this case?
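If the failing check is the HTTP check that registrator registers in Consul, you can point it at a path that does exist instead of /. A minimal sketch, assuming registrator is used as in the post (service name, port and image are placeholders):

# SERVICE_CHECK_HTTP makes registrator register an HTTP check against the given path
docker run -d -e SERVICE_NAME=myapi -e SERVICE_CHECK_HTTP=/api/v1 \
  -e SERVICE_CHECK_INTERVAL=15s -p :8080 myorg/myapi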
@SirIle Thanks for sharing the post. I am implementing your code right now, but when I run HAProxy I get the error shown below.
reason: Layer4 connection problem, info: "Connection refused". Can anyone help me with this issue?
Cheers, Vikas
A bit late, but I'd like to point out that HAProxy doesn't support UDP. We are using nginx exactly for that reason: nginx supports plain L4 TCP and UDP, not only HTTP as you stated.
Placeholder issue