docker-archive / dockercloud-haproxy

HAproxy image that autoreconfigures itself when used in Docker Cloud
https://cloud.docker.com/

Doesn't work with Docker 1.13 #157

Closed zerowebcorp closed 7 years ago

zerowebcorp commented 7 years ago

Docker 1.13 was released today, and I updated to enjoy the latest features. After updating to 1.13 all of my domains went down; I spent almost half the day debugging the issue and finally reverted to Docker 1.12, destroying all my services. What I found is that with Docker 1.13, when you create a service with --publish to publish a port to the host node, the container gets attached to the 'ingress' network, and dockercloud-haproxy detects the container's IP as the address on the 'ingress' network instead of on the 'proxy' network that haproxy is attached to. Hence it cannot route to the correct container, and haproxy reports the containers as down.
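The failure mode can be sketched with mock `docker inspect` network data (the network names and addresses below are illustrative, not taken from this cluster): when a port is published in the default ingress mode, the container reports IPs on both networks, and naively taking the first one yields the ingress address that haproxy cannot reach.

```python
# Mock of the "Networks" section of `docker inspect` output for a service
# container that published a port (names and IPs are illustrative only).
container_networks = {
    "ingress": {"IPAddress": "10.255.0.8"},  # routing-mesh overlay, not reachable from haproxy
    "proxy": {"IPAddress": "10.0.1.5"},      # the network haproxy is actually attached to
}

# Naive selection: take the first IP encountered. This is the kind of
# choice that registers the unreachable ingress address as the backend.
first_ip = next(iter(container_networks.values()))["IPAddress"]
print(first_ip)  # 10.255.0.8 -- the ingress IP, so the backend is marked down
```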

I also tried attaching the ingress network to the haproxy service, but it didn't work for some reason.

Steps to replicate

Docker version 1.13.0, build 49bf474

To confirm this is due to the --publish flag, I have also tried adding --network ingress to the haproxy service to see if that works, but it didn't.


/ # cat haproxy.cfg 
global
  log 127.0.0.1 local0
  log 127.0.0.1 local1 notice
  log-send-hostname
  maxconn 4096
  pidfile /var/run/haproxy.pid
  user haproxy
  group haproxy
  daemon
  stats socket /var/run/haproxy.stats level admin
  ssl-default-bind-options no-sslv3
  ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA:DHE-DSS-AES128-SHA:DES-CBC3-SHA
defaults
  balance source
  log global
  mode http
  option redispatch
  option httplog
  option dontlognull
  option forwardfor
  timeout connect 120000
  timeout client 120000
  timeout server 120000
listen stats
  bind :1936
  mode http
  stats enable
  timeout connect 10s
  timeout client 1m
  timeout server 1m
  stats hide-version
  stats realm Haproxy\ Statistics
  stats uri /
  stats auth stats:stats
frontend port_80
  bind :80
  maxconn 4096
  acl is_websocket hdr(Upgrade) -i WebSocket
  acl host_rule_1 hdr_reg(host) -i ^.*$
  acl host_rule_1_port hdr_reg(host) -i ^.*:80$
  acl path_rule_1 path -i /health
  use_backend SERVICE_healthcheck if path_rule_1 host_rule_1 or path_rule_1 host_rule_1_port
backend SERVICE_healthcheck
  server healthcheck.1.y7n8ovlap4rq6urra4ffbs80p 10.255.0.8:80 check inter 2000 rise 2 fall 3

tifayuki commented 7 years ago

@getvivekv Thank you for reporting the issue. I will look into it ASAP.

zerowebcorp commented 7 years ago

@tifayuki Thank you! I just learned that as part of the 1.13 release, I can set the --publish mode to host. When I tried it, haproxy did read the IP correctly, but my guess is that the publish then only works at the host level. I can't find any documentation on this on docker.com.

docker service create --env VIRTUAL_HOST=*/health -e SERVICE_PORTS="80" --name healthcheck --network proxy --publish mode=host,target=80,published=8080,protocol=tcp dockercloud/hello-world

works

docker service create --env VIRTUAL_HOST=*/health -e SERVICE_PORTS="80" --name healthcheck --network proxy --publish mode=ingress,target=80,published=8080,protocol=tcp dockercloud/hello-world

didn't work

tifayuki commented 7 years ago

I am not sure whether it is directly related to Docker 1.13 or not. The problem I observed is that when you publish any port on your application service, the service gets attached to the ingress network.

In that case, haproxy and your app have two networks in common: ingress and proxy. The script simply picks the first IP address it meets (in your case, that is always the IP on the ingress network).

If you don't publish any port of your application service, everything should work as expected.

Also, if you publish in host mode, the only network the two containers share is proxy, which is why haproxy gets the correct IP.

To fix this, I can add some logic to never use an IP from the ingress network, which should solve the issue.
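The proposed fix could look something like the sketch below (a hypothetical helper, not the project's actual code): skip the ingress network and pick the first IP on a network that haproxy also belongs to.

```python
def pick_backend_ip(app_networks, haproxy_networks):
    """Pick the app container's IP on a network shared with haproxy,
    skipping the ingress network entirely.

    Hypothetical helper sketching the proposed fix. `app_networks` maps
    network name -> {"IPAddress": ...} as seen in `docker inspect`;
    `haproxy_networks` is the set of network names haproxy is attached to.
    """
    for name, cfg in app_networks.items():
        if name == "ingress":
            continue  # never route through the ingress routing-mesh overlay
        if name in haproxy_networks:
            return cfg["IPAddress"]
    return None  # no usable network in common

app_networks = {
    "ingress": {"IPAddress": "10.255.0.8"},
    "proxy": {"IPAddress": "10.0.1.5"},
}
print(pick_backend_ip(app_networks, {"proxy"}))  # 10.0.1.5
```

With this filtering, a service that publishes a port in ingress mode still gets its routable proxy-network address registered in the backend.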

zerowebcorp commented 7 years ago

I guess the proposed solution would work. I'm also finding that many changes introduced in Docker 1.13 break normal PHP applications: PHP server variables such as SERVER_ADDR give an incorrect IP address under Docker 1.13. I guess the underlying networking change is affecting dockercloud-haproxy as well. I'll report that issue to Docker.

virtuman commented 7 years ago

Confirming, I had the exact same issue and found your post right after figuring it out on my own. It took me a good day or two to pinpoint the problem; I wish I had run through the issue list first.