Docker Flow Proxy
https://docker-flow.github.io/docker-flow-proxy/

intermittent 503 errors #57

Closed wholenewstrain closed 6 years ago

wholenewstrain commented 6 years ago

Typing this up to help others, and maybe to get this covered in the docs or improved in the code.

I followed https://proxy.dockerflow.com/swarm-mode-auto/ and everything worked great. Then I tried something of my own and quickly ran into 503 problems. Note: my proxy network is called proxy-network instead of just proxy, to make it easier for me to follow. I have 2 nodes running the latest dfp image.

docker service create --name proxy \
  -p 80:80 -p 443:443 \
  --network proxy-network \
  --replicas 2 \
  -e LISTENER_ADDRESS=swarm-listener \
  dockerflow/docker-flow-proxy

docker service create --name groovy-jetty -t \
  --mount type=volume,source=webServer-rw,destination=/mnt/webServer \
  --network go-demo --network proxy-network \
  --replicas 2 \
  --label com.df.notify=true \
  --label com.df.port.1=9080 \
  --label com.df.srcPort.1=80 \
  --label com.df.port.2=9443 \
  --label com.df.srcPort.2=443 \
  --label com.df.distribute=true \
  --label com.df.reqMode=tcp \
  --label com.df.checkTcp=true \
  --label com.df.connectionMode=http-tunnel \
  my-images/docker-groovy-with-jetty:0.1

With this setup I was getting intermittent 503 errors (definitely not production quality), so I started poking around and looked at the HAProxy config. Note: similar 503 errors were happening in http mode too, when I used just port 80 (I lost the details), but I needed tcp so that the server can present its own cert (this is a temporary setup).

Running docker ps gave me the proxy container id:

docker exec -ti 7b79959ae9c6 /bin/ash
/ $ less /cfg/haproxy.cfg

Turned out there were 3 frontend entries:

frontend services
    bind :80
    bind :443
    mode http
    option forwardfor

frontend tcpFE_443
    bind *:443
    mode tcp
    default_backend groovy-jetty-be9443_2

frontend tcpFE_80
    bind *:80
    mode tcp
    default_backend groovy-jetty-be9080_1

Doing netstat -atnp confirmed my suspicion:

tcp        0      0 0.0.0.0:80      0.0.0.0:*       LISTEN      5708/haproxy
tcp        0      0 0.0.0.0:80      0.0.0.0:*       LISTEN      5708/haproxy
tcp        0      0 0.0.0.0:443     0.0.0.0:*       LISTEN      5708/haproxy
tcp        0      0 0.0.0.0:443     0.0.0.0:*       LISTEN      5708/haproxy

Why the OS allows that I don't know (I suspect it's because it is the same pid), but with this setup the services listener gets roughly 50% of the connections (depending on which of the listeners grabs the next connection), and since it has no backend, the result is intermittent 503 errors.
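
A minimal, Linux-only Go sketch of the mechanism that most likely permits this: HAProxy sets SO_REUSEPORT on its listening sockets on Linux, and with that option a single process can bind the same port twice, after which the kernel spreads incoming connections across both sockets. The port and code below are illustrative only, not HAProxy's actual implementation.

package main

import (
	"context"
	"fmt"
	"net"
	"syscall"
)

func main() {
	// Set SO_REUSEPORT on every socket before it is bound, the way HAProxy
	// does for its listeners on Linux.
	lc := net.ListenConfig{
		Control: func(network, address string, c syscall.RawConn) error {
			var serr error
			if err := c.Control(func(fd uintptr) {
				serr = syscall.SetsockoptInt(int(fd), syscall.SOL_SOCKET,
					syscall.SO_REUSEPORT, 1)
			}); err != nil {
				return err
			}
			return serr
		},
	}

	// Both binds succeed; netstat would show two LISTEN entries on :8080,
	// exactly like the duplicated :80 and :443 haproxy entries above, and
	// the kernel then splits new connections between the two sockets.
	a, err := lc.Listen(context.Background(), "tcp", ":8080")
	if err != nil {
		panic(err)
	}
	defer a.Close()

	b, err := lc.Listen(context.Background(), "tcp", ":8080")
	if err != nil {
		panic(err)
	}
	defer b.Close()

	fmt.Println("bound twice:", a.Addr(), b.Addr())
}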

The workaround is to change the default ports for the proxy service (they aren't published, so there is no harm):

-e DEFAULT_PORTS=81,444 \

Confirmed with the command below, which showed zero 503 errors:

httperf --server=serverIP --port=80 --uri=/ --num-conns=300 --num-calls=10 | grep -E "test-duration|5xx"
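
For anyone without httperf handy, a rough Go equivalent of the same check is sketched below; serverIP is a placeholder to fill in, and the fresh-transport-per-iteration loop only approximates httperf's num-conns/num-calls behavior.

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	const serverIP = "192.0.2.10" // placeholder: replace with your node's IP
	count5xx := 0
	for conn := 0; conn < 300; conn++ {
		// A fresh transport per iteration forces a new TCP connection, so each
		// pass exercises whichever listener the kernel hands the connection to.
		tr := &http.Transport{}
		client := &http.Client{Transport: tr}
		for call := 0; call < 10; call++ {
			resp, err := client.Get("http://" + serverIP + "/")
			if err != nil {
				fmt.Println("request error:", err)
				continue
			}
			if resp.StatusCode >= 500 {
				count5xx++
			}
			// Drain the body so the connection can be reused for the next call.
			io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
		}
		tr.CloseIdleConnections()
	}
	fmt.Println("5xx responses:", count5xx)
}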

I believe some logic should be added to prevent dfp from creating this kind of config, but I'll leave that to the discretion of the maintainer as I'm just a noob :) Hope this helps someone!!

thomasjpfan commented 6 years ago

Thanks for raising this issue!

We can include a safe guard where, if all services are TCP or SNI, the http binding of ports :80 and :443 is not included in the haproxy config. What do you think?
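
For concreteness, a minimal Go sketch of what such a safe guard could look like; Service, ReqMode, and needsDefaultPorts are hypothetical stand-ins, not dfp's actual internals.

// Hypothetical sketch of the proposed safe guard; Service and ReqMode are
// illustrative stand-ins, not dfp's real types.
package proxy

type Service struct {
	Name    string
	ReqMode string // "http", "tcp", or "sni"
}

// needsDefaultPorts reports whether the http frontend binding :80 and :443
// should be written into haproxy.cfg at all.
func needsDefaultPorts(services []Service) bool {
	for _, s := range services {
		if s.ReqMode != "tcp" && s.ReqMode != "sni" {
			// At least one plain-http service still needs the default frontend.
			return true
		}
	}
	// All services are tcp/sni: skip the default binds so they cannot
	// collide with frontends like tcpFE_80 / tcpFE_443.
	return false
}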

wholenewstrain commented 6 years ago

Agreed. Since sni implies tcp and srcPort is required for tcp, that would make perfect sense. Thanks for looking into it; I hope it will help others. I saw lots of pages referring to 503 errors when using haproxy and/or dfp while I was trying to figure this out. I believe people (me included) just assume a process binds to a port only once, but that seems to not be the case LOL.

thomasjpfan commented 6 years ago

I included the safe guard in dockerflow/docker-flow-proxy:18.09.04-5. If all services are tcp/sni, then the default ports are not added.

Note: if there are any http services, both ports 80 and 443 will be added, and DEFAULT_PORTS would need to change if a tcp/sni service wishes to use 80 or 443. For backwards compatibility, I did not include anything smarter, like selecting ports based on srcPort.