Docker Flow Proxy (vfarcic/docker-flow-proxy)
http://proxy.dockerflow.com/

0/0 replicas generating a lot of log #396

Closed patricjansson closed 6 years ago

patricjansson commented 6 years ago

Hi

I have a couple of services set to 0/0 replicas. These are picked up by the proxy and generate a lot of noise in the log; see below. Since they have been "stopped" (replicas: 0), shouldn't they be ignored by the HAProxy config?

Related proxy config: DO_NOT_RESOLVE_ADDR=true REPEAT_RELOAD=true
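
For reference, a minimal sketch of how a proxy service with those settings could be created; the image tag and published ports match the listing below, while the network name and the LISTENER_ADDRESS value are assumptions added only for illustration:

docker network create --driver overlay proxy
# Create the proxy with the environment variables mentioned above
docker service create --name proxy \
    --network proxy \
    -p 443:443 -p 8080:8080 \
    -e DO_NOT_RESOLVE_ADDR=true \
    -e REPEAT_RELOAD=true \
    -e LISTENER_ADDRESS=swarm-listener \
    vfarcic/docker-flow-proxy:17.12.09-77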

Versions used: listener: 17.11.11-22, proxy: 17.12.09-77

ID                  NAME                      MODE                REPLICAS            IMAGE                                                         PORTS
3yqc1ir2ktoi        kth-azure-app_web         replicated          2/2                 kthregistryv2.sys.kth.se/kth-azure-app:2.8.122_8171648        *:0->3000/tcp
6y1klq0tq3hm        kopps-dev-l2_web          replicated          0/0                 kthregistryv2.sys.kth.se/kopps:1.1.212_ae87cff                *:0->9000/tcp
7b16c1udt2qm        proxy                     replicated          2/2                 vfarcic/docker-flow-proxy:17.12.09-77                         *:443->443/tcp,*:8080->8080/tcp
9ot4yvbnkcma        projects-api_api          replicated          1/1                 kthregistryv2.sys.kth.se/projects-api:0.1.44_c02d470          *:0->3001/tcp
ev49ztlxc6by        logspout                  global              6/6                 kthse/logspout-oms:1.2.2-4
eyhv1lwvqxlx        lms-sync_api              replicated          1/1                 kthregistryv2.sys.kth.se/lms-sync:0.4.12_eb96005              *:0->3000/tcp
h0jim13naje6        kth-style-web_redis       replicated          1/1                 redis:3.2.6-alpine                                            *:0->6379/tcp
hjp13p9cy3au        kth-azure-app_redis       replicated          1/1                 redis:3.2.6-alpine                                            *:0->6379/tcp
hwu49hutp3sz        search-api_api            replicated          1/1                 kthregistryv2.sys.kth.se/search-api:0.1.6_28573f1             *:0->3001/tcp
hyc6ew136ezp        kopps-l2_web              replicated          0/0                 kthregistryv2.sys.kth.se/kopps:1.0.118_2043c16                *:0->9000/tcp
jz2a21i8n4xo        search-push-api_web       replicated          1/1                 kthregistryv2.sys.kth.se/search-push-api:0.1.49_b6b5329       *:0->3001/tcp
lewmtl9srprc        tamarack_web              replicated          2/2                 kthregistryv2.sys.kth.se/tamarack:1.4.34_af4a5c0              *:0->80/tcp
lgz8z1acqhiz        swarm-listener            replicated          1/1                 vfarcic/docker-flow-swarm-listener:17.11.11-22
o1xwivl0i9my        kopps-dev-l3_web          replicated          1/1                 kthregistryv2.sys.kth.se/kopps:1.1.212_ae87cff                *:0->9000/tcp
o5u9zqit8xib        kopps-public-l3_web       replicated          1/1                 kthregistryv2.sys.kth.se/kopps-public:1.1.24_add5648          *:0->9000/tcp
pd7tsvme26p8        office365optin_web        replicated          1/1                 kthse/office365optin:201710.10-26                             *:0->8080/tcp
phlgj8awfli6        search-web_web            replicated          1/1                 kthregistryv2.sys.kth.se/search-web:0.2.78_621b657            *:0->3000/tcp
pmnnf0msod0v        kopps-public-l2_web       replicated          0/0                 kthregistryv2.sys.kth.se/kopps-public:1.1.24_add5648          *:0->9000/tcp
q3ruv9uxu0u0        webtex_web                replicated          1/1                 kthse/webtex:1.5.2-7                                          *:0->8080/tcp
qv4aer7io8ce        kopps-public-dev-l3_web   replicated          1/1                 kthregistryv2.sys.kth.se/kopps-public:1.1.41_2e43631          *:0->9000/tcp
rhz46arr0m2u        lms-export-results_web    replicated          2/2                 kthregistryv2.sys.kth.se/lms-export-results:0.6.121_dd508b7   *:0->3001/tcp
ris71kk9u2ce        menu-web_web              replicated          1/1                 kthregistryv2.sys.kth.se/menu-web:0.1.36_b52edca              *:0->3000/tcp
s0lu6x66gdyq        innovation-web_web        replicated          1/1                 kthregistryv2.sys.kth.se/innovation-web:0.3.9_28ea0bf         *:0->3000/tcp
ss7uev3vicgc        kopps_web                 replicated          1/1                 kthregistryv2.sys.kth.se/kopps:1.1.208_4efb7fd                *:0->9000/tcp
tt11yuhe0uec        lms-api_api               replicated          1/1                 kthregistryv2.sys.kth.se/lms-api:1.2.7_cce534f                *:0->3000/tcp
v4y0n4hptv3y        studentlistor-web_web     replicated          1/1                 kthregistryv2.sys.kth.se/studentlistor-web:0.1.10_4113a50     *:0->3000/tcp
xtyf1aqwomez        kopps-public-dev-l2_web   replicated          0/0                 kthregistryv2.sys.kth.se/kopps-public:1.1.41_2e43631          *:0->9000/tcp
y9o3hocflqkh        dundret_web               replicated          1/1                 kthregistryv2.sys.kth.se/dundret:1.0.14_b3ce6d1               *:0->3000/tcp
yowy81hwzvah        innovation-api_api        replicated          1/1                 kthregistryv2.sys.kth.se/innovation-api:0.3.5_cab016e         *:0->3001/tcp
zudkzfg73rz2        kth-style-web_web         replicated          1/1                 kthregistryv2.sys.kth.se/kth-style-web:2.1.232_b0b05d6        *:0->3000/tcp

This is repeated every RELOAD_INTERVAL seconds in the proxy's Docker log:

proxy.2.w089dw162tni@service1    | The configuration file is valid, but there still may be a misconfiguration somewhere that will give unexpected results, please verify: 
proxy.2.w089dw162tni@service1    | stdout:
proxy.2.w089dw162tni@service1    | 
proxy.2.w089dw162tni@service1    | stderr:
proxy.2.w089dw162tni@service1    | [WARNING] 350/211752 (4737) : parsing [/cfg/haproxy.cfg:176] : 'server kopps-dev-l2_web' : could not resolve address 'kopps-dev-l2_web', disabling server.
proxy.2.w089dw162tni@service1    | [WARNING] 350/211752 (4737) : parsing [/cfg/haproxy.cfg:188] : 'server kopps-l2_web' : could not resolve address 'kopps-l2_web', disabling server.
proxy.2.w089dw162tni@service1    | [WARNING] 350/211752 (4737) : parsing [/cfg/haproxy.cfg:194] : 'server kopps-public-dev-l2_web' : could not resolve address 'kopps-public-dev-l2_web', disabling server.
proxy.2.w089dw162tni@service1    | [WARNING] 350/211752 (4737) : parsing [/cfg/haproxy.cfg:206] : 'server kopps-public-l2_web' : could not resolve address 'kopps-public-l2_web', disabling server.
vfarcic commented 6 years ago

You're right. I should change DFSL to send the "remove service" notification to DFP when a service scales to 0. I'll start working on this in a few days.
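
Roughly, the idea would be for DFSL to hit DFP's remove endpoint when replicas drop to zero, the same call it makes when a service is removed outright. A sketch of what that notification could look like, with the proxy address as a placeholder and the service name borrowed from the listing above:

# Hypothetical illustration of the "remove service" notification
curl "http://proxy:8080/v1/docker-flow-proxy/remove?serviceName=kopps-dev-l2_web"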

vfarcic commented 6 years ago

A side question (not related to the need to add the new feature to DFSL)... Why do you scale services to 0 replicas?

patricjansson commented 6 years ago

Super, thanks!

On your question: Our deployment pipeline works like a cron job, deploying Docker Compose files for each application (all automated). The way for our developers to stop a service is to set its replicas to zero. We could invoke docker service rm when replicas is 0, but we handle permanent removal of applications in another way.
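
For illustration, the CLI equivalent of what the pipeline does through the stack files (the service name is taken from the listing above; in the Compose file it is the deploy.replicas value that gets set to zero):

# Stop serving the endpoint without removing the service definition
docker service scale kopps-dev-l2_web=0

# Bring it back when the endpoint should be available again
docker service scale kopps-dev-l2_web=2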

vfarcic commented 6 years ago

Why not rolling updates?

patricjansson commented 6 years ago

Oh, not for updating. It's for the "I do not want this API endpoint to be available for a week" use case.

remy-tiitre commented 6 years ago

scale=0 seems to be the best option to stop a service: you want the configuration to stay in place, just with no containers running. So I have stumbled over that log pollution as well.

vfarcic commented 6 years ago

I executed a few tests (specified in https://github.com/vfarcic/docker-flow-proxy/tree/master/issues/396). Long story short, I could not reproduce it. Can you confirm that you're running the latest DFM and DFSL? If you are, do you have any additional tips that would help me reproduce the issue?

remy-tiitre commented 6 years ago

I'll have to look into it, but I think the pollution does not appear when you scale a service to 0, but rather when you have some services scaled to 0 and then redeploy or restart docker-flow-proxy.
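
A rough reproduction sketch of that scenario, using service names from the listing above as examples:

# 1. Scale a proxied service down to zero replicas
docker service scale kopps-dev-l2_web=0

# 2. Redeploy/restart the proxy so it regenerates its HAProxy configuration
docker service update --force proxy

# 3. Watch the proxy log for the repeating "could not resolve address" warnings
docker service logs -f proxy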

vfarcic commented 6 years ago

That's it, @remy-tiitre. I finally reproduced it. I'll work on the fix tomorrow.

vfarcic commented 6 years ago

The problem was in the Swarm Listener. It's been fixed with the tag vfarcic/docker-flow-swarm-listener:17.12.19-25. Can you please try it out and let me know whether it works as expected?
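
On a running stack, updating the listener image in place should be enough to pick up the fix, something along these lines (the service name is the one from the listing above):

docker service update \
    --image vfarcic/docker-flow-swarm-listener:17.12.19-25 \
    swarm-listener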

patricjansson commented 6 years ago

Hi Viktor! Great work. 17.12.19-25 did solve the 0/0 issue.