ehazlett / interlock

Docker Event Driven Plugin System

Interlock removing network config of nginx container. #149

Closed · tomskip123 closed this issue 8 years ago

tomskip123 commented 8 years ago

I think this may be a duplicate of #88.

Here are the network settings of my nginx container:

 "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "8e969c800a197435eb3bd8b06c36e30c19873247a003881e9a821eff5561ec74",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "443/tcp": null,
                "80/tcp": [
                    {
                        "HostIp": "54.237.215.17",
                        "HostPort": "80"
                    }
                ]
            },
            "SandboxKey": "/var/run/docker/netns/8e969c800a19",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {}
        }

Networks is empty, so it doesn't work with my overlay network, and I have to keep reconnecting it to the network manually.
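
The manual re-attach I keep having to do is just a plain docker network connect; the network name is the compose-prefixed one from the log below, and the container reference is a placeholder:

docker network connect code_toolkit-net <nginx-container-id>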

Then I see this entry in the interlock logs:

interlock_1 | DEBU[0001] disconnecting proxy container from network: id=LONG_STRING_ID net=code_toolkit-net ext=lb

Why is it doing this? It's annoying. Am I doing something incorrectly, or is there an option in interlock to attach the proxy to a certain network every time it reloads?

This is my docker-compose file:

version: "2"

services:
  interlock:
      image: ehazlett/interlock:1.1.0
      command: -D run
      tty: true
      ports:
        - 8080
      environment:
        INTERLOCK_CONFIG: |
            ListenAddr = ":8080"
            DockerURL = "${SWARM_HOST}"
            TLSCACert = "/var/lib/boot2docker/ca.pem"
            TLSCert = "/var/lib/boot2docker/server.pem"
            TLSKey = "/var/lib/boot2docker/server-key.pem"
            [[Extensions]]
            Name = "nginx"
            ConfigPath = "/etc/nginx/nginx.conf"
            TemplatePath = "/etc/interlock/nginx.conf.template"
            PidPath = "/var/run/nginx.pid"
            MaxConn = 1024
            Port = 80
      volumes:
        - /etc/docker:/var/lib/boot2docker:ro
      networks:
        - toolkit-net

  nginx:
      image: nginx
      entrypoint: nginx
      command: -g "daemon off;" -c /etc/nginx/nginx.conf
      ports:
         - "80:80"
      labels:
        - "interlock.ext.name=nginx"
      networks:
        - toolkit-net

  app:
     image: tutum/apache-php
     ports:
       - 80
     labels:
       - "interlock.hostname=test"
       - "interlock.domain=local"

networks:
  toolkit-net:
      driver: overlay

Once again, this could be a duplicate of #88, but I just wanted to be sure.

ehazlett commented 8 years ago

So this happens when you use Interlock in overlay mode. Basically, it checks whether any containers are still bound to that network and disconnects the proxy so the network can be removed if desired. This looks like a bug, as it shouldn't be happening unless you use the interlock.network label. Thanks for reporting!
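
For reference, the interlock.network label mentioned above is applied to the application container. Assuming the label value is simply the name of the network the proxy should join (the image, hostname/domain labels, and network name below are taken from the compose file and log earlier in this issue), an equivalent docker run would look roughly like:

docker run -d \
    --net=code_toolkit-net \
    --label interlock.hostname=test \
    --label interlock.domain=local \
    --label interlock.network=code_toolkit-net \
    tutum/apache-php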

demaniak commented 8 years ago

This may or may not be related, but I know Mr. Wilder's nginx-based project has a similar-looking issue. See https://github.com/jwilder/nginx-proxy/issues/304

demaniak commented 8 years ago

Just to be clear: does this mean nginx (and HAProxy) are not really feasible when using an overlay network?

ehazlett commented 8 years ago

No, they are completely fine to use with overlay. I haven't been able to reproduce this yet; I need to dig some more.

demaniak commented 8 years ago

@ehazlett I can verify that this issue does happen.

My setup is:

  • multi-host swarm, with overlay networking
  • interlock + nginx
  • docker-compose (goes without saying, I suppose)
  • externally created overlay network (10.9.0.0/16)

So, when the whole shebang starts up (about 8 services), nginx is unresponsive. Inspecting the networking with docker exec c29f5cd2257d ip addr yields:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever

OK, so then I scale the nginx service down to 0 and back up to 1, and check the network setup again. Then we have:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
414: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether 02:42:0a:09:00:06 brd ff:ff:ff:ff:ff:ff
    inet 10.9.0.6/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aff:fe09:6/64 scope link 
       valid_lft forever preferred_lft forever
416: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:12:00:07 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.7/16 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe12:7/64 scope link 
       valid_lft forever preferred_lft forever

And as expected, nginx responds with the default landing page (not my web entry point, sadly :( )

OK, so then I scale the web service down to 0 and back up to 1, interlock does its thing, and nginx is dead again. Inspecting the networking again, everything is gone except loopback.

ehazlett commented 8 years ago

OK, thanks. Can you share the compose file you are using? I'm trying to find out which networks are defined for each service.

demaniak commented 8 years ago

The overlay network was created with docker network create --driver overlay --subnet=10.9.0.0/16 dev-net

I had to clean up the compose file a bit to protect the innocent and the guilty alike; it should all still be consistent (I hope), and nothing pertinent was removed. The consul discovery container is started outside of the swarm and, as far as I can tell, is doing what it should.

Hope it helps. I'm going to switch to HAProxy quickly and see if the same thing happens.

version: '2'
services:
 web:
  image: our.private.repo/systemz-web
  ports:
     - 8080
  restart: on-failure
  extra_hosts:
    - "discovery:10.0.0.130"
  environment:
    - spring.cloud.consul.host=discovery
    - spring.cloud.consul.port=8500
    - spring.profiles.active=prod
    - "constraint:node==dts"
  labels:
        - "interlock.hostname=dts"
        - "interlock.domain=some.domain"
  logging:
     driver: json-file
     options:
       max-size: "16m"
       max-file: "10"
       labels: "web"

 gizmo-api:
  image: our.private.repo/systemz-gizmo-api
  expose:
   - "8080"
  restart: on-failure
  extra_hosts:
   - "discovery:10.0.0.130"
  environment:
   - spring.profiles.active=prod
   - spring.cloud.consul.host=discovery
   - spring.cloud.consul.port=8500
  logging:
     driver: json-file
     options:
       max-size: "16m"
       max-file: "10"
       labels: "gizmo-api"

 payment-api:
  image: our.private.repo/systemz-payment-api
  expose:
   - "8080"
  restart: on-failure
  extra_hosts:
   - "discovery:10.0.0.130"
  environment:
    - spring.profiles.active=prod
    - spring.cloud.consul.host=discovery
    - spring.cloud.consul.port=8500
  logging:
     driver: json-file
     options:
       max-size: "16m"
       max-file: "10"
       labels: "payment-api"

 user-service:
  image: our.private.repo/systemz-user-service
  expose:
   - "8080"
  extra_hosts:
   - "discovery:10.0.0.130"
  depends_on:
    - "widget-api-gateway"
    - "user-db"
  links:
   - user-db:user-store
   - data-service:data-service
  restart: on-failure
  environment:
   - spring.profiles.active=prod
   - spring.cloud.consul.host=discovery
   - spring.cloud.consul.port=8500
  logging:
     driver: json-file
     options:
       max-size: "16m"
       max-file: "10"
       labels: "user-service"

 user-db:
   image: postgres
   restart: on-failure
   environment:
     - POSTGRES_PASSWORD=lkjo34(^%TFdjfhldfdf22
   volumes:
     - user-db-data:/var/lib/postgresql/data/

 data-service:
   image: our.private.repo/systemz-data-service
   expose:
      - "8080"
   restart: on-failure
   privileged: true
   extra_hosts:
     - "discovery:10.0.0.130"
   environment:
     - spring.cloud.consul.host=discovery
     - spring.cloud.consul.port=8500
     - spring.profiles.active=prod
   logging:
     driver: json-file
     options:
       max-size: "16m"
       max-file: "10"
       labels: dcc3

 widget-api-gateway:
   image: our.private.repo/widget-api-gateway
   restart: on-failure
   privileged: true
   environment:
     - spring.cloud.consul.host=discovery
     - spring.cloud.consul.port=8500
     - spring.profiles.active=prod
   extra_hosts:
     - "services.compa.compb:192.168.49.10"
     - "discovery:10.0.0.130"
   logging:
    driver: json-file
    options:
      max-size: "16m"
      max-file: "10"
      labels: thales

 interlock:
    image: ehazlett/interlock:master
    command: run -c /etc/interlock/config.toml
    tty: true
    ports:
        - 8080
    privileged: true
    environment:
      INTERLOCK_CONFIG: |
        ListenAddr = ":8080"
        DockerURL = "tcp://10.0.0.130:3376"
        TLSCACert = "/etc/docker/ca.pem"
        TLSCert = "/etc/docker/server.pem"
        TLSKey = "/etc/docker/server-key.pem"

        [[Extensions]]
        Name = "nginx"
        ConfigPath = "/etc/nginx/nginx.conf"
        PidPath = "/var/run/nginx.pid"
        TemplatePath = ""
        MaxConn = 1024
        Port = 80
        NginxPlusEnabled = false
    volumes:
         - /etc/docker:/etc/docker

 nginx:
    image: nginx:latest
    entrypoint: nginx
    command: -g "daemon off;" -c /etc/nginx/nginx.conf
    ports:
        - 80:80
    labels:
        - "interlock.ext.name=nginx"
    environment:
      - "constraint:node==dts"
    logging:
      driver: json-file
      options:
        max-size: "16m"
        max-file: "10"
        labels: nginx

volumes:
  user-db-data:
    driver: local

networks:
  default:
    external:
      name: dev-net

tomskip123 commented 8 years ago

I did similar steps to @demaniak: I created an external overlay network with --subnet=10.0.9.0/24, then ran docker-compose up -d nginx && docker-compose up -d interlock.

So a different subnet. That shouldn't affect it, should it?

ehazlett commented 8 years ago

I was wondering how you were routing between subnets, since the external network is on a /16. As long as the network still has a referenced container attached, the proxy won't be removed from it. However, if there are no more referenced containers attached to the network, it will remove the proxy container from it. The proxy container is intended to be a "black box" in that Interlock assumes full management over it.
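
To see what that cleanup pass considers attached, you can inspect the network directly; dev-net is the externally created network from the earlier comment:

docker network inspect dev-net
# the "Containers" section of the output lists every container currently
# attached to the network, including the proxy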

pablo-ruth commented 8 years ago

Same problem here. I tried to use v1.1.1 without overlay support (I don't set any interlock.network label) in a Swarm cluster, and when the Interlock container restarts the proxy container, all network interfaces are dropped, so the proxy is unreachable from outside. Tell me if I can help you with any more info.

david-yu commented 8 years ago

I've also tried the same with Docker 1.11.1 and wrote up reproducible steps here: https://github.com/yongshin/docker-node-app-swarm-nginx

This is what I see in my docker-compose logs:

interlock_1  | DEBU[0006] websocket endpoints: []                       ext=nginx
interlock_1  | DEBU[0006] alias domains: []                             ext=nginx
interlock_1  | INFO[0006] test.local: upstream=192.168.99.102:32775     ext=nginx
interlock_1  | DEBU[0006] websocket endpoints: []                       ext=nginx
interlock_1  | DEBU[0006] alias domains: []                             ext=nginx
interlock_1  | INFO[0006] test.local: upstream=192.168.99.103:32778     ext=nginx
interlock_1  | DEBU[0006] websocket endpoints: []                       ext=nginx
interlock_1  | DEBU[0006] alias domains: []                             ext=nginx
interlock_1  | INFO[0006] test.local: upstream=192.168.99.103:32777     ext=nginx
interlock_1  | DEBU[0006] websocket endpoints: []                       ext=nginx
interlock_1  | DEBU[0006] alias domains: []                             ext=nginx
interlock_1  | INFO[0006] test.local: upstream=192.168.99.103:32776     ext=nginx
interlock_1  | DEBU[0006] proxy config path: /etc/nginx/nginx.conf      ext=lb
interlock_1  | DEBU[0006] detected proxy container: id=42fcc4ddf98f6e56849f7214ba35d9286958211b742fd611b28099ef5590ea1c backend=nginx  ext=lb
interlock_1  | DEBU[0006] proxyContainers: [{42fcc4ddf98f6e56849f7214ba35d9286958211b742fd611b28099ef5590ea1c [/agent1/dockernodeappswarm_nginx_1] nginx:latest nginx -g 'daemon off;' -c /etc/nginx/nginx.conf 1464130065 Up 5 seconds [{ 443 0 tcp} {192.168.99.102 80 80 tcp}] 0 0 map[com.docker.compose.oneoff:False com.docker.compose.project:dockernodeappswarm com.docker.compose.service:nginx com.docker.compose.version:1.7.1 com.docker.swarm.id:de68dd0cf07bd438030070e394a1fdaa8a7d924e4ed2acd022b1ba9f4cce1fb3 interlock.ext.name:nginx com.docker.compose.config-hash:73f7997bf690a891fad436755c1355327a668a234fd1712e00d9c844e14b1409 com.docker.compose.container-number:1] {map[dockernodeappswarm_frontend-network:{<nil> [] []  e08d32a622f9d230e1aec086b285e59248009eaa99f360335946bb261ab677cf  10.0.0.3 24   0 02:42:0a:00:00:03}]}}]  ext=lb
interlock_1  | DEBU[0006] saving proxy config                           ext=lb
interlock_1  | DEBU[0006] updating proxy config: id=42fcc4ddf98f6e56849f7214ba35d9286958211b742fd611b28099ef5590ea1c  ext=lb
interlock_1  | DEBU[0006] event received: status=extract-to-dir id=42fcc4ddf98f6e56849f7214ba35d9286958211b742fd611b28099ef5590ea1c type=container action=extract-to-dir 
interlock_1  | DEBU[0006] notifying extension: lb                      
interlock_1  | DEBU[0006] signaling reload                              ext=lb
interlock_1  | DEBU[0008] event received: status=kill id=42fcc4ddf98f6e56849f7214ba35d9286958211b742fd611b28099ef5590ea1c type=container action=kill 
interlock_1  | DEBU[0008] notifying extension: lb                      
interlock_1  | INFO[0008] restarted proxy container: id=42fcc4ddf98f name=/agent1/dockernodeappswarm_nginx_1  ext=nginx
interlock_1  | DEBU[0008] triggering proxy network cleanup              ext=lb
interlock_1  | INFO[0008] reload duration: 2099.93ms                    ext=lb
interlock_1  | DEBU[0008] checking to remove proxy containers from networks  ext=lb
interlock_1  | DEBU[0008] disconnecting proxy container from network: id=42fcc4ddf98f6e56849f7214ba35d9286958211b742fd611b28099ef5590ea1c net=dockernodeappswarm_frontend-network  ext=lb
interlock_1  | DEBU[0008] event received: status= id= type=network action=disconnect 
interlock_1  | DEBU[0008] notifying extension: lb                      

amsdard commented 8 years ago

Hi guys,

I have the very same problem: whenever interlock reloads nginx, it drops all of its networks.

Info: docker-compose 1.7.1, Docker server version 1.11.1, swarm/1.2.2

version: "2"

services:
  interlock:
    image: ehazlett/interlock:1.1.3
    command: -D run -c /etc/interlock/config.toml
    container_name: interlock
    ports:
      - 8080
    environment:
        INTERLOCK_CONFIG: |
            ListenAddr = ":8080"
            DockerURL = "172.28.128.200:2376"
            TLSCACert = "/etc/docker/ca.pem"
            TLSCert = "/etc/docker/server.pem"
            TLSKey = "/etc/docker/server-key.pem"

            [[Extensions]]
            Name = "nginx"
            ConfigPath = "/etc/nginx/nginx.conf"
            PidPath = "/var/run/nginx.pid"
            TemplatePath = ""
            MaxConn = 1024
            Port = 80
            NginxPlusEnabled = false
    volumes:
      - /etc/docker:/etc/docker
      - nginx:/etc/nginx

  nginx:
    image: nginx:latest
    entrypoint: nginx
    command: -g "daemon off;" -c /etc/nginx/nginx.conf
    ports:
      - "80:80"
    labels:
      - "interlock.ext.name=nginx"
    links:
      - interlock:interlock
    volumes:
      - nginx:/etc/nginx

volumes:
  nginx:
    driver: local

The swarm contains 2 nodes (virtual machines) created by docker-machine with the generic driver:

Containers: 7
 Running: 7
 Paused: 0
 Stopped: 0
Images: 23
Server Version: swarm/1.2.2
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 2
 debian8: 172.28.128.211:2376
  └ ID: MNYJ:RQTD:JAFM:GIBN:DS47:ZAY6:RSFV:QVDS:O5HT:EWMQ:PNHY:DIHD
  └ Status: Healthy
  └ Containers: 2
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.026 GiB
  └ Labels: executiondriver=, kernelversion=3.16.0-4-amd64, operatingsystem=Debian GNU/Linux 8 (jessie), provider=generic, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-05-27T14:17:01Z
  └ ServerVersion: 1.11.1
 ubuntu14: 172.28.128.200:2376
  └ ID: OIOF:FAZG:KW24:3ARX:RM3M:Y5QM:WCD5:JTHB:LNE2:AFSM:7J67:SLAN
  └ Status: Healthy
  └ Containers: 5
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.019 GiB
  └ Labels: executiondriver=, kernelversion=3.13.0-74-generic, operatingsystem=Ubuntu 14.04.4 LTS, provider=generic, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-05-27T14:17:10Z
  └ ServerVersion: 1.11.1
Plugins:
 Volume:
 Network:
Kernel Version: 3.13.0-74-generic
Operating System: linux
Architecture: amd64
CPUs: 2
Total Memory: 2.045 GiB
Name: 696cadf2d8b4
Docker Root Dir:
Debug mode (client): false
Debug mode (server): false
WARNING: No kernel memory limit support

Interlock logs:

interlock    | time="2016-05-27T14:24:49Z" level=info msg="interlock 1.1.3 (bfc3d98)"
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="loading config from environment"
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="using tls for communication with docker"
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="docker client: url=172.28.128.200:3376"
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="loading extension: name=nginx"
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="using internal configuration template" ext=lb
interlock    | time="2016-05-27T14:24:49Z" level=info msg="interlock node: id=b3cd3be49aa6f40667c0738b6d557e0a97608cbf49855231bac0f7dfd864af52" ext=lb
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="starting event handling"
interlock    | time="2016-05-27T14:24:49Z" level=info msg="using event stream"
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="event received: status=interlock-start id=0 type= action="
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="notifying extension: lb"
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="triggering reload" ext=lb
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="event received: status=attach id=45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314 type=container action=attach"
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="notifying extension: lb"
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="event received: status= id= type=network action=disconnect"
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="notifying extension: lb"
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="inspecting container: id=45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314" ext=lb
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="checking container labels: id=45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314" ext=lb
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="ignoring proxy container: id=45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314" ext=lb
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="event received: status= id= type=network action=connect"
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="notifying extension: lb"
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="inspecting container: id=45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314" ext=lb
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="checking container labels: id=45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314" ext=lb
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="ignoring proxy container: id=45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314" ext=lb
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="event received: status= id= type=volume action=mount"
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="notifying extension: lb"
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="event received: status=start id=45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314 type=container action=start"
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="notifying extension: lb"
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="inspecting container: id=45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314" ext=lb
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="checking container labels: id=45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314" ext=lb
interlock    | time="2016-05-27T14:24:49Z" level=debug msg="ignoring proxy container: id=45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314" ext=lb
interlock    | time="2016-05-27T14:24:53Z" level=debug msg="reaping key: reload"
interlock    | time="2016-05-27T14:24:53Z" level=debug msg="triggering reload from cache" ext=lb
interlock    | time="2016-05-27T14:24:53Z" level=debug msg="checking to reload" ext=lb
interlock    | time="2016-05-27T14:24:53Z" level=debug msg="updating load balancers" ext=lb
interlock    | time="2016-05-27T14:24:53Z" level=debug msg="generating proxy config" ext=lb
interlock    | time="2016-05-27T14:24:53Z" level=debug msg="websocket endpoints: []" ext=nginx
interlock    | time="2016-05-27T14:24:53Z" level=debug msg="alias domains: []" ext=nginx
interlock    | time="2016-05-27T14:24:53Z" level=info msg="test.local: upstream=172.28.128.211:32781" ext=nginx
interlock    | time="2016-05-27T14:24:53Z" level=debug msg="proxy config path: /etc/nginx/nginx.conf" ext=lb
interlock    | time="2016-05-27T14:24:53Z" level=debug msg="detected proxy container: id=45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314 backend=nginx" ext=lb
interlock    | time="2016-05-27T14:24:53Z" level=debug msg="proxyContainers: [{45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314 [/ubuntu14/proxy_nginx_1] nginx:latest nginx -g 'daemon off;' -c /etc/nginx/nginx.conf 1464351495 Up 3 seconds [{172.28.128.200 80 80 tcp} { 443 0 tcp}] 0 0 map[com.docker.compose.container-number:1 com.docker.compose.oneoff:False com.docker.compose.version:1.7.1 interlock.ext.name:nginx com.docker.swarm.id:bc0861ad423c490353b82028e9701b3c28c1f8d980a9cd5c522a02468508d099 com.docker.compose.config-hash:c3814c416b84461eb48defdac2294eebd2243f9df646e285e5271399d07c289c com.docker.compose.project:proxy com.docker.compose.service:nginx com.docker.swarm.constraints:[\"node==ubuntu14\"]] {map[proxy_default:{<nil> [] []  1956424102ad6e0923b5d2dc9540c62c9e7530a316af9cce99f853b2de9243c5  10.0.1.3 24   0 02:42:0a:00:01:03}]}}]" ext=lb
interlock    | time="2016-05-27T14:24:53Z" level=debug msg="saving proxy config" ext=lb
interlock    | time="2016-05-27T14:24:53Z" level=debug msg="updating proxy config: id=45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314" ext=lb
interlock    | time="2016-05-27T14:24:53Z" level=debug msg="event received: status= id= type=volume action=mount"
interlock    | time="2016-05-27T14:24:53Z" level=debug msg="notifying extension: lb"
interlock    | time="2016-05-27T14:24:53Z" level=debug msg="event received: status=extract-to-dir id=45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314 type=container action=extract-to-dir"
interlock    | time="2016-05-27T14:24:53Z" level=debug msg="notifying extension: lb"
interlock    | time="2016-05-27T14:24:53Z" level=debug msg="event received: status= id= type=volume action=unmount"
interlock    | time="2016-05-27T14:24:53Z" level=debug msg="notifying extension: lb"
interlock    | time="2016-05-27T14:24:53Z" level=debug msg="signaling reload" ext=lb
interlock    | time="2016-05-27T14:24:54Z" level=debug msg="reloading proxy container: id=45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314" ext=nginx
interlock    | time="2016-05-27T14:24:54Z" level=info msg="restarted proxy container: id=45435e365e89 name=/ubuntu14/proxy_nginx_1" ext=nginx
interlock    | time="2016-05-27T14:24:54Z" level=debug msg="triggering proxy network cleanup" ext=lb
interlock    | time="2016-05-27T14:24:54Z" level=info msg="reload duration: 1079.26ms" ext=lb
interlock    | time="2016-05-27T14:24:54Z" level=debug msg="checking to remove proxy containers from networks" ext=lb
interlock    | time="2016-05-27T14:24:54Z" level=debug msg="event received: status=kill id=45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314 type=container action=kill"
interlock    | time="2016-05-27T14:24:54Z" level=debug msg="notifying extension: lb"
interlock    | time="2016-05-27T14:24:54Z" level=debug msg="disconnecting proxy container from network: id=45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314 net=proxy_default" ext=lb
interlock    | time="2016-05-27T14:24:54Z" level=debug msg="event received: status= id= type=network action=disconnect"
interlock    | time="2016-05-27T14:24:54Z" level=debug msg="notifying extension: lb"

Running containers:

pro:proxy koszi$ docker ps
CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS                                                                                   NAMES
45435e365e89        nginx:latest               "nginx -g 'daemon off"   2 hours ago         Up About a minute   172.28.128.200:80->80/tcp, 443/tcp                                                      ubuntu14/proxy_nginx_1
b3cd3be49aa6        ehazlett/interlock:1.1.3   "/bin/interlock -D ru"   2 hours ago         Up About a minute   172.28.128.200:32801->8080/tcp                                                          ubuntu14/interlock
1067a363df03        tutum/hello-world          "/bin/sh -c 'php-fpm "   40 hours ago        Up 40 hours         172.28.128.211:32781->80/tcp                                                            debian8/thirsty_murdock
80907b2ef015        progrium/consul            "/bin/start -server -"   2 days ago          Up 2 days           53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 172.28.128.200:8500->8500/tcp   ubuntu14/consul

docker inspect output for the nginx container:


[
    {
        "Id": "45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314",
        "Created": "2016-05-27T12:18:15.354680295Z",
        "Path": "nginx",
        "Args": [
            "-g",
            "daemon off;",
            "-c",
            "/etc/nginx/nginx.conf"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 32405,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2016-05-27T14:24:49.478530388Z",
            "FinishedAt": "2016-05-27T14:24:46.532584224Z"
        },
        "Image": "sha256:b1fcb97bc5f6effb44ba0b5d60bf927e540dbdcfe091b1b6cd72f0081a12207c",
        "ResolvConfPath": "/var/lib/docker/containers/45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314/hostname",
        "HostsPath": "/var/lib/docker/containers/45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314/hosts",
        "LogPath": "/var/lib/docker/containers/45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314/45435e365e89436c9d3725a683b6876b7aff7dbed937e10f9ffd63b05d13f314-json.log",
        "Name": "/proxy_nginx_1",
        "RestartCount": 0,
        "Driver": "aufs",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "proxy_nginx:/etc/nginx:rw"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "proxy_default",
            "PortBindings": {
                "80/tcp": [
                    {
                        "HostIp": "",
                        "HostPort": "80"
                    }
                ]
            },
            "RestartPolicy": {
                "Name": "",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": [],
            "CapAdd": null,
            "CapDrop": null,
            "Dns": null,
            "DnsOptions": null,
            "DnsSearch": null,
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "StorageOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": -1,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "BlkioIOps": 0,
            "BlkioBps": 0,
            "SandboxSize": 0
        },
        "GraphDriver": {
            "Name": "aufs",
            "Data": null
        },
        "Mounts": [
            {
                "Name": "proxy_nginx",
                "Source": "/var/lib/docker/volumes/proxy_nginx/_data",
                "Destination": "/etc/nginx",
                "Driver": "local",
                "Mode": "rw",
                "RW": true,
                "Propagation": "rprivate"
            }
        ],
        "Config": {
            "Hostname": "45435e365e89",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "443/tcp": {},
                "80/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "NGINX_VERSION=1.11.0-1~jessie"
            ],
            "Cmd": [
                "-g",
                "daemon off;",
                "-c",
                "/etc/nginx/nginx.conf"
            ],
            "Image": "nginx:latest",
            "Volumes": {
                "/etc/nginx": {}
            },
            "WorkingDir": "",
            "Entrypoint": [
                "nginx"
            ],
            "OnBuild": null,
            "Labels": {
                "com.docker.compose.config-hash": "c3814c416b84461eb48defdac2294eebd2243f9df646e285e5271399d07c289c",
                "com.docker.compose.container-number": "1",
                "com.docker.compose.oneoff": "False",
                "com.docker.compose.project": "proxy",
                "com.docker.compose.service": "nginx",
                "com.docker.compose.version": "1.7.1",
                "com.docker.swarm.constraints": "[\"node==ubuntu14\"]",
                "com.docker.swarm.id": "bc0861ad423c490353b82028e9701b3c28c1f8d980a9cd5c522a02468508d099",
                "interlock.ext.name": "nginx"
            }
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "f2ade20636bd2072457ba420dbf1199fc3fb1bc361b876a6343f12dcc22086c7",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "443/tcp": null,
                "80/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "80"
                    }
                ]
            },
            "SandboxKey": "/var/run/docker/netns/f2ade20636bd",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {}
        }
    }
]

Reconnecting to the default network with:

docker network connect proxy_default proxy_nginx_1

solves the problem, but it has to be done again after every nginx reload :(
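
As a crude stop-gap until a fixed build is available, a small watchdog loop can keep re-attaching the proxy (untested sketch; the container and network names are the ones above, and docker network connect simply errors out if the container is already connected):

while true; do
    docker network connect proxy_default proxy_nginx_1 2>/dev/null || true
    sleep 5
done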

ehazlett commented 8 years ago

OK, I've updated and created a test image (ehazlett/interlock:test). Can you try this image and see if it fixes the issue? If so, I'll merge it and it will be in 1.2.

amsdard commented 8 years ago

Looks like it works! At least on my swarm setup. Big thanks!

david-yu commented 8 years ago

I verified that it works on my swarm setup as well.

ehazlett commented 8 years ago

Awesome, thanks for replying!

tomskip123 commented 8 years ago

Great! It works on my swarm cluster as well. How long do you think until you release 1.2? I will be running the test image on my production cluster until then. Is that risky?

ehazlett commented 8 years ago

It shouldn't be too long before release. As long as you don't pull again, the image won't change on your local system, and you should be good to test. Thanks!
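
If you want extra insurance that a later push to the test tag can't silently replace your local copy, you can also note the image digest and give it a private local tag (the test-pinned name is just an example):

docker images --digests ehazlett/interlock
docker tag ehazlett/interlock:test ehazlett/interlock:test-pinned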

ehazlett commented 8 years ago

Fixed in 401dbd909c5a8778bfb362f3b08ec8bc112ef6b8

lcamilo15 commented 7 years ago

The master image doesn't seem to have these changes