ehazlett / interlock

Docker Event Driven Plugin System
Apache License 2.0

Feature Request: possibility to proxy based on domain and context #206

Closed kmoens closed 7 years ago

kmoens commented 8 years ago

We have a setup where we want to proxy based on both domain and context, instead of only one of the two.

For example:

http://dev-shared/app1 -> container1
http://dev-shared/app2 -> container2
http://nb-shared/app1 -> container3
http://nb-shared/app2 -> container4
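
As a rough sketch, this is the kind of per-container labelling we would like to be able to use for that mapping (the image names are placeholders, and dev-shared/nb-shared stand in for real hostnames):

# app1 and app2 behind the dev-shared hostname
$> docker run -ti -d -P --label interlock.hostname=dev-shared --label interlock.context_root=/app1 app1-image
$> docker run -ti -d -P --label interlock.hostname=dev-shared --label interlock.context_root=/app2 app2-image
# the same context roots behind the nb-shared hostname, routed to different containers
$> docker run -ti -d -P --label interlock.hostname=nb-shared --label interlock.context_root=/app1 app3-image
$> docker run -ti -d -P --label interlock.hostname=nb-shared --label interlock.context_root=/app2 app4-image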

Currently this doesn't appear to be supported. Even with custom templates, it looks to me like the host information gets lost as soon as I supply a context root.

mbentley commented 8 years ago

That should work with the existing context root support. Provide interlock.hostname, interlock.domain, and interlock.context_root for each container.

kmoens commented 8 years ago

Hmm, that doesn't seem to be the case.

I've configured my container with the following labels:

        labels:
            - "interlock.hostname=dev-docker-1"
            - "interlock.domain=int.cipal.be"
            - "interlock.context_root=/porta"

Still the generated haproxy.cfg file shows no trace of the host name:

frontend http-default
    bind *:80

    monitor-uri /haproxy?monitor
    stats realm Stats
    stats auth admin:interlock
    stats enable
    stats uri /haproxy?stats
    stats refresh 5s
    acl url_porta path_beg /porta
    use_backend ctx_porta if url_porta
    acl url_SpectraWeb path_beg /SpectraWeb
    use_backend ctx_SpectraWeb if url_SpectraWeb

backend ctx_porta
    acl missing_slash path_reg ^/porta[^/]*$
    redirect code 301 prefix / drop-query append-slash if missing_slash

    http-response add-header X-Request-Start %Ts.%ms
    balance roundrobin

    server dev_porta_1 172.17.0.1:32813 check inter 5000

What I would expect is something like this:

frontend http-default
    bind *:80

    monitor-uri /haproxy?monitor
    stats realm Stats
    stats auth admin:interlock
    stats enable

    stats uri /haproxy?stats
    stats refresh 5s

    acl appSpectra       path_beg /SpectraWeb
    acl appPorta         path_beg /porta

    acl streamDev        hdr_beg(host)   dev-docker-1.int.cipal.be

    use_backend dev-spectra     if streamDev appSpectra
    use_backend dev-porta       if streamDev appPorta

backend dev-spectra
    balance roundrobin
    server dev-spectra dev_spectra_1

backend dev-porta
    balance roundrobin
    server dev-porta dev_porta_1

I've tried also with a custom template file, but the variable {{ $host.domain }} does not contain the domain information anymore.

This is also what I see when I read the code at ext/lb/haproxy/generate.go, lines 47-54. The hostname/domain info seems to get lost (intentionally?) as soon as a context root is defined.

mbentley commented 8 years ago

Sounds like it could be a bug then. I'm quite certain it works with nginx as the software load balancer.

adpjay commented 7 years ago

It looks as though this is a problem in the master branch as of 4/26/2017 https://github.com/ehazlett/interlock/blob/master/ext/lb/nginx/generate.go#L50

The logic replaces the domain name with the context root and ends up grouping all context roots together regardless of domain or host. This seems like a defect. We have the exact same use case as @kmoens and it does not appear to be working with nginx. Even worse than not supporting the same context_root value routed to different containers, all containers with the same context_root are added to the list of servers to be routed to. In @kmoens's example, a request to http://dev-shared/app2 could get routed to container4 instead of container2 as would be expected. The logic looks similar in the haproxy code as well.

ahalem commented 7 years ago

Matt, is that something you can help us sponsor a fix for? I am going to open a ticket about it as well.

mbentley commented 7 years ago

Funny you should ask; I was literally just chatting about this issue. It would be good to have a case open for us to track as well, but yeah, this will be looked at.

ahalem commented 7 years ago

Thank you guys, here is the ticket number: Case Number: 00021036


ehazlett commented 7 years ago

Opened #216. Please test -- you can use the image ehazlett/interlock:dev with both HAProxy and nginx.

ehazlett commented 7 years ago

/cc @ahalem @kmoens @adpjay ^

ehazlett commented 7 years ago

Any update? Is this working for you?

adpjay commented 7 years ago

It does not appear to be working. We're now getting duplicated entries in the nginx.conf for the context_root, and they are still not qualified by DNS name. The result is that no routing happens and we get a 503 error.

nginx.conf

# managed by interlock
user  www-data;
worker_processes  2;
worker_rlimit_nofile 65535;
error_log  /var/log/error.log warn;
pid        /etc/nginx/nginx.pid;
events {
    worker_connections  1024;
}
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    server_names_hash_bucket_size 128;
    client_max_body_size 2048M;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"'
                        'rt=$request_time uct="$upstream_connect_time" uht="$upstream_header_time" urt="$upstream_response_time" host_header="$http_host"';

    access_log  /var/log/nginx/access.log  main;
    sendfile        on;
    #tcp_nopush     on;
    keepalive_timeout  65;
    # If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
    # scheme used to connect to this server
    map $http_x_forwarded_proto $proxy_x_forwarded_proto {
      default $http_x_forwarded_proto;
      ''      $scheme;
    }
    #gzip  on;
    proxy_connect_timeout 600;
    proxy_send_timeout 600;
    proxy_read_timeout 600;
    proxy_set_header        X-Real-IP         $remote_addr;
    proxy_set_header        X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header        X-Forwarded-Proto $proxy_x_forwarded_proto;
    proxy_set_header        Host              $http_host;
    send_timeout 600;
    # ssl
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }
    # default host return 503
    server {
            listen 80;
            server_name _;
            underscores_in_headers on;
            ignore_invalid_headers off;
            location / {
                return 503;
            }

            location /root1 {
                 if ($request_method = "GET" ) { rewrite ^([^.]*[^/])$ $1/ permanent; }
                rewrite  ^/root1/(.*)  /$1 break;
                proxy_pass http://ctx_root1;
            }

            location /root1 {
                 if ($request_method = "GET" ) { rewrite ^([^.]*[^/])$ $1/ permanent; }
                rewrite  ^/root1/(.*)  /$1 break;
                proxy_pass http://ctx_root1;
            }

            location /nginx_status {
                stub_status on;
                access_log off;
            }
    }

    upstream ctx_root1 {
        zone ctxinterlock-route-a-test.benefits.example.com_backend 64k;
        server 11.16.44.82:32783;

    } 

    upstream ctx_root1 {
        zone ctxinterlock-route-b-test.benefits.example.com_backend 64k;
        server 11.16.44.82:32782;

    } 

    include /etc/nginx/conf.d/*.conf;
}

The above was created with the two docker compose files below:

docker-compose.A1.yml

version: '2'
services:
  webA1:
    image: interlock-context_root-test:latest
    labels:
      interlock.hostname: interlock-route-a-test
      interlock.domain: example.com
      interlock.port: "80"
      interlock.context_root: /root1
      interlock.context_root_rewrite: "true"
    network_mode: bridge
    ports:
     - "80"
    restart: always
    environment:
      - runtime_content=App A root 1

docker-compose.B1.yml

version: '2'
services:
  webB1:
    image: interlock-context_root-test:latest
    labels:
      interlock.hostname: interlock-route-b-test
      interlock.domain: example.com
      interlock.port: "80"
      interlock.context_root: /root1
      interlock.context_root_rewrite: "true"
    network_mode: bridge
    ports:
     - "80"
    restart: always
    environment:
      - runtime_content=App B root 1

ehazlett commented 7 years ago

@adpjay can you give me the version from the logs? It appears to be working with the ehazlett/interlock:dev image listed above:

    # default host return 503
    server {
            listen 80;
            server_name _;

            location / {
                return 503;
            }

            location /nginx_status {
                stub_status on;
                access_log off;
            }
    }

    upstream interlock-route-a-test.example.com {
        zone interlock-route-a-test.example.com_backend 64k;
        server 172.17.0.1:32772;
    }
    server {
        listen 80;
    location /root1 {
        rewrite ^([^.]*[^/])$ $1/ permanent;
        rewrite  ^/root1/(.*)  /$1 break;
        proxy_pass http://interlock-route-a-test.example.com;
    }

        server_name interlock-route-a-test.example.com;

        location / {
            proxy_pass http://interlock-route-a-test.example.com;
        }   
    }

adpjay commented 7 years ago

I get the same nginx.conf as you when one service is created (the "A1" service, for example). But when another service is added the routes get duplicated and only the first one works.

niroowns commented 7 years ago

@ehazlett - we are using: INFO[0000] interlock 1.4.0-dev (c610d87)

adpjay commented 7 years ago

To clarify: @niroowns answered for me with the version used in my test.

ehazlett commented 7 years ago

Hmm this is what I get when using your compose files. Two separate contexts with two separate hostnames:

https://gist.github.com/ehazlett/b1178b2a6bc757ac3440d2779a2e1acc

adpjay commented 7 years ago

That looks much better than what we're seeing. I'm building a test harness for this now using our internal Docker swarm. Do you have a self-contained (easier, more portable) way to test this so we can share it?

ehazlett commented 7 years ago

Basically I use the example config from docs/examples/nginx and then your compose file (replacing the image with nginx).

I want to get an integration suite in place; I just haven't had time yet.

ehazlett commented 7 years ago

@adpjay can you post your nginx.conf? They look similar but they should have different server_name.

adpjay commented 7 years ago

@ehazlett I posted the nginx.conf above. There is only one server_name but multiple location /root1 entries within it.

ehazlett commented 7 years ago

Yes this looks like an old version of interlock. Are you sure you are using the dev tag in your tests?


adpjay commented 7 years ago

I didn't start the one I originally tested against, but the logs had this at the start: interlock 1.4.0-dev (c610d87)

However, I just went through your "Getting Started" guide locally on my mac with the interlock:dev version and the nginx.conf was generated correctly when a new container is added. Could it be that a configuration point on my original interlock:dev test is interfering?

adpjay commented 7 years ago

OK, it looks like the nginx.conf.template file was being used in the failing scenario with the latest code. I'll remove that and try again.

ehazlett commented 7 years ago

Ah yeah, if you are using a custom template, that would need to be updated. Sorry!

adpjay commented 7 years ago

The custom nginx.conf.template was the culprit. I removed that from the setup and I'm now getting the correct nginx.conf file for B1 and A1. However, now I have another scenario that is not working: A1 and A2 together. See the two docker-compose files below:

docker.compose.A1.yml

version: '2'
services:
  webA1:
    image: interlock-context_root-test:latest
    labels:
      interlock.hostname: interlock-route-a-test
      interlock.domain: example.com
      interlock.port: "80"
      interlock.context_root: /root1
      interlock.context_root_rewrite: "true"
    network_mode: bridge
    ports:
     - "80"
    restart: always
    environment:
      - runtime_content=App A root 1

docker.compose.A2.yml

version: '2'
services:
  webA1:
    image: interlock-context_root-test:latest
    labels:
      interlock.hostname: interlock-route-a-test
      interlock.domain: example.com
      interlock.port: "80"
      interlock.context_root: /root2
      interlock.context_root_rewrite: "true"
    network_mode: bridge
    ports:
     - "80"
    restart: always
    environment:
      - runtime_content=App A root 2

In the resulting nginx.conf, the /root2 context root doesn't show up at all, and calls to http://interlock-route-a-test.example.com/root1/ alternate between container A1 and container A2.

The ideal behavior is:

calls to http://interlock-route-a-test.example.com/root1/ route to container A1
calls to http://interlock-route-a-test.example.com/root2/ route to container A2

adpjay commented 7 years ago

Attached is a test on a local swarm reproducing the problem. The test is executed from Maven or JUnit. Two of the tests pass and two of the tests fail: test.zip

ehazlett commented 7 years ago

Ok. Just so I can understand what you are trying to do: You want multiple context roots to route to different containers under the same hostname?

adpjay commented 7 years ago

Exactly. Yes. Let me know how else I can help. Thank you

ehazlett commented 7 years ago

@adpjay OK -- that's essentially a new feature and will take some more work. Currently only single context roots per host are supported.

adpjay commented 7 years ago

Multiple context roots per host are supported with interlock:1.3.0, and it works perfectly as long as the context_roots are globally unique. So, to sum up:

Working with version 1.3.0:
container-a.example.com/root1
container-a.example.com/root2
container-b.example.com/root3

Failing with version 1.3.0:
container-a.example.com/root1
container-b.example.com/root1

Working with version "dev":
container-a.example.com/root1
container-b.example.com/root2

Failing with version "dev":
container-a.example.com/root1
container-a.example.com/root2

I was hoping to work around the issue with a custom template, but it doesn't work. Given the current state, I'd say that for our purposes the dev version is a regression compared to the release version.

ehazlett commented 7 years ago

@adpjay Interesting. It wasn't designed for that, but perhaps the template has changed in a way that removed that behavior. I'll take a look. Thanks!!

ehazlett commented 7 years ago

@kmoens @adpjay I think this is what you are looking for:

    upstream test.local {      
        zone test.local_backend 64k;
        server 172.17.0.1:32776;                                       
    }                                      

    upstream ctxlocal__app {
        zone ctxlocal__app_backend 64k;
        server 172.17.0.1:32772;                                       
    }
    upstream ctxlocal__app2 {
        zone ctxlocal__app2_backend 64k;
        server 172.17.0.1:32774;
    }
    upstream ctxlocal__app3 {
        zone ctxlocal__app3_backend 64k;
        server 172.17.0.1:32777;
        server 172.17.0.1:32775;
    }

    server {
        listen 80;
        server_name test.local;

        location /app {
            rewrite ^([^.]*[^/])$ $1/ permanent;
            rewrite  ^/app/(.*)  /$1 break;
            proxy_pass http://ctxlocal__app;
        }

        location /app2 {
            rewrite ^([^.]*[^/])$ $1/ permanent;
            rewrite  ^/app2/(.*)  /$1 break;
            proxy_pass http://ctxlocal__app2;
        }

        location /app3 {
            rewrite ^([^.]*[^/])$ $1/ permanent;
            rewrite  ^/app3/(.*)  /$1 break;
            proxy_pass http://ctxlocal__app3;
        }
    }

Using run commands:

$> docker run -ti -d -P --label interlock.hostname=test --label interlock.domain=local nginx

$> docker run -d -ti --rm -P --label interlock.hostname=test --label interlock.domain=local --label interlock.context_root=/app --label interlock.context_root_rewrite=1 nginx
$> docker run -ti -d -P --label interlock.hostname=test --label interlock.domain=local --label interlock.context_root=/app2 --label interlock.context_root_rewrite=1 nginx
$> docker run -ti -d -P --label interlock.hostname=test --label interlock.domain=local --label interlock.context_root=/app3 --label interlock.context_root_rewrite=1 nginx

Can you please test the latest image ehazlett/interlock:dev and see if that is what you are looking for?

Thanks!

adpjay commented 7 years ago

@ehazlett - The test case doesn't match what we're looking for. What you have in your comment will work today without modification. What we need is that plus the ability to use the same context_root for different hosts. The run commands you listed above all use the same host name. Are you able to run the tests I sent above?
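
As a sketch, the missing case is the analogue of your run commands above, but with two different hostnames sharing the same context root (nginx is just a placeholder image here, and the hostnames match the earlier summary):

# same /root1 context root behind two different hostnames
$> docker run -ti -d -P --label interlock.hostname=container-a --label interlock.domain=example.com --label interlock.context_root=/root1 --label interlock.context_root_rewrite=1 nginx
$> docker run -ti -d -P --label interlock.hostname=container-b --label interlock.domain=example.com --label interlock.context_root=/root1 --label interlock.context_root_rewrite=1 nginx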

ehazlett commented 7 years ago

@adpjay In that case you should be able to use an alias as mentioned in the docs. Your compose files both have the webA1 service (I assume you mean webA2) and do not specify the requirement to have the same context roots on different hosts. In fact, the comment here:

The ideal behavior is:

calls to http://interlock-route-a-test.example.com/root1/ route to container A1
calls to http://interlock-route-a-test.example.com/root2/ route to container A2

...says the opposite: different context roots under the same hostname. Sorry, I'm just confused about what you are trying to do.

In your comment above:

Working with version 1.3.0:
container-a.example.com/root1
container-a.example.com/root2
container-b.example.com/root3

This should be solved with the latest build: you would have containers that specify the host for container-a with /root1 and /root2, as well as another host container-b with /root3. I will try to get a test compose file to confirm.

Thanks!

adpjay commented 7 years ago

@ehazlett Yes, I did have a mistake in the test example. The second "webA1" service should be named "webB2". I pulled the latest ehazlett/interlock:dev and the nginx.conf looks very good! My test doesn't work anymore because it has a template that references the ContextRoot property of the host, which I guess doesn't exist anymore:

level=error msg="template: lb:67:19: executing "lb" at <$host.ContextRoot.Pa...>: ContextRoot is not a field of struct type *nginx.Host" ext=lb

Is that field no longer available?

ehazlett commented 7 years ago

Your custom template will not work, as I had to rework the internals to support this. With this change you shouldn't need a custom template. Since custom templates are provided as a convenience, there is no guarantee they will keep working; only the built-in one is ensured to work.

adpjay commented 7 years ago

I looked at your change and I can see how I can fix my custom template. The main reason we have a custom template is to update the log_format to include the $upstream_addr field so that we can do better egress logging in the nginx container. I'd love it if we had a better way to include that.
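
For reference, the change we carry in the custom template is roughly this -- the log_format from the generated config above with the upstream address appended (a sketch, not the exact template we ship):

    # same fields as the generated log_format, plus the upstream that actually served the request
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" '
                      'rt=$request_time uct="$upstream_connect_time" uht="$upstream_header_time" urt="$upstream_response_time" '
                      'host_header="$http_host" upstream=[$upstream_addr]';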

In my test, I also have a custom template because the generated config fails with an error due to the "user" directive on line 2:

2017/05/19 22:35:26 [emerg] 20#20: getpwnam("www-data") failed in /etc/nginx/nginx.conf:2
nginx: [emerg] getpwnam("www-data") failed in /etc/nginx/nginx.conf:2

ehazlett commented 7 years ago

Cool. Yeah, effectively you are maintaining a "fork" of the template, so you will need to "rebase" your template off of the new one. Sounds good!


adpjay commented 7 years ago

I'm sorry, I was overstating how good the nginx.conf file looked (it did look good, but it still has some functional issues). I couldn't work around them with my own custom template because we don't have all the data available to us. I think we'll need a code change for this. Two suggestions:

  1. Include the host name and domain name in the Name property of the ContextRoot struct. Currently just the domain name is in there, which causes a collision for containers that share a domain and a context root but have different hosts (sketched below).
  2. Only include the non-context-root upstream if there is a container that doesn't have an interlock.context_root label. All containers in the attached test use context roots.

I have one more request: in template.go, include [$upstream_addr] in the log_format so that we can track which server and port the request actually went to.

Here is an updated test with the issues fixed. You should be able to run it with Maven, or just JUnit: test.zip
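
Purely as an illustration of suggestion 1, upstream names that also carry the host would no longer collide for the two /root1 services from the earlier nginx.conf (the exact naming scheme is up to you; these names are hypothetical):

    # one upstream per host + domain + context root instead of a single shared ctx_root1
    upstream ctxinterlock-route-a-test.example.com__root1 {
        server 11.16.44.82:32783;
    }

    upstream ctxinterlock-route-b-test.example.com__root1 {
        server 11.16.44.82:32782;
    }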

ehazlett commented 7 years ago

@adpjay ok the latest fixes (and latest build ehazlett/interlock:dev - image id 2657d5413d8b) have your tests passing:

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running com.adp.interlock.TestRouting
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.824 sec

Results :

Tests run: 4, Failures: 0, Errors: 0, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 27.972 s
[INFO] Finished at: 2017-05-23T14:41:58Z
[INFO] Final Memory: 12M/30M
[INFO] ------------------------------------------------------------------------

Please try the latest build and see if that solves it. Thanks for the feedback!

adpjay commented 7 years ago

@ehazlett Thanks! I'm glad to see the tests worked out. I tried pulling an updated image, but it was reported that my local image was up to date. Should there be a new image on Docker Hub? https://hub.docker.com/r/ehazlett/interlock/tags/

Do you have any thoughts on including the [$upstream_addr] in the log_format directive?

ehazlett commented 7 years ago

@adpjay Ah yes, sorry -- I was on crappy wifi and my connection dropped during the push. It's up to date now. Thanks!

I've also added $upstream_addr to this build. Thanks!

adpjay commented 7 years ago

@ehazlett Very nice! I added one more test (attached) without using context_root in the same cluster and it works great. Thanks very much!

test.zip

ehazlett commented 7 years ago

@adpjay Awesome, thanks for testing! I'll cut a release later today with this merged.

ehazlett commented 7 years ago

@adpjay the latest version (1.3.2) has been released. Can you test to make sure it's passing on your side? Thanks!

adpjay commented 7 years ago

@ehazlett The tests pass with 1.3.2. Thank you!

One thing that maybe you can help with in my test: after starting up the docker-compose, I still need to reload the nginx configuration manually. The log doesn't show anything obvious to me:

interlock_1  | time="2017-05-20T09:54:17Z" level=info msg="interlock 1.3.2-dev (408d3b9)"
interlock_1  | time="2017-05-20T09:54:17Z" level=info msg="interlock node: container id=a643083b21919c69449c4ba0263bb85f0f9f47fa3d9eea6b66e7d87e0c095521" ext=lb
webB1_1      | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
webA2_1      | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.3. Set the 'ServerName' directive globally to suppress this message
webB1_1      | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
interlock_1  | time="2017-05-20T09:54:17Z" level=info msg="using event stream"
webB1_1      | [Sat May 20 09:54:20.666635 2017] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.10 (Debian) PHP/7.0.18 configured -- resuming normal operations
webB1_1      | [Sat May 20 09:54:20.666679 2017] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
webB2_1      | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.4. Set the 'ServerName' directive globally to suppress this message
interlock_1  | time="2017-05-20T09:54:20Z" level=info msg="interlock-route-b-test.example.com: upstream=172.17.0.1:32881" ext=nginx
webA2_1      | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.3. Set the 'ServerName' directive globally to suppress this message
webC_1       | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.5. Set the 'ServerName' directive globally to suppress this message
webB2_1      | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.4. Set the 'ServerName' directive globally to suppress this message
interlock_1  | time="2017-05-20T09:54:20Z" level=info msg="interlock-route-a-test.example.com: upstream=172.17.0.1:32880" ext=nginx
webB2_1      | [Sat May 20 09:54:20.877079 2017] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.10 (Debian) PHP/7.0.18 configured -- resuming normal operations
webC_1       | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.5. Set the 'ServerName' directive globally to suppress this message
webA1_1      | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.6. Set the 'ServerName' directive globally to suppress this message
webB2_1      | [Sat May 20 09:54:20.877119 2017] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
webA2_1      | [Sat May 20 09:54:20.847109 2017] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.10 (Debian) PHP/7.0.18 configured -- resuming normal operations
webC_1       | [Sat May 20 09:54:21.032949 2017] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.10 (Debian) PHP/7.0.18 configured -- resuming normal operations
interlock_1  | time="2017-05-20T09:54:20Z" level=info msg="interlock-route-b-test.example.com: upstream=172.17.0.1:32879" ext=nginx
webA2_1      | [Sat May 20 09:54:20.847148 2017] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
webA1_1      | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.6. Set the 'ServerName' directive globally to suppress this message
webC_1       | [Sat May 20 09:54:21.033005 2017] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
webA1_1      | [Sat May 20 09:54:21.215399 2017] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.10 (Debian) PHP/7.0.18 configured -- resuming normal operations
webA1_1      | [Sat May 20 09:54:21.215458 2017] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
interlock_1  | time="2017-05-20T09:54:23Z" level=info msg="reload duration: 3923.39ms" ext=lb
interlock_1  | time="2017-05-20T09:54:23Z" level=info msg="interlock-route-a-test.example.com: upstream=172.17.0.1:32883" ext=nginx
interlock_1  | time="2017-05-20T09:54:23Z" level=info msg="interlock-route-c-test.example.com: upstream=172.17.0.1:32882" ext=nginx
interlock_1  | time="2017-05-20T09:54:23Z" level=info msg="interlock-route-b-test.example.com: upstream=172.17.0.1:32881" ext=nginx
interlock_1  | time="2017-05-20T09:54:23Z" level=info msg="interlock-route-a-test.example.com: upstream=172.17.0.1:32880" ext=nginx
interlock_1  | time="2017-05-20T09:54:23Z" level=info msg="interlock-route-b-test.example.com: upstream=172.17.0.1:32879" ext=nginx
interlock_1  | time="2017-05-20T09:54:26Z" level=info msg="reload duration: 2711.54ms" ext=lb

ehazlett commented 7 years ago

Hmm, I just stopped one of the services while all were running and it properly updated nginx. Seems to be working fine here. If you continue to see it, enable debug (--debug) for interlock and try the reload. You should see the proxy containers it tries to restart.

audacity410 commented 7 years ago

Hi ehazlett, does the context root of the request URL have to be the same as the context root of the interlock destination URL? For example:

http://domain1.com/app1 -> http://domain2.com/app2

Will this example work? We only got it working with the same context root paths.