adi90x / rancher-active-proxy

All in one active reverse proxy for Rancher ! For Kubernetes : https://github.com/adi90x/kube-active-proxy
MIT License
156 stars 55 forks

nginx.conf always contains duplicates. #39

Open KiaArmani opened 6 years ago

KiaArmani commented 6 years ago

I've installed rancher-active-proxy via the Catalog and launched it with these environment variables:

Variable | Value
-- | --
CRON | `0 2 * * *`
DEBUG | True
DEFAULT_EMAIL | my@mail.tld
DEFAULT_HOST | my.tld
DEFAULT_PORT | 80

Then I added these labels to a DreamFactory container:

Kind | Key | Value
-- | -- | --
User | rap.port | PUBLIC_PORT_OF_CONTAINER
User | rap.le_host | factory.my.tld
User | rap.le_email | my@mail.tld
User | rap.host | factory.my.tld

and restarted rancher-active-proxy, but got this in the log:

```
8.12.2017 16:41:05 INFO Starting rancher-gen RAP Edition master (c5670c9aeedfee7ba00b63d1dd996b271e7249d9)
8.12.2017 16:41:05 INFO Initializing Rancher Metadata client (version latest)
8.12.2017 16:41:05 INFO Processing all templates once.
8.12.2017 16:41:05 INFO Destination file %s has been updated/app/letsencrypt.conf
8.12.2017 16:41:05 INFO All templates processed. Exiting.
8.12.2017 16:41:05 forego      | starting nginx.1 on port 5000
8.12.2017 16:41:05 forego      | starting ranchergen.1 on port 5100
8.12.2017 16:41:05 forego      | starting cron.1 on port 5300
8.12.2017 16:41:05 crond[49]: crond (busybox 1.25.1) started, log level 2
8.12.2017 16:41:05 crond[49]: user:root entry:0 2 * * * /app/letsencrypt.sh
8.12.2017 16:41:05 ranchergen.1 | level=info msg="Starting rancher-gen RAP Edition master (c5670c9aeedfee7ba00b63d1dd996b271e7249d9)"
8.12.2017 16:41:05 ranchergen.1 | level=info msg="Initializing Rancher Metadata client (version latest)"
8.12.2017 16:41:05 ranchergen.1 | level=info msg="Polling Metadata with %d second interval10"
8.12.2017 16:41:05 ranchergen.1 | level=info msg="Destination file %s has been updated/etc/nginx/conf.d/default.conf"
8.12.2017 16:41:05 ranchergen.1 | level=info msg="[nginx -s reload]: \"2017/12/08 15:41:05 [emerg] 62#62: duplicate upstream \\\"factory.my.tld\\\" in /etc/nginx/conf.d/default.conf:86\""
8.12.2017 16:41:05 ranchergen.1 | level=error msg="Notify command failed: exit status 1"
8.12.2017 16:41:15 ranchergen.1 | level=info msg="All templates processed. Waiting for changes in Metadata..."
8.12.2017 16:42:00 crond[49]: user:root entry:0 2 * * * /app/letsencrypt.sh
8.12.2017 16:42:00 crond[49]: wakeup dt=55
8.12.2017 16:42:00 crond[49]: file root:
8.12.2017 16:42:00 crond[49]:  line /app/letsencrypt.sh
8.12.2017 16:43:00 crond[49]: wakeup dt=60
8.12.2017 16:43:00 crond[49]: file root:
8.12.2017 16:43:00 crond[49]:  line /app/letsencrypt.sh
```

Checking nginx's default.conf, I see this:

```nginx
# Rancher Services config

upstream gsalloc.docker.kia.ovh {
        #Acces through rancher managed network
        server DOCKER_INTERNAL_IP:PUBLIC_PORT_OF_CONTAINER;
        server localhost down;
}

server {
        server_name factory.my.tld;
        listen 80;
        access_log /var/log/nginx/access.log vhost;

        location / {
                proxy_pass http://factory.my.tld;
        }
}

# Standalone containers config

upstream factory.my.tld {
        #Acces through rancher managed network
        server :10000;
        server localhost down;
}

server {
        server_name factory.my.tld;
        listen 80;
        access_log /var/log/nginx/access.log vhost;

        location / {
                proxy_pass http://factory.my.tld;
        }
}
```

So the `factory.my.tld` upstream and its server block are generated twice: once under "Rancher Services config" and once under "Standalone containers config". If I delete the duplicate and restart the container, it just gets added back. How can I resolve this?
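For anyone hitting the same error, a quick way to confirm which upstream names collide is to list duplicated `upstream` declarations in the generated file. This is a minimal sketch: it builds a sample config resembling the one above (the IPs and the `/tmp` path are placeholders, not real rancher-gen output) and greps for names defined more than once, which is exactly what nginx rejects with the `duplicate upstream` emerg error.

```shell
#!/bin/sh
# Sketch only: fabricate a config like the duplicated one above.
# In the real container you would point grep at /etc/nginx/conf.d/default.conf.
cat > /tmp/default.conf <<'EOF'
upstream gsalloc.docker.kia.ovh {
        server 10.0.0.2:80;
}
upstream factory.my.tld {
        server 10.0.0.3:10000;
}
upstream factory.my.tld {
        server 10.0.0.3:10000;
}
EOF

# Extract "upstream <name>" tokens and print only the names seen more
# than once -- these are the ones nginx will refuse to load.
grep -o 'upstream [^ ]*' /tmp/default.conf | sort | uniq -d
# prints: upstream factory.my.tld
```

Running `nginx -t` inside the container reports the same collision with the offending line number.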

adi90x commented 6 years ago

Sorry for the very late reply. I think you probably have two containers carrying the labels: one as a standalone container (I guess for the Rancher interface, port 10000) and one as a Rancher service (a container managed by Rancher), no?
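If that is the cause, then once the `rap.*` labels are left on only one of the two (the Rancher service, say), the template should emit a single upstream/server pair, roughly like the sketch below. The IP and port are placeholders for illustration, not actual rancher-gen output.

```nginx
# Sketch only -- placeholder address, not real generated config.
upstream factory.my.tld {
        # Access through the Rancher managed network
        server 10.42.0.5:80;
        server localhost down;
}

server {
        server_name factory.my.tld;
        listen 80;
        access_log /var/log/nginx/access.log vhost;

        location / {
                proxy_pass http://factory.my.tld;
        }
}
```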