kubernetes / ingress-nginx

Ingress NGINX Controller for Kubernetes
https://kubernetes.github.io/ingress-nginx/
Apache License 2.0

Default Backend Annotation Incompatible with custom-http-errors #3700

Closed ts-mini closed 5 years ago

ts-mini commented 5 years ago

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT, I think

NGINX Ingress controller version: 0.22.0
Kubernetes version (use kubectl version): 1.12.4

What happened: If you specify a default-backend override in an ingress resource while also including custom-http-errors, I believe the intercepted errors are not respecting the default-backend annotation.

What you expected to happen: That custom-http-errors would cause proxy_intercept_errors to send the intercepted responses to the custom default backend of your choosing.

How to reproduce it (as minimally and precisely as possible): The YAML below has the service scaled to 0 replicas, which means requests to it WILL return a 503.
Using the YAML files referenced below, deploy them to a Kubernetes cluster (adjusting the ingress host to match your cluster). Deploy it once with JUST the default-backend annotation. You will get your default backend's 503 error as served by the quay.io/kubernetes-ingress-controller/custom-error-pages-amd64:0.3 image. Now uncomment the nginx.ingress.kubernetes.io/custom-http-errors: 404,503 annotation and requests will no longer go to your default backend.

# ECHOSERVER EXAMPLE APP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: echo-svc
spec:
  replicas: 0
  selector:
    matchLabels:
      app: echo-svc
  template:
    metadata:
      labels:
        app: echo-svc
    spec:
      containers:
      - name: echo-svc
        image: gcr.io/kubernetes-e2e-test-images/echoserver:2.1
        ports:
        - containerPort: 8080
        env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
---
apiVersion: v1
kind: Service
metadata:
  name: echo-svc
  labels:
    app: echo-svc
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: echo-svc
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/default-backend: nginx-errors-svc # This is referencing the SAME NAMESPACE that this resource is in
    #nginx.ingress.kubernetes.io/custom-http-errors: 404,503
  name: echo-app-ingress
spec:
  rules:
  - host: echo.domain.com
    http:
      paths:
      - backend:
          serviceName: echo-svc
          servicePort: http
        path: /
---
# ECHOSERVER CUSTOM DEFAULT BACKEND (ATTEMPTING TO OVERRIDE THE DEFAULT BACKEND DEFINED IN THE CONTROLLER INSTALLATION)
apiVersion: v1
kind: Service
metadata:
  name: nginx-errors-svc
  labels:
    app.kubernetes.io/name: nginx-errors
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    app.kubernetes.io/name: nginx-errors
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - port: 80
    targetPort: 8080
    name: http
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-errors
  labels:
    app.kubernetes.io/name: nginx-errors
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: nginx-errors
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nginx-errors
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      containers:
      - name: nginx-error-server
        image: quay.io/kubernetes-ingress-controller/custom-error-pages-amd64:0.3
        ports:
        - containerPort: 8080

Anything else we need to know:

Since the default-backend annotation code only adds an override for 503, there is no real way to combine custom-http-errors with a per-namespace default backend override while this bug exists.

ts-mini commented 5 years ago

I think it has to do with this line of the template: https://github.com/kubernetes/ingress-nginx/blob/6618b3987c0b92e39c4d5b8ef7a48ca38e7e7dae/rootfs/etc/nginx/template/nginx.tmpl#L847 Perhaps the problem is that the custom-http-errors template block is hardcoded to "upstream-default-backend" instead of using the rendered `set $proxy_upstream_name "custom-default-backend-"` code block?
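For illustration, here is a sketch of what the rendered error location could look like if the template substituted the per-ingress backend instead of the hardcoded name. This is an assumption based on the controller's naming convention; the upstream name custom-default-backend-nginx-errors-svc and the abbreviated header list are hypothetical:

```nginx
# Hypothetical rendering; the actual template emits
# "upstream-default-backend" here regardless of the annotation.
location @custom_upstream-default-backend_503 {
    internal;

    proxy_intercept_errors off;

    proxy_set_header X-Code 503;
    # ...same X-Format / X-Original-URI / etc. headers as the generated config...

    # Instead of the hardcoded default:
    #   set $proxy_upstream_name upstream-default-backend;
    # use the per-ingress override rendered from the annotation:
    set $proxy_upstream_name "custom-default-backend-nginx-errors-svc";

    rewrite (.*) / break;
    proxy_pass http://upstream_balancer;
}
```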

jasongwartz commented 5 years ago

@ts-mini this was mentioned when the custom-http-errors annotation was added: https://github.com/kubernetes/ingress-nginx/pull/3344#issuecomment-436238468

I've been working for a while on passing through the default backend to the custom error block.

vijay-veeranki commented 5 years ago

Hi @jasongwartz

This is still not working. I am using the annotations below, and the error page is not being served from my custom default backend in my namespace; it is being served from the cluster default backend instead.

Using these annotations I am expecting 503/504 errors to be served from my custom default backend in my namespace.

The logs show it is using "proxy_upstream_name": "upstream-default-backend".

annotations:
  nginx.ingress.kubernetes.io/custom-http-errors: "418,504,503"
  nginx.ingress.kubernetes.io/default-backend: nginx-errors

Can you help me? I am using the latest Helm chart, and we do have config: custom-http-errors: 413,502,503,504, but I believe the annotations should override that.

jasongwartz commented 5 years ago

@vijay-veeranki-moj which version of ingress-nginx are you using? Can you share the relevant parts of your generated nginx.conf (the one inside the ingress-nginx container)?

vijay-veeranki commented 5 years ago

Hi @jasongwartz Thanks for your reply

I am using appVersion: 0.25.1 https://github.com/helm/charts/blob/master/stable/nginx-ingress/Chart.yaml


    error_page 413 = @custom_upstream-default-backend_413;
    error_page 502 = @custom_upstream-default-backend_502;
    error_page 503 = @custom_upstream-default-backend_503;
    error_page 504 = @custom_upstream-default-backend_504;
            port_in_redirect off;

            set $balancer_ewma_score -1;
            set $proxy_upstream_name    "upstream-default-backend";
            set $proxy_host             $proxy_upstream_name;
            set $pass_access_scheme $scheme;
            set $pass_server_port $server_port;
            set $best_http_host $http_host;
            set $pass_port $pass_server_port;

            set $proxy_alternative_upstream_name "";
    # Global filters

    ## start server _
    server {
        server_name _ ;

        listen 80 default_server reuseport backlog=511 ;
        listen 443 default_server reuseport backlog=511 ssl http2 ;

        set $proxy_upstream_name "-";

        # PEM sha: 9a5b0fb5afccd279e5cc56a363f521d81184ea2d
        ssl_certificate                         /etc/ingress-controller/ssl/default-fake-certificate.pem;
        ssl_certificate_key                     /etc/ingress-controller/ssl/default-fake-certificate.pem;

        ssl_certificate_by_lua_block {
            certificate.call()
        }

        location / {

            set $namespace      "";
            set $ingress_name   "";
            set $service_name   "";
        set $service_port   "{0 0 }";
            set $location_path  "/";

            rewrite_by_lua_block {
                lua_ingress.rewrite({
                    force_ssl_redirect = false,
                    use_port_in_redirects = false,
                })
                balancer.rewrite()
                plugins.run()
            }

            header_filter_by_lua_block {

                plugins.run()
            }
            body_filter_by_lua_block {

            }

            log_by_lua_block {

                balancer.log()

                monitor.call()

                plugins.run()
            }

            if ($scheme = https) {
                more_set_headers                        "Strict-Transport-Security: max-age=15724800; includeSubDomains";
            }

            access_log off;

            port_in_redirect off;

            set $balancer_ewma_score -1;
            set $proxy_upstream_name    "upstream-default-backend";
            set $proxy_host             $proxy_upstream_name;
            set $pass_access_scheme $scheme;
            set $pass_server_port $server_port;
            set $best_http_host $http_host;
            set $pass_port $pass_server_port;

            set $proxy_alternative_upstream_name "";

            client_max_body_size                    50m;

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;

            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Request-ID           $req_id;
            proxy_set_header X-Real-IP              $the_real_ip;

            proxy_set_header X-Forwarded-For        $the_real_ip;

            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;

            proxy_set_header X-Original-URI         $request_uri;

            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_buffering                         off;
            proxy_buffer_size                       16k;
            proxy_buffers                           4 16k;
            proxy_request_buffering                 on;
            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout;
            proxy_next_upstream_timeout             0;
            proxy_next_upstream_tries               3;

            proxy_pass http://upstream_balancer;

            proxy_redirect                          off;

        }

        # health checks in cloud providers require the use of port 80
        location /healthz {

            access_log off;
            return 200;
        }

        # this is required to avoid error if nginx is being monitored
        # with an external software (like sysdig)
        location /nginx_status {

            allow 127.0.0.1;

            deny all;

            access_log off;
            stub_status on;
        }

        # Custom code snippet configured in the configuration configmap
        if ($scheme != 'https') {
            return 308 https://$host$request_uri;
        }

        location @custom_upstream-default-backend_413 {
            internal;

            proxy_intercept_errors off;

            proxy_set_header       X-Code             413;
            proxy_set_header       X-Format           $http_accept;
            proxy_set_header       X-Original-URI     $request_uri;
            proxy_set_header       X-Namespace        $namespace;
            proxy_set_header       X-Ingress-Name     $ingress_name;
            proxy_set_header       X-Service-Name     $service_name;
            proxy_set_header       X-Service-Port     $service_port;
            proxy_set_header       X-Request-ID       $req_id;
            proxy_set_header       Host               $best_http_host;

            set $proxy_upstream_name upstream-default-backend;

            rewrite                (.*) / break;

            proxy_pass            http://upstream_balancer;
            log_by_lua_block {

                monitor.call()

            }
        }

        location @custom_upstream-default-backend_502 {
            internal;

            proxy_intercept_errors off;

            proxy_set_header       X-Code             502;
            proxy_set_header       X-Format           $http_accept;
            proxy_set_header       X-Original-URI     $request_uri;
            proxy_set_header       X-Namespace        $namespace;
            proxy_set_header       X-Ingress-Name     $ingress_name;
            proxy_set_header       X-Service-Name     $service_name;
            proxy_set_header       X-Service-Port     $service_port;
            proxy_set_header       X-Request-ID       $req_id;
            proxy_set_header       Host               $best_http_host;

            set $proxy_upstream_name upstream-default-backend;

            rewrite                (.*) / break;

            proxy_pass            http://upstream_balancer;
            log_by_lua_block {

                monitor.call()

            }
        }

        location @custom_upstream-default-backend_503 {
            internal;

            proxy_intercept_errors off;

            proxy_set_header       X-Code             503;
            proxy_set_header       X-Format           $http_accept;
            proxy_set_header       X-Original-URI     $request_uri;
            proxy_set_header       X-Namespace        $namespace;
            proxy_set_header       X-Ingress-Name     $ingress_name;
            proxy_set_header       X-Service-Name     $service_name;
            proxy_set_header       X-Service-Port     $service_port;
            proxy_set_header       X-Request-ID       $req_id;
            proxy_set_header       Host               $best_http_host;

            set $proxy_upstream_name upstream-default-backend;

            rewrite                (.*) / break;

            proxy_pass            http://upstream_balancer;
            log_by_lua_block {

                monitor.call()

            }
        }

        location @custom_upstream-default-backend_504 {
            internal;

            proxy_intercept_errors off;

            proxy_set_header       X-Code             504;
            proxy_set_header       X-Format           $http_accept;
            proxy_set_header       X-Original-URI     $request_uri;
            proxy_set_header       X-Namespace        $namespace;
            proxy_set_header       X-Ingress-Name     $ingress_name;
            proxy_set_header       X-Service-Name     $service_name;
            proxy_set_header       X-Service-Port     $service_port;
            proxy_set_header       X-Request-ID       $req_id;
            proxy_set_header       Host               $best_http_host;

            set $proxy_upstream_name upstream-default-backend;

            rewrite                (.*) / break;

            proxy_pass            http://upstream_balancer;
            log_by_lua_block {

                monitor.call()

            }
        }

    }
    ## end server _
    ## start server def.test.vij-backend.cloud-platform.xxx.xxx.xxx.uk
    server {
        server_name def.test.vij-backend.cloud-platform.xxx.xxx.xxx.uk ;

        listen 80  ;
        listen 443  ssl http2 ;

        set $proxy_upstream_name "-";

        # PEM sha: 9a5b0fb5afccd279e5cc56a363f521d81184ea2d
        ssl_certificate                         /etc/ingress-controller/ssl/default-fake-certificate.pem;
        ssl_certificate_key                     /etc/ingress-controller/ssl/default-fake-certificate.pem;

        ssl_certificate_by_lua_block {
            certificate.call()
        }

        location @custom_upstream-default-backend_418 {
            internal;

            proxy_intercept_errors off;

            proxy_set_header       X-Code             418;
            proxy_set_header       X-Format           $http_accept;
            proxy_set_header       X-Original-URI     $request_uri;
            proxy_set_header       X-Namespace        $namespace;
            proxy_set_header       X-Ingress-Name     $ingress_name;
            proxy_set_header       X-Service-Name     $service_name;
            proxy_set_header       X-Service-Port     $service_port;
            proxy_set_header       X-Request-ID       $req_id;
            proxy_set_header       Host               $best_http_host;

            set $proxy_upstream_name upstream-default-backend;

            rewrite                (.*) / break;

            proxy_pass            http://upstream_balancer;
            log_by_lua_block {

                monitor.call()

            }
        }

        location @custom_upstream-default-backend_503 {
            internal;

            proxy_intercept_errors off;

            proxy_set_header       X-Code             503;
            proxy_set_header       X-Format           $http_accept;
            proxy_set_header       X-Original-URI     $request_uri;
            proxy_set_header       X-Namespace        $namespace;
            proxy_set_header       X-Ingress-Name     $ingress_name;
            proxy_set_header       X-Service-Name     $service_name;
            proxy_set_header       X-Service-Port     $service_port;
            proxy_set_header       X-Request-ID       $req_id;
            proxy_set_header       Host               $best_http_host;

            set $proxy_upstream_name upstream-default-backend;

            rewrite                (.*) / break;

            proxy_pass            http://upstream_balancer;
            log_by_lua_block {

                monitor.call()

            }
        }

        location @custom_upstream-default-backend_504 {
            internal;

            proxy_intercept_errors off;

            proxy_set_header       X-Code             504;
            proxy_set_header       X-Format           $http_accept;
            proxy_set_header       X-Original-URI     $request_uri;
            proxy_set_header       X-Namespace        $namespace;
            proxy_set_header       X-Ingress-Name     $ingress_name;
            proxy_set_header       X-Service-Name     $service_name;
            proxy_set_header       X-Service-Port     $service_port;
            proxy_set_header       X-Request-ID       $req_id;
            proxy_set_header       Host               $best_http_host;

            set $proxy_upstream_name upstream-default-backend;

            rewrite                (.*) / break;

            proxy_pass            http://upstream_balancer;
            log_by_lua_block {

                monitor.call()

            }
        }

        location / {

            set $namespace      "new-one";
            set $ingress_name   "helloworld-rubyapp-ingress";
            set $service_name   "rubyapp-service";
        set $service_port   "{0 8080 }";
            set $location_path  "/";

            rewrite_by_lua_block {
                lua_ingress.rewrite({
                    force_ssl_redirect = true,
                    use_port_in_redirects = false,
                })
                balancer.rewrite()
                plugins.run()
            }

            header_filter_by_lua_block {

                plugins.run()
            }
            body_filter_by_lua_block {

            }

            log_by_lua_block {

                balancer.log()

                monitor.call()

                plugins.run()
            }

            if ($scheme = https) {
                more_set_headers                        "Strict-Transport-Security: max-age=15724800; includeSubDomains";
            }

            port_in_redirect off;

            set $balancer_ewma_score -1;
            set $proxy_upstream_name    "new-one-rubyapp-service-8080";
            set $proxy_host             $proxy_upstream_name;
            set $pass_access_scheme $scheme;
            set $pass_server_port $server_port;
            set $best_http_host $http_host;
            set $pass_port $pass_server_port;

            set $proxy_alternative_upstream_name "";

            client_max_body_size                    50m;

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;

            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Request-ID           $req_id;
            proxy_set_header X-Real-IP              $the_real_ip;

            proxy_set_header X-Forwarded-For        $the_real_ip;

            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;

            proxy_set_header X-Original-URI         $request_uri;

            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_buffering                         off;
            proxy_buffer_size                       16k;
            proxy_buffers                           4 16k;
            proxy_request_buffering                 on;
            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout;
            proxy_next_upstream_timeout             0;
            proxy_next_upstream_tries               3;

            # Custom error pages per ingress
            proxy_intercept_errors on;

            error_page 418 = @custom_upstream-default-backend_418;
            error_page 504 = @custom_upstream-default-backend_504;
            error_page 503 = @custom_upstream-default-backend_503;

            proxy_pass http://upstream_balancer;

            proxy_redirect                          off;

        }

        # Custom code snippet configured in the configuration configmap
        if ($scheme != 'https') {
            return 308 https://$host$request_uri;
        }

        location @custom_upstream-default-backend_413 {
            internal;

            proxy_intercept_errors off;

            proxy_set_header       X-Code             413;
            proxy_set_header       X-Format           $http_accept;
            proxy_set_header       X-Original-URI     $request_uri;
            proxy_set_header       X-Namespace        $namespace;
            proxy_set_header       X-Ingress-Name     $ingress_name;
            proxy_set_header       X-Service-Name     $service_name;
            proxy_set_header       X-Service-Port     $service_port;
            proxy_set_header       X-Request-ID       $req_id;
            proxy_set_header       Host               $best_http_host;

            set $proxy_upstream_name upstream-default-backend;

            rewrite                (.*) / break;

            proxy_pass            http://upstream_balancer;
            log_by_lua_block {

                monitor.call()

            }
        }

        location @custom_upstream-default-backend_502 {
            internal;

            proxy_intercept_errors off;

            proxy_set_header       X-Code             502;
            proxy_set_header       X-Format           $http_accept;
            proxy_set_header       X-Original-URI     $request_uri;
            proxy_set_header       X-Namespace        $namespace;
            proxy_set_header       X-Ingress-Name     $ingress_name;
            proxy_set_header       X-Service-Name     $service_name;
            proxy_set_header       X-Service-Port     $service_port;
            proxy_set_header       X-Request-ID       $req_id;
            proxy_set_header       Host               $best_http_host;

            set $proxy_upstream_name upstream-default-backend;

            rewrite                (.*) / break;

            proxy_pass            http://upstream_balancer;
            log_by_lua_block {

                monitor.call()

            }
        }

        location @custom_upstream-default-backend_503 {
            internal;

            proxy_intercept_errors off;

            proxy_set_header       X-Code             503;
            proxy_set_header       X-Format           $http_accept;
            proxy_set_header       X-Original-URI     $request_uri;
            proxy_set_header       X-Namespace        $namespace;
            proxy_set_header       X-Ingress-Name     $ingress_name;
            proxy_set_header       X-Service-Name     $service_name;
            proxy_set_header       X-Service-Port     $service_port;
            proxy_set_header       X-Request-ID       $req_id;
            proxy_set_header       Host               $best_http_host;

            set $proxy_upstream_name upstream-default-backend;

            rewrite                (.*) / break;

            proxy_pass            http://upstream_balancer;
            log_by_lua_block {

                monitor.call()

            }
        }

        location @custom_upstream-default-backend_504 {
            internal;

            proxy_intercept_errors off;

            proxy_set_header       X-Code             504;
            proxy_set_header       X-Format           $http_accept;
            proxy_set_header       X-Original-URI     $request_uri;
            proxy_set_header       X-Namespace        $namespace;
            proxy_set_header       X-Ingress-Name     $ingress_name;
            proxy_set_header       X-Service-Name     $service_name;
            proxy_set_header       X-Service-Port     $service_port;
            proxy_set_header       X-Request-ID       $req_id;
            proxy_set_header       Host               $best_http_host;

            set $proxy_upstream_name upstream-default-backend;

            rewrite                (.*) / break;

            proxy_pass            http://upstream_balancer;
            log_by_lua_block {

                monitor.call()

            }
        }

    }
    ## end server def.test.vij-backend.cloud-platform.xxx.xxx.xxx.uk

vijay-veeranki commented 5 years ago

latest log

nginx-ingress-acme-controller-d76447f4c-6cll4 nginx-ingress-controller { "time": "2019-09-17T18:04:12+00:00", "body_bytes_sent": "6592", "bytes_sent": 6681, "gzip_ratio": "", "http_host": "def.test.vij-backend.cloud-platform.xxx.xxx.xxx.uk", "http_referer": "", "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36", "http_x_real_ip": "", "http_x_forwarded_for": "", "http_x_forwarded_proto": "", "kubernetes_namespace": "new-one", "kubernetes_ingress_name": "helloworld-rubyapp-ingress", "kubernetes_service_name": "rubyapp-service", "kubernetes_service_port": "{0 8080 }", "proxy_upstream_name": "upstream-default-backend", "proxy_protocol_addr": "", "real_ip": "85.255.237.179", "remote_addr": "85.255.237.179", "remote_user": "", "request_id": "c0bade4e506466ae67635fdf28ae9bbb", "request_length": "30", "request_method": "GET", "request_path": "/", "request_proto": "HTTP/2.0", "request_query": "code=503", "request_time": "0.001", "request_uri": "/err?code=503", "response_http_location": "", "server_name": "def.test.vij-backend.cloud-platform.xxx.xxx.xxx.uk", "server_port": "443", "ssl_cipher": "ECDHE-RSA-AES256-GCM-SHA384", "ssl_client_s_dn": "", "ssl_protocol": "TLSv1.2", "ssl_session_id": "", "status": "503", "upstream_addr": "100.96.4.85:8080 : 100.96.4.75:8080", "upstream_response_length": "0 : 6592", "upstream_response_time": "0.000 : 0.000", "upstream_status": "503 : 503" }

jasongwartz commented 5 years ago

Is your “nginx-errors” service in the same namespace as the ingress object?

vijay-veeranki commented 5 years ago

Is your “nginx-errors” service in the same namespace as the ingress object?

Yes it is in the same namespace

╰─ ku -n new-one get all
NAME                                      READY   STATUS    RESTARTS   AGE
pod/helloworld-rubyapp-5ff9d8456d-mmg8n   1/1     Running   0          4h19m
pod/nginx-errors-fb9974bbb-g2sqf          1/1     Running   0          116m

NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/nginx-errors      ClusterIP   100.67.xx.xx   <none>        80/TCP    4h19m
service/rubyapp-service   ClusterIP   100.68.xx.xx   <none>        80/TCP    4h19m

NAME                                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/helloworld-rubyapp   1         1         1            1           4h19m
deployment.apps/nginx-errors         1         1         1            1           4h19m

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/helloworld-rubyapp-5ff9d8456d   1         1         1       4h19m
replicaset.apps/nginx-errors-fb9974bbb          1         1         1       4h19m

vijay-veeranki commented 5 years ago

@jasongwartz I restarted deployment.apps/nginx-errors, and then it worked as expected, serving the error page from the custom default backend, as can be seen in the logs below:

nginx-ingress-acme-controller-d76447f4c-88vch nginx-ingress-controller { "time": "2019-09-17T21:29:39+00:00", "body_bytes_sent": "8", "bytes_sent": 340, "gzip_ratio": "", "http_host": "def.test.vij-backend.cloud-platform.xxx.xxx.xxx.uk", "http_referer": "", "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36", "http_x_real_ip": "", "http_x_forwarded_for": "", "http_x_forwarded_proto": "", "kubernetes_namespace": "new-one", "kubernetes_ingress_name": "helloworld-rubyapp-ingress", "kubernetes_service_name": "rubyapp-service", "kubernetes_service_port": "{0 8080 }", "proxy_upstream_name": "custom-default-backend-nginx-errors", "proxy_protocol_addr": "", "real_ip": "86.174.121.13", "remote_addr": "86.174.121.13", "remote_user": "", "request_id": "d5afa65acadcdfb3c85998ca8823f52a", "request_length": "416", "request_method": "GET", "request_path": "/", "request_proto": "HTTP/2.0", "request_query": "code=503", "request_time": "0.003", "request_uri": "/err?code=503", "response_http_location": "", "server_name": "def.test.vij-backend.cloud-platform.xxx.xxx.xx.uk", "server_port": "443", "ssl_cipher": "ECDHE-RSA-AES256-GCM-SHA384", "ssl_client_s_dn": "", "ssl_protocol": "TLSv1.2", "ssl_session_id": "", "status": "503", "upstream_addr": "100.96.4.90:8080 : 100.96.4.91:8080", "upstream_response_length": "0 : 8", "upstream_response_time": "0.000 : 0.000", "upstream_status": "503 : 503" }

Within a couple of minutes, it went back to serving from the cluster-level default backend, which is strange behaviour:

{ "time": "2019-09-17T21:32:45+00:00", "body_bytes_sent": "6592", "bytes_sent": 6681, "gzip_ratio": "", "http_host": "def.test.vij-backend.cloud-platform.xxx.xxx.xxx.uk", "http_referer": "", "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36", "http_x_real_ip": "", "http_x_forwarded_for": "", "http_x_forwarded_proto": "", "kubernetes_namespace": "new-one", "kubernetes_ingress_name": "helloworld-rubyapp-ingress", "kubernetes_service_name": "rubyapp-service", "kubernetes_service_port": "{0 8080 }", "proxy_upstream_name": "upstream-default-backend", "proxy_protocol_addr": "", "real_ip": "86.174.121.13", "remote_addr": "86.174.121.13", "remote_user": "", "request_id": "e5302568e0930fd8b53d9d4cc0863ae1", "request_length": "30", "request_method": "GET", "request_path": "/", "request_proto": "HTTP/2.0", "request_query": "code=503", "request_time": "0.003", "request_uri": "/err?code=503", "response_http_location": "", "server_name": "def.test.vij-backend.cloud-platform.xx.xx.xx.uk", "server_port": "443", "ssl_cipher": "ECDHE-RSA-AES256-GCM-SHA384", "ssl_client_s_dn": "", "ssl_protocol": "TLSv1.2", "ssl_session_id": "", "status": "503", "upstream_addr": "100.96.4.90:8080 : 100.96.4.75:8080", "upstream_response_length": "0 : 6592", "upstream_response_time": "0.000 : 0.000", "upstream_status": "503 : 503" }
vijay-veeranki commented 5 years ago

Hi @jasongwartz, in my config I can see the line below is set for all the custom errors, even for ingresses that have a backend service and the custom error annotation, which I believe is not right:

```
set $proxy_upstream_name upstream-default-backend;
```
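For reference, the annotation combination being discussed looks roughly like this (the host is a placeholder; ingress, service, and error-backend names are taken from the logs in this thread):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: helloworld-rubyapp-ingress
  annotations:
    # Intercept these errors and route them to the per-ingress backend
    nginx.ingress.kubernetes.io/custom-http-errors: "404,503"
    # Per-ingress default backend (instead of the cluster-level one)
    nginx.ingress.kubernetes.io/default-backend: nginx-errors
spec:
  rules:
  - host: def.test.example.uk
    http:
      paths:
      - backend:
          serviceName: rubyapp-service
          servicePort: 8080
```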
katobr commented 5 years ago

Hi @vijay-veeranki-moj ,

My ingress behaves exactly the same. Sometimes it starts working as expected, but within a couple of minutes it falls back to upstream-default-backend. This happens whenever the controller decides, for some reason, to regenerate and reload the nginx config. When it does, my custom default backend disappears from the config and is replaced by upstream-default-backend.

vijay-veeranki commented 5 years ago

Thanks @katobr for sharing your experience.

Looking at my logs below, I can see a backend reload happening every 10-15 minutes, which regenerates nginx.conf; that might be what is causing the issue?

Any thoughts @jasongwartz on how this can be fixed?

nginx-ingress-d76447f4c nginx-ingress-controller W0918 13:36:54.014164       8 controller.go:878] Service "two-second/rubyapp-service" does not have any active Endpoint.
nginx-ingress-d76447f4c nginx-ingress-controller I0918 13:36:54.014415       8 controller.go:133] Configuration changes detected, backend reload required.
nginx-ingress-d76447f4c nginx-ingress-controller I0918 13:36:54.130326       8 controller.go:149] Backend successfully reloaded.
nginx-ingress-d76447f4c nginx-ingress-controller W0918 13:37:43.009470       8 controller.go:878] Service "two-second/rubyapp-service" does not have any active Endpoint.
nginx-ingress-d76447f4c nginx-ingress-controller W0918 13:47:16.257234       8 controller.go:878] Service "two-second/rubyapp-service" does not have any active Endpoint.
nginx-ingress-d76447f4c nginx-ingress-controller I0918 13:47:16.257380       8 controller.go:133] Configuration changes detected, backend reload required.
nginx-ingress-d76447f4c nginx-ingress-controller I0918 13:47:16.350087       8 controller.go:149] Backend successfully reloaded.
nginx-ingress-d76447f4c nginx-ingress-controller W0918 13:47:55.683792       8 controller.go:878] Service "two-second/rubyapp-service" does not have any active Endpoint.
nginx-ingress-d76447f4c nginx-ingress-controller W0918 14:02:01.386580       8 controller.go:878] Service "two-second/rubyapp-service" does not have any active Endpoint.
nginx-ingress-d76447f4c nginx-ingress-controller I0918 14:02:01.386742       8 controller.go:133] Configuration changes detected, backend reload required.
nginx-ingress-d76447f4c nginx-ingress-controller I0918 14:02:01.474117       8 controller.go:149] Backend successfully reloaded.
nginx-ingress-d76447f4c nginx-ingress-controller W0918 14:02:10.759675       8 controller.go:878] Service "two-second/rubyapp-service" does not have any active Endpoint.
nginx-ingress-d76447f4c nginx-ingress-controller I0918 14:02:10.759837       8 controller.go:133] Configuration changes detected, backend reload required.
nginx-ingress-d76447f4c nginx-ingress-controller I0918 14:02:10.895777       8 controller.go:149] Backend successfully reloaded.
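To quantify the reload churn, the reload lines in the controller logs can simply be counted. The sketch below runs against a stand-in sample file; in a real cluster the input would come from `kubectl logs <controller-pod>` instead:

```shell
#!/bin/sh
# Stand-in sample of controller log lines (in a real cluster:
#   kubectl logs <controller-pod> > /tmp/controller.log)
cat > /tmp/controller.log <<'EOF'
I0918 13:36:54.130326 8 controller.go:149] Backend successfully reloaded.
I0918 13:47:16.350087 8 controller.go:149] Backend successfully reloaded.
I0918 14:02:01.474117 8 controller.go:149] Backend successfully reloaded.
EOF
# Each hit is one full nginx.conf regeneration
grep -c 'Backend successfully reloaded' /tmp/controller.log
```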

and looking at the config:

www-data@nginx-ingress-acme-controller-d76447f4c-6jb6j:/etc/nginx$ ls -la
total 112
drwxr-xr-x 1 www-data www-data  4096 Aug 14 20:02 .
drwxr-xr-x 1 www-data www-data  4096 Sep 17 22:45 ..
-rw-r--r-- 1 root     root      1007 Aug 14 20:02 fastcgi_params
drwxr-xr-x 1 www-data www-data  4096 Aug 13 20:34 geoip
drwxr-xr-x 6 www-data www-data  4096 Aug 14 20:02 lua
-rw-r--r-- 1 root     root      5231 Aug 14 20:02 mime.types
drwxr-xr-x 2 www-data www-data  4096 Aug 13 20:38 modsecurity
lrwxrwxrwx 1 root     root        34 Aug 14 20:02 modules -> /usr/local/openresty/nginx/modules
-rw-r--r-- 1 www-data www-data 68713 Sep 18 14:02 nginx.conf
-rw-r--r-- 1 www-data www-data     2 Aug 14 20:02 opentracing.json
drwxr-xr-x 6 www-data www-data  4096 Aug 13 20:39 owasp-modsecurity-crs
drwxr-xr-x 2 www-data www-data  4096 Aug 14 20:02 template
jasongwartz commented 5 years ago

If you delete the ingress controller pod (forcing it to restart), does the new pod come up with the correct config?

Without diving back into the source code of ingress-nginx (which I haven't looked at for several months), I'm not sure what might be causing the default backend to change on a reload minutes after the ingress controller comes up. Apologies that I can't be more helpful.

vijay-veeranki commented 5 years ago

Hi @jasongwartz, thanks for your reply. As suggested, I restarted the ingress controller pod, but the config is the same, no change.

I can still see it set as: `set $proxy_upstream_name upstream-default-backend;`
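One way to confirm which backend the rendered config points at is to grep a dump of nginx.conf. The sample file below stands in for the real dump, reproducing the lines seen in this thread:

```shell
#!/bin/sh
# In a real cluster the config would be dumped with (pod name hypothetical):
#   kubectl exec <controller-pod> -- cat /etc/nginx/nginx.conf > /tmp/nginx.conf
cat > /tmp/nginx.conf <<'EOF'
set $proxy_upstream_name upstream-default-backend;
error_page 404 = @custom_upstream-default-backend_404;
EOF
# Hits on the error-page paths mean the per-ingress default backend
# was lost on reload and replaced by the cluster-level one
grep -c 'upstream-default-backend' /tmp/nginx.conf
```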

katobr commented 5 years ago

If you delete the ingress controller pod (forcing it to restart), does the new pod come up with the correct config?

Yes, it does. But within a few seconds the controller decides to reload the config and comes up with upstream-default-backend. Cut from the logs (config diff):

I0918 08:51:21.035055       6 nginx.go:709] NGINX configuration diff:
--- /etc/nginx/nginx.conf       2019-09-18 08:51:17.745151892 +0000
+++ /tmp/new-nginx-cfg383168094 2019-09-18 08:51:21.029196557 +0000
@@ -1,5 +1,5 @@

-# Configuration checksum: 3635007615741738390
+# Configuration checksum: 10146389228757143632

 # setup custom paths that do not require root access
 pid /tmp/nginx.pid;
@@ -2883,7 +2883,7 @@

                set $proxy_upstream_name "-";

-               location @custom_custom-default-backend-dev-qa-site-db_404 {
+               location @custom_upstream-default-backend_404 {
                        internal;

                        proxy_intercept_errors off;
@@ -2898,7 +2898,7 @@
                        proxy_set_header       X-Request-ID       $req_id;
                        proxy_set_header       Host               $best_http_host;

-                       set $proxy_upstream_name custom-default-backend-dev-qa-site-db;
+                       set $proxy_upstream_name upstream-default-backend;

                        rewrite                (.*) / break;

@@ -3113,7 +3113,7 @@
                        # Custom error pages per ingress
                        proxy_intercept_errors on;

-                       error_page 404 = @custom_custom-default-backend-dev-qa-site-db_404;
+                       error_page 404 = @custom_upstream-default-backend_404;

                        proxy_pass http://upstream_balancer;

I0918 08:51:21.058431       6 controller.go:149] Backend successfully reloaded.
I0918 08:51:21.059451       6 controller.go:172] Dynamic reconfiguration succeeded.
vijay-veeranki commented 5 years ago

@katobr, there is a fix for this; can you please try it as well? https://github.com/kubernetes/ingress-nginx/issues/4576#issuecomment-535672499