kubernetes / ingress-nginx

Ingress NGINX Controller for Kubernetes
https://kubernetes.github.io/ingress-nginx/
Apache License 2.0

Kubernetes Ingress Exact not prioritized over Prefix #9405

Open JonasJes opened 1 year ago

JonasJes commented 1 year ago

What happened:

In Kubernetes we need a new service to handle the root path, while still having a catch-all for everything else on our current frontend. But it looks like Exact is not prioritized over Prefix, as the documentation says it should be:

> If two paths are still equally matched, precedence will be given to paths with an exact path type over prefix path type.

https://kubernetes.io/docs/concepts/services-networking/ingress/#multiple-matches

What you expected to happen:

When two Ingresses have a path with /, a request to / is expected to hit the service with pathType: Exact and not the service with pathType: Prefix.
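For illustration, the conflict boils down to two rules for the same host along these lines (a minimal sketch; the service names simply mirror the ones used later in this issue):

paths:
  - path: /
    pathType: Exact      # expected to win for requests to exactly "/"
    backend:
      service:
        name: new-service
        port:
          number: 80
  - path: /
    pathType: Prefix     # catch-all for everything else
    backend:
      service:
        name: current-frontend
        port:
          number: 80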

NGINX Ingress controller version

NGINX Ingress controller
  Release:       v1.1.1
  Build:         a17181e43ec85534a6fea968d95d019c5a4bc8cf
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.9

Kubernetes version
  Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:46:05Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"windows/amd64"}
  Kustomize Version: v4.5.4
  Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.8", GitCommit:"83d00b7cbf10e530d1d4b2403f22413220c37621", GitTreeState:"clean", BuildDate:"2022-11-09T19:50:11Z", GoVersion:"go1.17.11", Compiler:"gc", Platform:"linux/amd64"}

Environment:

Name:         production-ingress
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=production-ingress
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.2.0
              helm.sh/chart=ingress-nginx-4.1.2
Annotations:  meta.helm.sh/release-name: production-ingress
              meta.helm.sh/release-namespace: production
Controller:   k8s.io/ingress-nginx
Events:

Name:         staging-ingress
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=staging-ingress
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.2.0
              helm.sh/chart=ingress-nginx-4.1.0
Annotations:  meta.helm.sh/release-name: staging-ingress
              meta.helm.sh/release-namespace: staging
Controller:   k8s.io/ingress-nginx
Events:


**How to reproduce this issue**:
1. Have a Kubernetes cluster
2. Have 2 pods and services running in the Kubernetes cluster
3. Deploy the following YAML content:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: current-frontend
  labels:
    app: current-frontend
    tier: frontend
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:


apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: new-service
  labels:
    app: new-service
    tier: frontend
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
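The two manifests above are truncated after spec.tls. A minimal reconstruction of what they presumably contained, inferred from the generated nginx.conf posted later in this thread (the host, TLS secret name, and service ports are assumptions taken from the workaround manifest further down), looks like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: current-frontend
  labels:
    app: current-frontend
    tier: frontend
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - my.domain.com
      secretName: tls-secret
  rules:
    - host: my.domain.com
      http:
        paths:
          - path: /                 # catch-all for the existing frontend
            pathType: Prefix
            backend:
              service:
                name: current-frontend
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: new-service
  labels:
    app: new-service
    tier: frontend
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - my.domain.com
      secretName: tls-secret
  rules:
    - host: my.domain.com
      http:
        paths:
          - path: /                 # expected to win for requests to exactly "/"
            pathType: Exact
            backend:
              service:
                name: new-service
                port:
                  number: 80
          - path: /someendpoint
            pathType: Prefix
            backend:
              service:
                name: new-service
                port:
                  number: 80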

longwuyuan commented 1 year ago

/remove-kind bug

Please post the commands and their outputs, like:

JonasJes commented 1 year ago

Hi @longwuyuan

Unfortunately, I am prohibited from disclosing that kind of in-depth information about our solution publicly. Is there anything in particular you are interested in?

Other Detail

I got something working, even if it doesn't seem to be the optimal solution (according to the documentation).

I changed the current-frontend Ingress to use the regex /(.+), as I want at least one character after the slash before it is matched.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: current-frontend
  labels:
    app: current-frontend
    tier: frontend
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
      - my.domain.com
      secretName: tls-secret
  rules:
    - host: my.domain.com
      http:
        paths:
          - backend:
              service:
                name: current-frontend
                port:
                  number: 80
            path: /(.+)
            pathType: Prefix

At the same time, I needed to change new-service to use Prefix instead of Exact, as it does not look like Exact works at all. Even with the fix to current-frontend, I hit the default backend unless I used Prefix.
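For reference, the adjusted new-service Ingress would then look roughly like this (a sketch for illustration; it was not posted in the original comment, and the host, TLS secret, and port are assumed to match the manifest above):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: new-service
  labels:
    app: new-service
    tier: frontend
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
      - my.domain.com
      secretName: tls-secret
  rules:
    - host: my.domain.com
      http:
        paths:
          - backend:
              service:
                name: new-service
                port:
                  number: 80
            path: /
            pathType: Prefix   # Prefix instead of Exact, as described above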

strongjz commented 1 year ago

I tested this with HEAD and not 1.1.1

The documentation says that Prefix on / will match all request paths:

| Kind   | Path(s) | Request path(s) | Matches? |
|--------|---------|-----------------|----------|
| Prefix | /       | (all paths)     | Yes      |

https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types

Here is what the nginx.conf looks like with the defaults and the provided YAML. You can see that / is defined twice, as `location /` and `location = /`, so requests for / should be going to new-service. I agree that / should go to new-service rather than the default backend.

This contradicts the kubernetes documentation but agrees with the nginx docs.

If I understand it all correctly, this is a bug.

/triage accepted
/kind bug
/priority backlog

https://nginx.org/en/docs/http/ngx_http_core_module.html#location

    ## start server my.domain.com
    server {
        server_name my.domain.com ;

        listen 80  ;
        listen [::]:80  ;
        listen 443  ssl http2 ;
        listen [::]:443  ssl http2 ;

        set $proxy_upstream_name "-";

        ssl_certificate_by_lua_block {
            certificate.call()
        }

        location /someendpoint {

            set $namespace      "default";
            set $ingress_name   "new-service";
            set $service_name   "new-service";
            set $service_port   "80";
            set $location_path  "/someendpoint";
            set $global_rate_limit_exceeding n;

            rewrite_by_lua_block {
                lua_ingress.rewrite({
                    force_ssl_redirect = false,
                    ssl_redirect = true,
                    force_no_ssl_redirect = false,
                    preserve_trailing_slash = false,
                    use_port_in_redirects = false,
                    global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
                })
                balancer.rewrite()
                plugins.run()
            }

            # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
            # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
            # other authentication method such as basic auth or external auth useless - all requests will be allowed.
            #access_by_lua_block {
            #}

            header_filter_by_lua_block {
                lua_ingress.header()
                plugins.run()
            }

            body_filter_by_lua_block {
                plugins.run()
            }

            log_by_lua_block {
                balancer.log()

                monitor.call()

                plugins.run()
            }

            port_in_redirect off;

            set $balancer_ewma_score -1;
            set $proxy_upstream_name "default-new-service-80";
            set $proxy_host          $proxy_upstream_name;
            set $pass_access_scheme  $scheme;

            set $pass_server_port    $server_port;

            set $best_http_host      $http_host;
            set $pass_port           $pass_server_port;

            set $proxy_alternative_upstream_name "";

            client_max_body_size                    1m;

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;

            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Request-ID           $req_id;
            proxy_set_header X-Real-IP              $remote_addr;

            proxy_set_header X-Forwarded-For        $remote_addr;

            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Forwarded-Scheme     $pass_access_scheme;

            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_buffering                         off;
            proxy_buffer_size                       4k;
            proxy_buffers                           4 4k;

            proxy_max_temp_file_size                1024m;

            proxy_request_buffering                 on;
            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout;
            proxy_next_upstream_timeout             0;
            proxy_next_upstream_tries               3;

            proxy_pass http://upstream_balancer;

            proxy_redirect                          off;

        }

        location / {

            set $namespace      "default";
            set $ingress_name   "current-frontend";
            set $service_name   "current-frontend";
            set $service_port   "80";
            set $location_path  "/";
            set $global_rate_limit_exceeding n;

            rewrite_by_lua_block {
                lua_ingress.rewrite({
                    force_ssl_redirect = false,
                    ssl_redirect = true,
                    force_no_ssl_redirect = false,
                    preserve_trailing_slash = false,
                    use_port_in_redirects = false,
                    global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
                })
                balancer.rewrite()
                plugins.run()
            }

            # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
            # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
            # other authentication method such as basic auth or external auth useless - all requests will be allowed.
            #access_by_lua_block {
            #}

            header_filter_by_lua_block {
                lua_ingress.header()
                plugins.run()
            }

            body_filter_by_lua_block {
                plugins.run()
            }

            log_by_lua_block {
                balancer.log()

                monitor.call()

                plugins.run()
            }

            port_in_redirect off;

            set $balancer_ewma_score -1;
            set $proxy_upstream_name "default-current-frontend-80";
            set $proxy_host          $proxy_upstream_name;
            set $pass_access_scheme  $scheme;

            set $pass_server_port    $server_port;

            set $best_http_host      $http_host;
            set $pass_port           $pass_server_port;

            set $proxy_alternative_upstream_name "";

            client_max_body_size                    1m;

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;

            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Request-ID           $req_id;
            proxy_set_header X-Real-IP              $remote_addr;

            proxy_set_header X-Forwarded-For        $remote_addr;

            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Forwarded-Scheme     $pass_access_scheme;

            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_buffering                         off;
            proxy_buffer_size                       4k;
            proxy_buffers                           4 4k;

            proxy_max_temp_file_size                1024m;

            proxy_request_buffering                 on;
            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout;
            proxy_next_upstream_timeout             0;
            proxy_next_upstream_tries               3;

            proxy_pass http://upstream_balancer;

            proxy_redirect                          off;

        }

        location = / {

            set $namespace      "default";
            set $ingress_name   "new-service";
            set $service_name   "new-service";
            set $service_port   "80";
            set $location_path  "/";
            set $global_rate_limit_exceeding n;

            rewrite_by_lua_block {
                lua_ingress.rewrite({
                    force_ssl_redirect = false,
                    ssl_redirect = true,
                    force_no_ssl_redirect = false,
                    preserve_trailing_slash = false,
                    use_port_in_redirects = false,
                    global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
                })
                balancer.rewrite()
                plugins.run()
            }

            # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
            # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
            # other authentication method such as basic auth or external auth useless - all requests will be allowed.
            #access_by_lua_block {
            #}

            header_filter_by_lua_block {
                lua_ingress.header()
                plugins.run()
            }

            body_filter_by_lua_block {
                plugins.run()
            }

            log_by_lua_block {
                balancer.log()

                monitor.call()

                plugins.run()
            }

            port_in_redirect off;

            set $balancer_ewma_score -1;
            set $proxy_upstream_name "default-new-service-80";
            set $proxy_host          $proxy_upstream_name;
            set $pass_access_scheme  $scheme;

            set $pass_server_port    $server_port;

            set $best_http_host      $http_host;
            set $pass_port           $pass_server_port;

            set $proxy_alternative_upstream_name "";

            client_max_body_size                    1m;

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;

            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Request-ID           $req_id;
            proxy_set_header X-Real-IP              $remote_addr;

            proxy_set_header X-Forwarded-For        $remote_addr;

            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Forwarded-Scheme     $pass_access_scheme;

            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_buffering                         off;
            proxy_buffer_size                       4k;
            proxy_buffers                           4 4k;

            proxy_max_temp_file_size                1024m;

            proxy_request_buffering                 on;
            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout;
            proxy_next_upstream_timeout             0;
            proxy_next_upstream_tries               3;

            proxy_pass http://upstream_balancer;

            proxy_redirect                          off;

        }

    }
    ## end server my.domain.com
strongjz commented 1 year ago

Doesn't look like there is a major diff between an nginx.conf from HEAD and one from 1.1.1:

diff nginx.conf nginx-1.1.1.conf 
2c2
< # Configuration checksum: 8334729917275961904
---
> # Configuration checksum: 3918746018856732406
5c5
< pid /tmp/nginx/nginx.pid;
---
> pid /tmp/nginx.pid;
19d18
<       
131c130
<       keepalive_requests 1000;
---
>       keepalive_requests 100;
133,136c132,135
<       client_body_temp_path           /tmp/nginx/client-body;
<       fastcgi_temp_path               /tmp/nginx/fastcgi-temp;
<       proxy_temp_path                 /tmp/nginx/proxy-temp;
<       ajp_temp_path                   /tmp/nginx/ajp-temp;
---
>       client_body_temp_path           /tmp/client-body;
>       fastcgi_temp_path               /tmp/fastcgi-temp;
>       proxy_temp_path                 /tmp/proxy-temp;
>       ajp_temp_path                   /tmp/ajp-temp;
244c243
<       # PEM sha: 912cdd239cb6b42f49e85846f0844cfa6cbc9a57
---
>       # PEM sha: 1e973a00a5bfa7141b6ce72dfb0ba7748b4c3c78
269c268
<               keepalive_time 1h;
---
>               
276c275
<       proxy_cache_path /tmp/nginx/nginx-cache-auth levels=1:2 keys_zone=auth_cache:10m max_size=128m inactive=30m use_temp_path=off;
---
>       proxy_cache_path /tmp/nginx-cache-auth levels=1:2 keys_zone=auth_cache:10m max_size=128m inactive=30m use_temp_path=off;
456c455,573
<               location /someendpoint {
---
>               location /someendpoint/ {
>                       
>                       set $namespace      "default";
>                       set $ingress_name   "new-service";
>                       set $service_name   "new-service";
>                       set $service_port   "80";
>                       set $location_path  "/someendpoint";
>                       set $global_rate_limit_exceeding n;
>                       
>                       rewrite_by_lua_block {
>                               lua_ingress.rewrite({
>                                       force_ssl_redirect = false,
>                                       ssl_redirect = true,
>                                       force_no_ssl_redirect = false,
>                                       preserve_trailing_slash = false,
>                                       use_port_in_redirects = false,
>                                       global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
>                               })
>                               balancer.rewrite()
>                               plugins.run()
>                       }
>                       
>                       # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
>                       # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
>                       # other authentication method such as basic auth or external auth useless - all requests will be allowed.
>                       #access_by_lua_block {
>                       #}
>                       
>                       header_filter_by_lua_block {
>                               lua_ingress.header()
>                               plugins.run()
>                       }
>                       
>                       body_filter_by_lua_block {
>                               plugins.run()
>                       }
>                       
>                       log_by_lua_block {
>                               balancer.log()
>                               
>                               monitor.call()
>                               
>                               plugins.run()
>                       }
>                       
>                       port_in_redirect off;
>                       
>                       set $balancer_ewma_score -1;
>                       set $proxy_upstream_name "default-new-service-80";
>                       set $proxy_host          $proxy_upstream_name;
>                       set $pass_access_scheme  $scheme;
>                       
>                       set $pass_server_port    $server_port;
>                       
>                       set $best_http_host      $http_host;
>                       set $pass_port           $pass_server_port;
>                       
>                       set $proxy_alternative_upstream_name "";
>                       
>                       client_max_body_size                    1m;
>                       
>                       proxy_set_header Host                   $best_http_host;
>                       
>                       # Pass the extracted client certificate to the backend
>                       
>                       # Allow websocket connections
>                       proxy_set_header                        Upgrade           $http_upgrade;
>                       
>                       proxy_set_header                        Connection        $connection_upgrade;
>                       
>                       proxy_set_header X-Request-ID           $req_id;
>                       proxy_set_header X-Real-IP              $remote_addr;
>                       
>                       proxy_set_header X-Forwarded-For        $remote_addr;
>                       
>                       proxy_set_header X-Forwarded-Host       $best_http_host;
>                       proxy_set_header X-Forwarded-Port       $pass_port;
>                       proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
>                       proxy_set_header X-Forwarded-Scheme     $pass_access_scheme;
>                       
>                       proxy_set_header X-Scheme               $pass_access_scheme;
>                       
>                       # Pass the original X-Forwarded-For
>                       proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
>                       
>                       # mitigate HTTPoxy Vulnerability
>                       # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
>                       proxy_set_header Proxy                  "";
>                       
>                       # Custom headers to proxied server
>                       
>                       proxy_connect_timeout                   5s;
>                       proxy_send_timeout                      60s;
>                       proxy_read_timeout                      60s;
>                       
>                       proxy_buffering                         off;
>                       proxy_buffer_size                       4k;
>                       proxy_buffers                           4 4k;
>                       
>                       proxy_max_temp_file_size                1024m;
>                       
>                       proxy_request_buffering                 on;
>                       proxy_http_version                      1.1;
>                       
>                       proxy_cookie_domain                     off;
>                       proxy_cookie_path                       off;
>                       
>                       # In case of errors try the next upstream server before returning an error
>                       proxy_next_upstream                     error timeout;
>                       proxy_next_upstream_timeout             0;
>                       proxy_next_upstream_tries               3;
>                       
>                       proxy_pass http://upstream_balancer;
>                       
>                       proxy_redirect                          off;
>                       
>               }
>               
>               location = /someendpoint {
880,881d996
<       
<       resolver 10.96.0.10 valid=30s;
gecgooden commented 1 year ago

I've been looking into this issue in hopes of fixing this bug, but I'm not able to reproduce it. @strongjz @JonasJes Are you able to provide any more information to help me reproduce this issue consistently?

I've tried running this against both v1.1.1 and the current main (04b4f9cf425b72b7b68e01b98f91fba5d24065d3)

The steps I've taken are:

  1. Checkout the repo at the commit under test
  2. Deploy a test cluster with make dev-env
  3. Create example applications with the following manifests:
    
    apiVersion: v1
    kind: Service
    metadata:
      name: new-service
      labels:
        app: new-service
    spec:
      ports:
        - port: 80
          targetPort: 80
      selector:
        app: new-service

apiVersion: apps/v1
kind: Deployment
metadata:
  name: new-service
  labels:
    app: new-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: new-service
  template:
    metadata:
      labels:
        app: new-service
    spec:
      containers:


apiVersion: v1
kind: Service
metadata:
  name: current-frontend
  labels:
    app: current-frontend
spec:
  ports:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: current-frontend
  labels:
    app: current-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: current-frontend
  template:
    metadata:
      labels:
        app: current-frontend
    spec:
      containers:
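The containers sections of the two Deployments above are truncated. A minimal completion, shown here for current-frontend, might look like the following (the container image is a placeholder and not taken from the original comment; any HTTP echo server that reports pod and request details, as in the output below, will do):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: current-frontend
  labels:
    app: current-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: current-frontend
  template:
    metadata:
      labels:
        app: current-frontend
    spec:
      containers:
        - name: echo
          # Placeholder image: substitute any HTTP echo server listening on port 80
          image: example.com/echo-server:latest
          ports:
            - containerPort: 80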

Pod Information: node name: ingress-nginx-dev-control-plane pod name: current-frontend-75bc7d6899-d6466 pod namespace: default pod IP: 10.244.0.9

Server values: server_version=nginx: 1.21.6 - lua: 10021

Request Information: client_address=10.244.0.7 method=GET real path=/ query= request_version=1.1 request_scheme=http request_uri=http://my.domain.com:80/

Request Headers: accept=*/* host=my.domain.com user-agent=curl/7.85.0 x-forwarded-for=172.18.0.1 x-forwarded-host=my.domain.com x-forwarded-port=80 x-forwarded-proto=http x-forwarded-scheme=http x-real-ip=172.18.0.1 x-request-id=88d630028b2203f339b655a487b9d828 x-scheme=http

Request Body: -no body in request-

6. Deploy the following `new-service` Ingress resource:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: new-service
  labels:
    app: new-service
    tier: frontend
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
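The rules section above is truncated. Based on the paths exercised by the curl commands below, it presumably looks something like this (a sketch, not necessarily the exact manifest that was used):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: new-service
  labels:
    app: new-service
    tier: frontend
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: my.domain.com
      http:
        paths:
          - path: /
            pathType: Exact       # root should be served by new-service
            backend:
              service:
                name: new-service
                port:
                  number: 80
          - path: /someendpoint
            pathType: Prefix
            backend:
              service:
                name: new-service
                port:
                  number: 80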

Pod Information: node name: ingress-nginx-dev-control-plane pod name: new-service-bdc877767-59498 pod namespace: default pod IP: 10.244.0.8

Server values: server_version=nginx: 1.21.6 - lua: 10021

Request Information: client_address=10.244.0.7 method=GET real path=/ query= request_version=1.1 request_scheme=http request_uri=http://my.domain.com:80/

Request Headers: accept=*/* host=my.domain.com user-agent=curl/7.85.0 x-forwarded-for=172.18.0.1 x-forwarded-host=my.domain.com x-forwarded-port=80 x-forwarded-proto=http x-forwarded-scheme=http x-real-ip=172.18.0.1 x-request-id=0ea815121646e51a80fe4059cae5a715 x-scheme=http

Request Body: -no body in request-

8. Running `curl http://my.domain.com/someendpoint` shows that the request was serviced by the `new-service` pod

Hostname: new-service-bdc877767-59498

Pod Information: node name: ingress-nginx-dev-control-plane pod name: new-service-bdc877767-59498 pod namespace: default pod IP: 10.244.0.8

Server values: server_version=nginx: 1.21.6 - lua: 10021

Request Information: client_address=10.244.0.7 method=GET real path=/someendpoint query= request_version=1.1 request_scheme=http request_uri=http://my.domain.com:80/someendpoint

Request Headers: accept=*/* host=my.domain.com user-agent=curl/7.85.0 x-forwarded-for=172.18.0.1 x-forwarded-host=my.domain.com x-forwarded-port=80 x-forwarded-proto=http x-forwarded-scheme=http x-real-ip=172.18.0.1 x-request-id=064ce622620c65960fa03aae9b3d5685 x-scheme=http

Request Body: -no body in request-

9. Running `curl http://my.domain.com/somethingelse` shows that this request is still being serviced by the `current-frontend` pod:

Hostname: current-frontend-75bc7d6899-d6466

Pod Information: node name: ingress-nginx-dev-control-plane pod name: current-frontend-75bc7d6899-d6466 pod namespace: default pod IP: 10.244.0.9

Server values: server_version=nginx: 1.21.6 - lua: 10021

Request Information: client_address=10.244.0.7 method=GET real path=/somethingelse query= request_version=1.1 request_scheme=http request_uri=http://my.domain.com:80/somethingelse

Request Headers: accept=*/* host=my.domain.com user-agent=curl/7.85.0 x-forwarded-for=172.18.0.1 x-forwarded-host=my.domain.com x-forwarded-port=80 x-forwarded-proto=http x-forwarded-scheme=http x-real-ip=172.18.0.1 x-request-id=cbfa6290511cc4a833924352c3ee363b x-scheme=http

Request Body: -no body in request-

andrewstec commented 1 year ago

I can re-create this issue on my Kubernetes cluster. I believe it ONLY occurs when using the / path with Exact; all other route names work. To re-create it, both paths can even live in the same Ingress. Here is (1) an implementation that uses the / path and does not work, and (2) an implementation that uses the /marketing path and does work.

/ path does NOT work:

  rules:
    - host: tst.yourdomain.org
      http:
        paths:

          # Exact match for root path
          - path: /
            pathType: Exact
            backend:
              service:
                name: "react-marketing"
                port:
                  number: 80

          # Fallback to react-container for /not-found page
          - path: /
            pathType: Prefix
            backend:
              service:
                name: "react-container"
                port:
                  number: 80

/marketing path works:

  rules:
    - host: tst.yourdomain.org
      http:
        paths:

          # Exact match for /marketing path
          - path: /marketing
            pathType: Exact
            backend:
              service:
                name: "react-marketing"
                port:
                  number: 80

          # Fallback to react-container for /not-found page
          - path: /
            pathType: Prefix
            backend:
              service:
                name: "react-container"
                port:
                  number: 80

Is this behaviour for the / route a bug or expected?

https://kubernetes.io/docs/concepts/services-networking/ingress/

I don't see this basic combination on the documentation link above, but I see lots of other examples that work when I implement them. Perhaps it was unintentionally missed in testing or intentionally skipped for reasons I do not understand.

For SPAs that require an Exact / path plus a Prefix / path as a fallback (e.g. for a 404 page), there is a workaround that works for SPAs: https://stackoverflow.com/questions/74770691/kubernetes-ingress-exact-not-prioritized-over-prefix.
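A sketch of that kind of workaround, applying the regex approach described earlier in this thread to the example above (the host and service names reuse the ones from the snippets above; this is not necessarily identical to the linked Stack Overflow answer):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: spa-workaround
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: tst.yourdomain.org
      http:
        paths:
          # Root path: with the regex below excluding bare "/", this entry
          # effectively only receives requests for exactly "/"
          - path: /
            pathType: Prefix
            backend:
              service:
                name: react-marketing
                port:
                  number: 80
          # Catch-all: requires at least one character after the slash,
          # so it no longer swallows requests for exactly "/"
          - path: /(.+)
            pathType: Prefix
            backend:
              service:
                name: react-container
                port:
                  number: 80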

andrewstec commented 8 months ago

This issue has been around since 2022. Do you think we could have an update on the status of this if possible? I see it was accepted for triage and is in the priority backlog. Thanks!

longwuyuan commented 2 months ago

It is suspected to be a bug, so any contributed PRs would get reviewed.