kubernetes / ingress-nginx

Ingress-NGINX Controller for Kubernetes
https://kubernetes.github.io/ingress-nginx/
Apache License 2.0

Changing nginx.ingress.kubernetes.io/auth-tls-match-cn value is ignored #10915

Closed: martinbfrey closed this issue 2 months ago

martinbfrey commented 8 months ago

What happened:

We run an ingress with client certificate check.

annotations:
    nginx.ingress.kubernetes.io/auth-tls-secret: rcm-planner-backend-int/mud-ca-cert
    nginx.ingress.kubernetes.io/auth-tls-verify-client: 'on'
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: '1'
    nginx.ingress.kubernetes.io/auth-tls-match-cn: 'CN=api-haproxy-rcm-planner-int'

Clients with a certificate matching the CN can access the ingress; clients with another CN or no certificate cannot, as expected. If we change the value of `nginx.ingress.kubernetes.io/auth-tls-match-cn`, clients with the now non-matching CN can still access, while clients with the new, matching CN are denied. It looks like the ingress is ignoring changes to the `nginx.ingress.kubernetes.io/auth-tls-match-cn` value. After a controller restart, the ingress works as expected. The changed annotations look like:

annotations:
    nginx.ingress.kubernetes.io/auth-tls-secret: rcm-planner-backend-int/mud-ca-cert
    nginx.ingress.kubernetes.io/auth-tls-verify-client: 'on'
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: '1'
    nginx.ingress.kubernetes.io/auth-tls-match-cn: 'CN=NOMATCHapi-haproxy-rcm-planner-int'

What you expected to happen:

Changes to `nginx.ingress.kubernetes.io/auth-tls-match-cn` are picked up by the ingress without a controller restart.

NGINX Ingress controller version (exec into the pod and run `nginx-ingress-controller --version`):

-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.9.5
  Build:         f503c4bb5fa7d857ad29e94970eb550c2bc00b7c
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.21.6

-------------------------------------------------------------------------------

Kubernetes version (use `kubectl version`):

Client Version: v1.28.6
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.6

Environment:

- **OS** (e.g. from `/etc/os-release`):

ORACLE_BUGZILLA_PRODUCT="Oracle Linux 8" ORACLE_BUGZILLA_PRODUCT_VERSION=8.9 ORACLE_SUPPORT_PRODUCT="Oracle Linux" ORACLE_SUPPORT_PRODUCT_VERSION=8.9

- **Kernel** (e.g. `uname -a`):

Linux kint-m01 4.18.0-513.9.1.el8_9.x86_64 #1 SMP Thu Nov 30 15:31:16 PST 2023 x86_64 x86_64 x86_64 GNU/Linux

- **Install tools**:
  -  kubeadm
- **Basic cluster related info**:
  - `kubectl version`

Client Version: v1.28.6
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.6

  - `kubectl get nodes -o wide`

NAME       STATUS   ROLES           AGE    VERSION   INTERNAL-IP      EXTERNAL-IP      OS-IMAGE                  KERNEL-VERSION                CONTAINER-RUNTIME
kint-e01   Ready                    630d   v1.28.6   10.162.107.158   10.162.107.158   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-e02   Ready                    42d    v1.28.6   10.162.107.58    10.162.107.58    Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-i01   Ready                    687d   v1.28.6   172.17.114.212   172.17.114.212   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-i02   Ready                    687d   v1.28.6   172.17.114.213   172.17.114.213   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-m01   Ready    control-plane   687d   v1.28.6   172.17.114.209   172.17.114.209   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-m02   Ready    control-plane   687d   v1.28.6   172.17.114.210   172.17.114.210   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-m03   Ready    control-plane   687d   v1.28.6   172.17.114.211   172.17.114.211   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-s01   Ready                    687d   v1.28.6   172.17.114.214   172.17.114.214   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-w01   Ready                    687d   v1.28.6   172.17.114.216   172.17.114.216   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-w02   Ready                    687d   v1.28.6   172.17.114.217   172.17.114.217   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-w03   Ready                    687d   v1.28.6   172.17.114.218   172.17.114.218   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-w04   Ready                    687d   v1.28.6   172.17.114.219   172.17.114.219   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-w05   Ready                    86d    v1.28.6   172.17.114.122   172.17.114.122   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3


- **How was the ingress-nginx-controller installed**:
  - If helm was used then please show output of `helm ls -A | grep -i ingress`

ingress-nginx zkezone-nginx 5 2024-01-19 13:04:59.692826584 +0100 CET deployed ingress-nginx-4.9.0 1.9.5

  - If helm was used then please show output of `helm -n <ingresscontrollernamespace> get values <helmreleasename>`

USER-SUPPLIED VALUES:
controller:
  admissionWebhooks:
    patch:
      tolerations:

Name:         zkezon2
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.9.5
              helm.sh/chart=ingress-nginx-4.9.0
Annotations:  meta.helm.sh/release-name: ingress-nginx
              meta.helm.sh/release-namespace: zkezone2-nginx
Controller:   k8s.io/ingress-zkezon2
Events:

Name:         zkezone
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.9.5
              helm.sh/chart=ingress-nginx-4.9.0
Annotations:  meta.helm.sh/release-name: ingress-nginx
              meta.helm.sh/release-namespace: zkezone-nginx
Controller:   k8s.io/ingress-zkezone
Events:

NAME                                   TYPE        CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE    SELECTOR
service/zkezone-controller             ClusterIP   172.17.46.57    10.162.107.158   80/TCP,443/TCP   311d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/zkezone-controller-admission   ClusterIP   172.17.46.162                    443/TCP          311d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/zkezone-controller-metrics     ClusterIP   172.17.47.59                     10254/TCP        311d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES                                                                                                                    SELECTOR
deployment.apps/zkezone-controller   1/1     1            1           311d   controller   registry.k8s.io/ingress-nginx/controller:v1.9.5@sha256:b3aba22b1da80e7acfc52b115cae1d4c687172cbf2b742d5b502419c25ff340e   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                                            DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES                                                                                                                     SELECTOR
replicaset.apps/zkezone-controller-5d4b6b89d6   0         0         0       311d    controller   k8s.gcr.io/ingress-nginx/controller:v1.1.0@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a        app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=5d4b6b89d6
replicaset.apps/zkezone-controller-64446b4f46   0         0         0       311d    controller   k8s.gcr.io/ingress-nginx/controller:v1.1.0@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a        app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=64446b4f46
replicaset.apps/zkezone-controller-7f9c8d4f5d   0         0         0       42d     controller   registry.k8s.io/ingress-nginx/controller:v1.5.1@sha256:4ba73c697770664c1e00e9f968de14e08f606ff961c76e5d7033a4a9c593c629   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7f9c8d4f5d
replicaset.apps/zkezone-controller-84449496db   0         0         0       6d20h   controller   registry.k8s.io/ingress-nginx/controller:v1.6.4@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=84449496db
replicaset.apps/zkezone-controller-857d75cdb5   1         1         1       5d23h   controller   registry.k8s.io/ingress-nginx/controller:v1.9.5@sha256:b3aba22b1da80e7acfc52b115cae1d4c687172cbf2b742d5b502419c25ff340e   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=857d75cdb5


  - `kubectl -n <appnamespace> describe ing <ingressname>`

Name:             rcm-planner-backend-1
Labels:           app=rcm-planner-backend
                  win.sbb.ch/argo-appname=rcm-planner-backend-test
Namespace:        rcm-planner-backend-test
Address:          172.17.46.57
Ingress Class:    zkezone
Default backend:
TLS:
  api-tls-1 terminates api-rcm-planner-test-1.mud.sbb.ch
Rules:
  Host                               Path     Backends
  api-rcm-planner-test-1.mud.sbb.ch
                                     /health  rcm-planner-backend-health:8081 (172.17.235.193:8081)
                                     /        rcm-planner-backend:8080 (172.17.235.193:8080)
Annotations:      nginx.ingress.kubernetes.io/auth-tls-match-cn: CN=api-haproxy-rcm-planner-test
                  nginx.ingress.kubernetes.io/auth-tls-secret: rcm-planner-backend-test/mud-ca-cert
                  nginx.ingress.kubernetes.io/auth-tls-verify-client: on
                  nginx.ingress.kubernetes.io/auth-tls-verify-depth: 1
Events:
  Type    Reason  Age                From                      Message
  Normal  Sync    47m (x8 over 5d)   nginx-ingress-controller  Scheduled for sync
  Normal  Sync    42m (x2 over 44m)  nginx-ingress-controller  Scheduled for sync
  Normal  Sync    41m                nginx-ingress-controller  Scheduled for sync

  - If applicable, your complete and exact curl/grpcurl command (redacted if required) and the response to the curl/grpcurl command with the -v flag:
    Clients get an HTTP 403 code

- **Others**:
  - Any other related information:
    When applying the change of the `nginx.ingress.kubernetes.io/auth-tls-match-cn` value, we observe the following controller log. The log covers a section where we changed the value from an invalid CN to the valid one. The clients still get a 403 response even after the reload; after restarting the controller, we see only 200 responses.

2024-01-25T11:46:33.545829457+01:00 10.162.107.158 - - [25/Jan/2024:10:46:33 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.003 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.003 200 5202192cf926110ba82ba54d7ec2140c
2024-01-25T11:46:35.272618809+01:00 10.162.107.158 - - [25/Jan/2024:10:46:35 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.002 200 bcd8dc4e532280f5a647ceaa52ff2413
2024-01-25T11:46:35.370921478+01:00 10.162.107.158 - - [25/Jan/2024:10:46:35 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - 5e26a2ff61e6eb9be2cc16e9d420af80
2024-01-25T11:46:35.471442805+01:00 10.162.107.158 - - [25/Jan/2024:10:46:35 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - 539b356ccfdcaeec3c70e3e1acbf54aa
2024-01-25T11:46:35.562090216+01:00 10.162.107.158 - - [25/Jan/2024:10:46:35 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.001 200 bd5698cd0b87794e932cd3ca547a2402
2024-01-25T11:46:37.061495284+01:00 I0125 10:46:37.061411 2 admission.go:149] processed ingress via admission controller {testedIngressLength:3 testedIngressTime:0.048s renderingIngressLength:3 renderingIngressTime:0.001s admissionTime:43.7kBs testedConfigurationSize:0.049}
2024-01-25T11:46:37.061495284+01:00 I0125 10:46:37.061440 2 main.go:107] "successfully validated configuration, accepting" ingress="rcm-planner-backend-int/rcm-planner-backend-1"
2024-01-25T11:46:37.067445757+01:00 I0125 10:46:37.067363 2 event.go:298] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"rcm-planner-backend-int", Name:"rcm-planner-backend-1", UID:"d40ac8ba-727e-4f6e-a8d3-1f810433a0e6", APIVersion:"networking.k8s.io/v1", ResourceVersion:"333519916", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
2024-01-25T11:46:37.289921287+01:00 10.162.107.158 - - [25/Jan/2024:10:46:37 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.001 200 98b18a4c01e2b4f55a6b933abf41bdb7
2024-01-25T11:46:37.385177321+01:00 10.162.107.158 - - [25/Jan/2024:10:46:37 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - 0f1aee2886f9979c7aa3486f21856311
2024-01-25T11:46:37.486010730+01:00 10.162.107.158 - - [25/Jan/2024:10:46:37 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - 2c38b854643beeb78415af20c7ca6f72
2024-01-25T11:46:37.579083317+01:00 10.162.107.158 - - [25/Jan/2024:10:46:37 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.001 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.002 200 9165a32925096b35074add6584ace962
2024-01-25T11:46:39.306439195+01:00 10.162.107.158 - - [25/Jan/2024:10:46:39 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.002 200 71222835bb2ca7276a26da0fc9917f63
2024-01-25T11:46:39.402701009+01:00 10.162.107.158 - - [25/Jan/2024:10:46:39 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - 93fa490adbca2a14b346efa3050532f3
2024-01-25T11:46:39.502599651+01:00 10.162.107.158 - - [25/Jan/2024:10:46:39 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - ead272c7895f7f12b35294588ac4aded
2024-01-25T11:46:39.598526523+01:00 10.162.107.158 - - [25/Jan/2024:10:46:39 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.002 200 25c83424ef1178d99ab1019ad0bedcbd
2024-01-25T11:46:41.323488518+01:00 10.162.107.158 - - [25/Jan/2024:10:46:41 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.002 200 cf105869e968dabe93cd9a3531852bec



**How to reproduce this issue**:
- Install an ingress
- Activate the client certificate check using the annotations described above
- Use a client with a valid CN
- Change the value of `nginx.ingress.kubernetes.io/auth-tls-match-cn` to something different from the valid CN (a programmatic way to flip the annotation is sketched below)
- Check whether the client can still access the ingress.
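
The annotation flip in the fourth step is usually done with `kubectl annotate --overwrite`. For completeness, here is a minimal client-go sketch of the same update; it is only an illustration, not part of the original report, and the namespace and ingress name (`default`/`hello-world`) are hypothetical placeholders:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the local kubeconfig; assumes a reachable test cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Fetch the test ingress and overwrite the match-cn annotation,
        // equivalent to: kubectl annotate ingress hello-world --overwrite \
        //   nginx.ingress.kubernetes.io/auth-tls-match-cn=CN=falseclient
        ing, err := client.NetworkingV1().Ingresses("default").Get(context.TODO(), "hello-world", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        ing.Annotations["nginx.ingress.kubernetes.io/auth-tls-match-cn"] = "CN=falseclient"
        if _, err := client.NetworkingV1().Ingresses("default").Update(context.TODO(), ing, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("annotation updated; the controller should now schedule a sync")
    }

After this update, the reported bug manifests as the old CN still being enforced until the controller pod is restarted.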

**Anything else we need to know**:
No.
longwuyuan commented 8 months ago

/triage needs-information

martinbfrey commented 7 months ago

How to reproduce

Create minikube cluster

Install nginx ingress in minikube

As of this writing, minikube installs ingress-nginx 1.9.4; in our production cluster we are using 1.9.5. The behaviour is the same, however.

Install sample app and create ingress

martinbfrey commented 7 months ago

There is no diff in nginx.conf before and after changing the value of `nginx.ingress.kubernetes.io/auth-tls-match-cn`. In fact, after changing the value, nginx.conf still contains the block:

        ## start server hello-world.info
        server {
                server_name hello-world.info ;

                listen 80  ;
                listen 443  ssl http2 ;

                set $proxy_upstream_name "-";

                if ( $ssl_client_s_dn !~ CN=testclient ) {
                        return 403 "client certificate unauthorized";
                }
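
For context, the controller renders nginx.conf from a Go text/template using the parsed annotation values, so this `if ( $ssl_client_s_dn !~ ... )` guard can only change when the parsed config changes. A toy illustration of that dependency (not the actual nginx.tmpl; the single MatchCN field is an assumption for brevity):

    package main

    import (
        "os"
        "text/template"
    )

    // Toy stand-in for the controller's server template: the match-cn guard
    // is rendered purely from the parsed annotation value.
    const serverBlock = `if ( $ssl_client_s_dn !~ {{ .MatchCN }} ) {
        return 403 "client certificate unauthorized";
    }
    `

    func main() {
        tmpl := template.Must(template.New("authtls").Parse(serverBlock))
        // If the parsed config still carries the old CN, the rendered block is
        // byte-for-byte identical to the old one, hence no diff in nginx.conf.
        data := struct{ MatchCN string }{MatchCN: "CN=testclient"}
        if err := tmpl.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }

So an unchanged nginx.conf points at stale parsed configuration upstream of the template, not at the template itself.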

The logs before changing the value and after changing the value, including a request with curl, are attached.

Before changing the value: nginxlog-1.txt

After changing the value: nginxlog-2.txt

And here is the resulting configuration (after changing the value; please note that the CN value is still testclient and not falseclient): nginx-2.txt

longwuyuan commented 7 months ago

@martinbfrey this is fantastic information

/triage accepted
/priority important-longterm

Since you posted that the changed CN is not reflected in nginx.conf until the pod is restarted, I suspect the same thing would happen if a vanilla, non-Kubernetes NGINX reverse proxy were in place.

However, this means that a deep-dive discussion has to occur with an NGINX expert and a developer on this project, and with your involvement. We have community meetings; the schedule can be seen at https://github.com/kubernetes/community/tree/master/sig-network#meetings

I request that you join a meeting so we can make some progress on this.

cc @rikatz @tao12345666333 @cpanato @strongjz @Gacko

longwuyuan commented 7 months ago

/triage accepted

longwuyuan commented 7 months ago

/help

k8s-ci-robot commented 7 months ago

@longwuyuan: This request has been marked as needing help from a contributor.

Guidelines

Please ensure that the issue body includes answers to the following questions:

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.

In response to [this](https://github.com/kubernetes/ingress-nginx/issues/10915):

> /help

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

longwuyuan commented 7 months ago

/assign

martinbfrey commented 7 months ago

I think the `Equal` check of the `authtls` annotation is missing a comparison for `MatchCN`.
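
A minimal sketch of where that comparison would live, assuming a simplified version of the annotation's parsed `Config` struct (field names are inferred from the annotation keys in this thread, not copied from the actual ingress-nginx source). The controller only regenerates and reloads nginx.conf when the parsed configs compare as different, so any field missing from `Equal` is effectively frozen until a restart:

    package main

    import "fmt"

    // Simplified stand-in for the parsed auth-tls-* annotation config.
    type Config struct {
        AuthSSLCert     string // checksum of the CA secret, simplified to a string
        VerifyClient    string
        ValidationDepth int
        MatchCN         string
    }

    // Equal is the kind of comparison the controller uses to decide whether a
    // changed Ingress requires regenerating nginx.conf. Any field left out of
    // this method is effectively ignored: changing it never triggers a reload.
    func (a *Config) Equal(b *Config) bool {
        if a == b {
            return true
        }
        if a == nil || b == nil {
            return false
        }
        if a.AuthSSLCert != b.AuthSSLCert {
            return false
        }
        if a.VerifyClient != b.VerifyClient {
            return false
        }
        if a.ValidationDepth != b.ValidationDepth {
            return false
        }
        // The suspected missing comparison: without it, configs that differ
        // only in MatchCN compare as equal and no reload is triggered.
        if a.MatchCN != b.MatchCN {
            return false
        }
        return true
    }

    func main() {
        running := &Config{AuthSSLCert: "cafe01", VerifyClient: "on", ValidationDepth: 1, MatchCN: "CN=testclient"}
        desired := &Config{AuthSSLCert: "cafe01", VerifyClient: "on", ValidationDepth: 1, MatchCN: "CN=falseclient"}
        fmt.Println(running.Equal(desired)) // false with the comparison above; true without it
    }

If that is indeed the cause, the fix would essentially be the `MatchCN` comparison shown above, plus a regression test.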

Gacko commented 6 months ago

/assign