/triage needs-information
Install minikube and create a cluster:
Start the cluster with minikube start
Enable the ingress addon with minikube addons enable ingress
Verify that the pods are running with minikube kubectl -- get pods -n ingress-nginx
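If the addon came up cleanly, the output should look roughly like this (pod name suffixes and ages will differ):
```
NAME                                    READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-<hash>   0/1     Completed   0          60s
ingress-nginx-admission-patch-<hash>    0/1     Completed   0          60s
ingress-nginx-controller-<hash>         1/1     Running     0          60s
```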
As of this writing, minikube installs ingress-nginx 1.9.4. In my production cluster we are using 1.9.5. The behaviour is the same, however.
Deployment: minikube kubectl -- create deployment web --image=gcr.io/google-samples/hello-app:1.0
Create a file service.yml with the following contents:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: web
```
and apply it with minikube kubectl -- apply -f service.yml
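As an optional sanity check, the endpoints can be inspected to confirm the selector matches the hello-app pod (kubectl create deployment labels its pods app=web):
```bash
# The ENDPOINTS column should list the pod IP on port 8080
minikube kubectl -- get endpoints web
```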
Create a CA with openssl genrsa -des3 -out ca.key 2048
Create a root certificate with openssl req -x509 -new -nodes -key ca.key -sha256 -days 1825 -out ca.pem
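For a fully scripted reproduction, the same two steps can be run non-interactively; the passphrase and subject below are arbitrary placeholders:
```bash
# CA key encrypted with a throwaway passphrase
openssl genrsa -des3 -passout pass:changeit -out ca.key 2048
# Self-signed root certificate with a freely chosen subject
openssl req -x509 -new -key ca.key -passin pass:changeit -sha256 -days 1825 \
  -subj '/CN=test-ca' -out ca.pem
```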
Create a secret with the ca.pem in it: minikube kubectl -- create secret generic ca --from-file=ca.crt=./ca.pem
Create a key for the server certificate: openssl genrsa -out server.key 2048
Create a CSR for the server certificate: openssl req -new -key server.key -out server.csr
Answer the question for the common name with hello-world.info.
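Alternatively, the prompts can be skipped by passing the subject directly:
```bash
# Non-interactive equivalent with the required common name
openssl req -new -key server.key -subj '/CN=hello-world.info' -out server.csr
```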
Create an extension file server.ext with the following contents:
```
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = hello-world.info
```
Sign the CSR with openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out server.pem -days 825 -sha256 -extfile server.ext
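Before creating the secret, it is worth verifying the subject and SAN of the signed certificate (the -ext flag requires OpenSSL 1.1.1 or newer):
```bash
# Expect CN=hello-world.info and DNS:hello-world.info in the output
openssl x509 -in server.pem -noout -subject -ext subjectAltName
```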
Create the server secret with minikube kubectl -- create secret tls server --cert=./server.pem --key=./server.key
Create the client certificate key with openssl genrsa -out client.key 4096
Generate the client CSR with openssl req -new -key client.key -out client.csr -sha256 -subj '/CN=testclient'
Create an extension file client.ext with the following contents:
```
[client]
basicConstraints = CA:FALSE
nsCertType = client, email
nsComment = "Local Test Client Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
keyUsage = critical, nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, emailProtection
```
Sign the CSR with openssl x509 -req -in client.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out client.pem -days 825 -sha256 -extfile client.ext -extensions client
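As a sanity check, the CN that the controller will later match against can be read back from the client certificate:
```bash
# Expected output (formatting varies by OpenSSL version): subject=CN = testclient
openssl x509 -in client.pem -noout -subject
```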
Ingress: Create a file ingress.yml with the following contents:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/auth-tls-match-cn: CN=testclient
    nginx.ingress.kubernetes.io/auth-tls-secret: default/ca
    nginx.ingress.kubernetes.io/auth-tls-verify-client: 'on'
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: '1'
spec:
  tls:
    - hosts:
        - hello-world.info
      secretName: server
  rules:
    - host: hello-world.info
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 8080
```
and apply it with minikube kubectl -- apply -f ingress.yml
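Before testing, confirm that the ingress has been admitted and carries an address; the output below is illustrative, the address is typically the minikube IP:
```bash
minikube kubectl -- get ingress example-ingress
# NAME              CLASS   HOSTS              ADDRESS        PORTS     AGE
# example-ingress   nginx   hello-world.info   192.168.49.2   80, 443   1m
```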
Test with curl --resolve "hello-world.info:443:$( minikube ip )" --cacert ca.pem --cert client.pem --key client.key -i https://hello-world.info
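With the matching certificate the request should reach the backend; the response looks roughly like this (version and hostname depend on the image and pod):
```
HTTP/2 200
...
Hello, world!
Version: 1.0.0
Hostname: web-<pod-suffix>
```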
Create a second client CSR with: openssl req -new -key client.key -out falseclient.csr -sha256 -subj '/CN=falseclient'
And sign it too: openssl x509 -req -in falseclient.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out falseclient.pem -days 825 -sha256 -extfile client.ext -extensions client
Test with curl --resolve "hello-world.info:443:$( minikube ip )" --cacert ca.pem --cert falseclient.pem --key client.key -i https://hello-world.info
and check that the response is 403 client certificate unauthorized.
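With the non-matching CN the controller should answer with the 403 body from the generated server block, roughly:
```
HTTP/2 403
...
client certificate unauthorized
```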
Edit the ingress and change the value of the annotation nginx.ingress.kubernetes.io/auth-tls-match-cn to CN=falseclient.
Test again with the two curl commands. Expectation: falseclient.pem now works and client.pem fails. This is not the case, however.
Restart the ingress controller by deleting its pod.
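A minimal sketch of the restart, assuming the labels the minikube ingress addon puts on the controller pod:
```bash
# Deleting the pod forces the deployment to recreate it,
# which regenerates nginx.conf from the current ingress objects
minikube kubectl -- -n ingress-nginx delete pod \
  -l app.kubernetes.io/component=controller,app.kubernetes.io/name=ingress-nginx
```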
Retry with the two curl commands. Now client.pem fails and falseclient.pem succeeds.
There is no diff in nginx.conf before and after changing the value of nginx.ingress.kubernetes.io/auth-tls-match-cn. In fact, after changing the value, nginx.conf still contains the block:
```
## start server hello-world.info
server {
    server_name hello-world.info ;
    listen 80 ;
    listen 443 ssl http2 ;
    set $proxy_upstream_name "-";
    if ( $ssl_client_s_dn !~ CN=testclient ) {
        return 403 "client certificate unauthorized";
    }
```
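For reference, one way to capture the rendered configuration for such a diff (namespace and labels are the minikube defaults; adjust as needed):
```bash
# Grab the controller pod name
POD=$(minikube kubectl -- -n ingress-nginx get pods \
  -l app.kubernetes.io/component=controller -o jsonpath='{.items[0].metadata.name}')
# Dump nginx.conf before and after editing the annotation, then compare
minikube kubectl -- -n ingress-nginx exec "$POD" -- cat /etc/nginx/nginx.conf > nginx-before.txt
# ... edit the auth-tls-match-cn annotation ...
minikube kubectl -- -n ingress-nginx exec "$POD" -- cat /etc/nginx/nginx.conf > nginx-after.txt
diff nginx-before.txt nginx-after.txt
```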
The logs from before and after changing the value, each including a request made with curl, are attached.
Before changing the value: nginxlog-1.txt
After changing the value: nginxlog-2.txt
And here is the resulting configuration (after changing the value; please note that the CN value is still testclient and not falseclient): nginx-2.txt
@martinbfrey this is fantastic information
/triage accepted
/priority important-longterm
Since you posted that the changed CN is not reflected in nginx.conf until a restart of the pod, I suspect the same thing would happen if a vanilla, non-Kubernetes nginx reverse proxy were in place.
However, this means a deep-dive discussion has to occur between an nginx expert and a developer on this project, possibly with your involvement. We have community meetings on the schedule shown here: https://github.com/kubernetes/community/tree/master/sig-network#meetings
I request that you join a meeting to make some progress on this.
cc @rikatz @tao12345666333 @cpanato @strongjz @Gacko
/triage accepted
/help
@longwuyuan: This request has been marked as needing help from a contributor.
Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
/assign
I think the Equal check of the authtls annotation is missing a comparison for MatchCN, so two parsed configurations that differ only in that field compare as equal and no update of the generated server block is triggered.
/assign
What happened:
We run an ingress with client certificate check.
Clients with a certificate matching the CN can access the ingress; clients with another CN or no certificate can't access, as expected. If we change the value of nginx.ingress.kubernetes.io/auth-tls-match-cn, the clients with the now non-matching CN can still access, and clients with the new, matching CN don't have access. It looks like the ingress is ignoring changes to the nginx.ingress.kubernetes.io/auth-tls-match-cn value. After a controller restart, the ingress works as expected. The changed annotations look like:

What you expected to happen:
Changes of nginx.ingress.kubernetes.io/auth-tls-match-cn are used by the ingress without a controller restart.

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):
Kubernetes version (use kubectl version):

Environment:
```
ORACLE_BUGZILLA_PRODUCT="Oracle Linux 8" ORACLE_BUGZILLA_PRODUCT_VERSION=8.9 ORACLE_SUPPORT_PRODUCT="Oracle Linux" ORACLE_SUPPORT_PRODUCT_VERSION=8.9
Linux kint-m01 4.18.0-513.9.1.el8_9.x86_64 #1 SMP Thu Nov 30 15:31:16 PST 2023 x86_64 x86_64 x86_64 GNU/Linux
Client Version: v1.28.6
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.6
```
```
NAME       STATUS   ROLES           AGE    VERSION   INTERNAL-IP      EXTERNAL-IP      OS-IMAGE                  KERNEL-VERSION                CONTAINER-RUNTIME
kint-e01   Ready    <none>          630d   v1.28.6   10.162.107.158   10.162.107.158   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-e02   Ready    <none>          42d    v1.28.6   10.162.107.58    10.162.107.58    Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-i01   Ready    <none>          687d   v1.28.6   172.17.114.212   172.17.114.212   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-i02   Ready    <none>          687d   v1.28.6   172.17.114.213   172.17.114.213   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-m01   Ready    control-plane   687d   v1.28.6   172.17.114.209   172.17.114.209   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-m02   Ready    control-plane   687d   v1.28.6   172.17.114.210   172.17.114.210   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-m03   Ready    control-plane   687d   v1.28.6   172.17.114.211   172.17.114.211   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-s01   Ready    <none>          687d   v1.28.6   172.17.114.214   172.17.114.214   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-w01   Ready    <none>          687d   v1.28.6   172.17.114.216   172.17.114.216   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-w02   Ready    <none>          687d   v1.28.6   172.17.114.217   172.17.114.217   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-w03   Ready    <none>          687d   v1.28.6   172.17.114.218   172.17.114.218   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-w04   Ready    <none>          687d   v1.28.6   172.17.114.219   172.17.114.219   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-w05   Ready    <none>          86d    v1.28.6   172.17.114.122   172.17.114.122   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
```
```
NAME            NAMESPACE       REVISION   UPDATED                                   STATUS     CHART                 APP VERSION
ingress-nginx   zkezone-nginx   5          2024-01-19 13:04:59.692826584 +0100 CET   deployed   ingress-nginx-4.9.0   1.9.5
```
USER-SUPPLIED VALUES:
```yaml
controller:
  admissionWebhooks:
    patch:
      tolerations:
        - effect: NoSchedule
          key: win.sbb.ch/external-worker
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
```
kubectl describe ingressclasses
```
Name:         zkezon2
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.9.5
              helm.sh/chart=ingress-nginx-4.9.0
Annotations:  meta.helm.sh/release-name: ingress-nginx
              meta.helm.sh/release-namespace: zkezone2-nginx
Controller:   k8s.io/ingress-zkezon2
Events:       <none>

Name:         zkezone
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.9.5
              helm.sh/chart=ingress-nginx-4.9.0
Annotations:  meta.helm.sh/release-name: ingress-nginx
              meta.helm.sh/release-namespace: zkezone-nginx
Controller:   k8s.io/ingress-zkezone
Events:       <none>
```
kubectl -n <ingresscontrollernamespace> get all -A -o wide
kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
kubectl -n <appnamespace> get all,ing -o wide
```
NAME                                   TYPE        CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE    SELECTOR
service/zkezone-controller             ClusterIP   172.17.46.57    10.162.107.158   80/TCP,443/TCP   311d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/zkezone-controller-admission   ClusterIP   172.17.46.162   <none>           443/TCP          311d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/zkezone-controller-metrics     ClusterIP   172.17.47.59    <none>           10254/TCP        311d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES                                                                                                                     SELECTOR
deployment.apps/zkezone-controller   1/1     1            1           311d   controller   registry.k8s.io/ingress-nginx/controller:v1.9.5@sha256:b3aba22b1da80e7acfc52b115cae1d4c687172cbf2b742d5b502419c25ff340e   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                                            DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES                                                                                                                     SELECTOR
replicaset.apps/zkezone-controller-5d4b6b89d6   0         0         0       311d    controller   k8s.gcr.io/ingress-nginx/controller:v1.1.0@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a        app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=5d4b6b89d6
replicaset.apps/zkezone-controller-64446b4f46   0         0         0       311d    controller   k8s.gcr.io/ingress-nginx/controller:v1.1.0@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a        app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=64446b4f46
replicaset.apps/zkezone-controller-7f9c8d4f5d   0         0         0       42d     controller   registry.k8s.io/ingress-nginx/controller:v1.5.1@sha256:4ba73c697770664c1e00e9f968de14e08f606ff961c76e5d7033a4a9c593c629   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7f9c8d4f5d
replicaset.apps/zkezone-controller-84449496db   0         0         0       6d20h   controller   registry.k8s.io/ingress-nginx/controller:v1.6.4@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=84449496db
replicaset.apps/zkezone-controller-857d75cdb5   1         1         1       5d23h   controller   registry.k8s.io/ingress-nginx/controller:v1.9.5@sha256:b3aba22b1da80e7acfc52b115cae1d4c687172cbf2b742d5b502419c25ff340e   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=857d75cdb5
```
```
Name:             rcm-planner-backend-1
Labels:           app=rcm-planner-backend
                  win.sbb.ch/argo-appname=rcm-planner-backend-test
Namespace:        rcm-planner-backend-test
Address:          172.17.46.57
Ingress Class:    zkezone
Default backend:  <default>
TLS:
  api-tls-1 terminates api-rcm-planner-test-1.mud.sbb.ch
Rules:
  Host                               Path  Backends
  api-rcm-planner-test-1.mud.sbb.ch
                                     /health   rcm-planner-backend-health:8081 (172.17.235.193:8081)
                                     /         rcm-planner-backend:8080 (172.17.235.193:8080)
Annotations:      nginx.ingress.kubernetes.io/auth-tls-match-cn: CN=api-haproxy-rcm-planner-test
                  nginx.ingress.kubernetes.io/auth-tls-secret: rcm-planner-backend-test/mud-ca-cert
                  nginx.ingress.kubernetes.io/auth-tls-verify-client: on
                  nginx.ingress.kubernetes.io/auth-tls-verify-depth: 1
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    47m (x8 over 5d)   nginx-ingress-controller  Scheduled for sync
  Normal  Sync    42m (x2 over 44m)  nginx-ingress-controller  Scheduled for sync
  Normal  Sync    41m                nginx-ingress-controller  Scheduled for sync
```
```
2024-01-25T11:46:33.545829457+01:00 10.162.107.158 - - [25/Jan/2024:10:46:33 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.003 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.003 200 5202192cf926110ba82ba54d7ec2140c
2024-01-25T11:46:35.272618809+01:00 10.162.107.158 - - [25/Jan/2024:10:46:35 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.002 200 bcd8dc4e532280f5a647ceaa52ff2413
2024-01-25T11:46:35.370921478+01:00 10.162.107.158 - - [25/Jan/2024:10:46:35 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - 5e26a2ff61e6eb9be2cc16e9d420af80
2024-01-25T11:46:35.471442805+01:00 10.162.107.158 - - [25/Jan/2024:10:46:35 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - 539b356ccfdcaeec3c70e3e1acbf54aa
2024-01-25T11:46:35.562090216+01:00 10.162.107.158 - - [25/Jan/2024:10:46:35 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.001 200 bd5698cd0b87794e932cd3ca547a2402
2024-01-25T11:46:37.061495284+01:00 I0125 10:46:37.061411 2 admission.go:149] processed ingress via admission controller {testedIngressLength:3 testedIngressTime:0.048s renderingIngressLength:3 renderingIngressTime:0.001s admissionTime:43.7kBs testedConfigurationSize:0.049}
2024-01-25T11:46:37.061495284+01:00 I0125 10:46:37.061440 2 main.go:107] "successfully validated configuration, accepting" ingress="rcm-planner-backend-int/rcm-planner-backend-1"
2024-01-25T11:46:37.067445757+01:00 I0125 10:46:37.067363 2 event.go:298] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"rcm-planner-backend-int", Name:"rcm-planner-backend-1", UID:"d40ac8ba-727e-4f6e-a8d3-1f810433a0e6", APIVersion:"networking.k8s.io/v1", ResourceVersion:"333519916", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
2024-01-25T11:46:37.289921287+01:00 10.162.107.158 - - [25/Jan/2024:10:46:37 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.001 200 98b18a4c01e2b4f55a6b933abf41bdb7
2024-01-25T11:46:37.385177321+01:00 10.162.107.158 - - [25/Jan/2024:10:46:37 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - 0f1aee2886f9979c7aa3486f21856311
2024-01-25T11:46:37.486010730+01:00 10.162.107.158 - - [25/Jan/2024:10:46:37 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - 2c38b854643beeb78415af20c7ca6f72
2024-01-25T11:46:37.579083317+01:00 10.162.107.158 - - [25/Jan/2024:10:46:37 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.001 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.002 200 9165a32925096b35074add6584ace962
2024-01-25T11:46:39.306439195+01:00 10.162.107.158 - - [25/Jan/2024:10:46:39 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.002 200 71222835bb2ca7276a26da0fc9917f63
2024-01-25T11:46:39.402701009+01:00 10.162.107.158 - - [25/Jan/2024:10:46:39 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - 93fa490adbca2a14b346efa3050532f3
2024-01-25T11:46:39.502599651+01:00 10.162.107.158 - - [25/Jan/2024:10:46:39 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - ead272c7895f7f12b35294588ac4aded
2024-01-25T11:46:39.598526523+01:00 10.162.107.158 - - [25/Jan/2024:10:46:39 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.002 200 25c83424ef1178d99ab1019ad0bedcbd
2024-01-25T11:46:41.323488518+01:00 10.162.107.158 - - [25/Jan/2024:10:46:41 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.002 200 cf105869e968dabe93cd9a3531852bec
```