Closed — ldawert-sys11 closed this issue 2 months ago.
@ldawert-sys11: This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Those other ciphers don't exist for TLSv1.3; why would you want them? TLSv1.3 has 3 standard ciphersuites and 2 optional ones. The TLSv1.3 spec also doesn't allow RSA, and its ciphersuite naming doesn't include a key-exchange type (because only Diffie-Hellman is allowed), so your attempted config could never be valid for 1.3, only for 1.2.
From OpenSSL docs: "Note that changing the TLSv1.2 and below cipher list has no impact on the TLSv1.3 ciphersuite configuration."
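In nginx terms this split means the two protocol generations are configured through separate directives. A minimal sketch (the cipher values are illustrative only, not a security recommendation; ssl_conf_command needs nginx ≥ 1.19.4 built against OpenSSL ≥ 1.1.1):

```
server {
    listen 443 ssl;
    ssl_protocols TLSv1.2 TLSv1.3;

    # TLSv1.2 and below: classic OpenSSL cipher-list syntax
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;

    # TLSv1.3 only: passed through to OpenSSL's separate ciphersuite setting
    ssl_conf_command Ciphersuites TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256;
}
```

Changing one directive has no effect on the list governed by the other, which is exactly the OpenSSL behaviour quoted above.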
I was trying to do something similar to this today because I'm having trouble connecting to a TLS enabled gRPC service via ingress-nginx. The backend only supports TLS 1.3, and I can connect to it via port-forward.
nginx is failing with the following in the logs:
SSL: error:1409442E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol version:SSL alert number 70
I then ran ssldump in the container to troubleshoot and saw the following:
New TCP connection #1105: 10.0.0.232(40460) <-> 10.0.0.129(4245)
1105 1 0.0041 (0.0041) C>S Handshake
ClientHello
Version 3.3
cipher suites
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
TLS_DHE_RSA_WITH_AES_256_CBC_SHA256
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
TLS_DHE_RSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
TLS_DHE_RSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
TLS_DHE_RSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_AES_256_GCM_SHA384
TLS_RSA_WITH_AES_128_GCM_SHA256
TLS_RSA_WITH_AES_256_CBC_SHA256
TLS_RSA_WITH_AES_128_CBC_SHA256
TLS_RSA_WITH_AES_256_CBC_SHA
TLS_RSA_WITH_AES_128_CBC_SHA
TLS_EMPTY_RENEGOTIATION_INFO_SCSV
compression methods
NULL
extensions
ec_point_formats
supported_groups
session_ticket
application_layer_protocol_negotiation
encrypt_then_mac
extended_master_secret
signature_algorithms
1105 2 0.0044 (0.0003) S>C Alert
level fatal
value protocol_version
1105 0.0059 (0.0015) C>S TCP RST
1103 0.0085 (0.0078) S>C TCP FIN
I'm not seeing TLS_AES_256_GCM_SHA384 in this list, despite using TLS 1.3, which supports it. Is it possible nginx has misconfigured ciphers for TLS 1.3?
ClientHello ... I'm not seeing TLS_AES_256_GCM_SHA384 in this list, despite using TLS 1.3, which supports it. Is it possible nginx has misconfigured ciphers for TLS 1.3?
No, that's the list of ciphers supported by the client, not the server. The ServerHello tells you which of those NGINX decided to use; in this case it rejected all of them.
@UnrealCraig ah, you're totally right, I mixed up the client/server hellos. Then it seems it's failing due to protocol_version.
Looking at the docs for grpc_ssl_protocols, this might be due to the default not including TLS 1.3:
Default: grpc_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
Yep, that was it. gRPC TLS backends that only support TLS 1.3 fail because the default grpc_ssl_protocols doesn't have TLS 1.3 enabled.
The following worked for gRPC with TLS termination at ingress, with TLS enabled on the backend:
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPCS"
    nginx.ingress.kubernetes.io/server-snippet: |
      grpc_ssl_protocols TLSv1.3;
    cert-manager.io/cluster-issuer: "selfsigned-ca-issuer"
  name: hubble-relay
  namespace: kube-system
spec:
  ingressClassName: nginx
  rules:
  - host: hubble-relay.127-0-0-1.sslip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hubble-relay
            port:
              number: 443
  tls:
  - secretName: hubble-relay-ingress-cert
    hosts:
    - hubble-relay.127-0-0-1.sslip.io
```
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
I don't see how the gRPC solution is related to mine, as there's no gRPC involved and the backends are also not queried via TLS but via plain HTTP.
@UnrealCraig
Those other ciphers don't exist for TLSv1.3; why would you want them?
I want to configure TLSv1.2 ciphers AND TLSv1.3 ciphersuites.
TLSv1.3 has 3 standard ciphersuites and 2 optional ones. The TLSv1.3 spec also doesn't allow RSA, and its ciphersuite naming doesn't include a key-exchange type (because only Diffie-Hellman is allowed), so your attempted config could never be valid for 1.3, only for 1.2.
Could you please elaborate a bit more on this? Maybe give an example that I can try out for the naming?
From OpenSSL docs: "Note that changing the TLSv1.2 and below cipher list has no impact on the TLSv1.3 ciphersuite configuration."
I know — however, as said, I would like to adjust both the TLSv1.2 and TLSv1.3 settings.
Thanks in advance for your help! Leon
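The OpenSSL split discussed above can be observed locally with the `openssl ciphers` tool; a quick sketch (assumes OpenSSL >= 1.1.1 on the PATH; the cipher names are examples only):

```shell
# Restrict the <=TLSv1.2 cipher list to a single cipher...
# ...yet the TLSv1.3 ciphersuites remain at their defaults,
# because the cipher list does not affect them:
openssl ciphers -s -tls1_3 'ECDHE-RSA-AES256-GCM-SHA384'

# Changing TLSv1.3 suites requires the separate -ciphersuites option,
# which is what nginx's `ssl_conf_command Ciphersuites ...` maps onto:
openssl ciphers -s -tls1_3 -ciphersuites 'TLS_AES_256_GCM_SHA384' 'ECDHE-RSA-AES256-GCM-SHA384'
```

The first command still lists the default TLSv1.3 suites despite the narrow cipher list; the second lists only the suite named via -ciphersuites.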
/remove-lifecycle rotten
@ldawert-sys11 your original configuration looks correct, but notably it sets the TLS ciphers for the server block for your www2.ldawert.metakube.io server name specifically. Is it possible the nmap test was targeting the default server block instead of your server block with the overridden ciphers?
I.e. was nmap sending the correct Server Name Indication (SNI) value in the TLS ClientHello record? According to the nmap ssl-enum-ciphers documentation, the tls.servername script argument would control the SNI value, and I suspect it may be blank by default.
E.g. is the test result what you expect if you instead run:
$ nmap -sV --script ssl-enum-ciphers --script-args tls.servername=www2.ldawert.metakube.io -p 443 www2.ldawert.metakube.io
I'd also suggest adding your ssl_conf_command as an http-snippet so it applies to all server blocks, and repeating your test to see if that also corrects the result for your original nmap command.
Hi @jstangroome, I tried both:
the tls.servername script argument would control the SNI value and I suspect it may be blank by default
I tried it with the --script-args tls.servername=www2.ldawert.metakube.io option, but the results stayed the same.
I'd also suggest adding your ssl_conf_command as a http-snippet
Also no success with this one. Behaviour was still the same.
It's very frustrating that this is not in the docs, because they make it look like TLS 1.2 and 1.3 can be configured together: https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-tls-version-and-ciphers and https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#ssl-ciphers. Additionally, the helm chart docs (which I can't seem to locate at the moment) also made it look configurable for both 1.2 and 1.3. However, the OpenSSL project drastically changed how ciphersuites can be configured at runtime between 1.2 and 1.3, and the nginx developers have not been shy with their disapproval.
I would love to see this at least mentioned in the docs somewhere: the 'ssl-ciphers' config directive only applies to TLS 1.2 (and earlier). For TLS 1.3 you have to use a generic config directive called http-snippet, which lets you drop in raw nginx config (and hope it's formatted correctly). This is what we have tested to work (from the ingress-nginx ConfigMap):
ssl-ciphers: ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256
http-snippet: ssl_conf_command Ciphersuites TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256;
Note the ';' at the end of the http-snippet line. It's been well over a year since we added this workaround, and the latest ingress-nginx still does not have a fix.
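Assembled into a full manifest, that controller ConfigMap would look roughly like this (the metadata names and namespace are illustrative and must match your controller's --configmap flag; the cipher strings are the tested values from the comment above):

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller    # must match the controller's --configmap flag
  namespace: ingress-nginx
data:
  # applies to TLSv1.2 and below only
  ssl-ciphers: "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256"
  # raw nginx config for the http block; sets the TLSv1.3 ciphersuites.
  # The trailing ';' is required because this is literal nginx syntax.
  http-snippet: |
    ssl_conf_command Ciphersuites TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256;
```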
@razholio There are not enough resources, so sometimes it takes too long. If you submit a PR fixing the docs, I am sure it will get appropriate attention.
TLSv1.2 is not supported either... only TLSv1 and TLSv1.3 work 😓
Hi,
Can someone here say for sure that this method https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-tls-version-and-ciphers is invalid for configuring both the TLS version and the cipher suites?
I did a test and I can see that out of the box, this is what is offered.
So I would assume that, as of today, this is not a bug.
Please re-open with data from the current release of the controller and any other findings if my assessment about being able to configure TLS v1.3 and the cipher suite via ConfigMap or the annotation is not true.
For now I will close the issue, as there are too many open inactive issues skewing the info on what we are tracking as action-items.
/close
@longwuyuan: Closing this issue.
NGINX Ingress controller version: Release v1.1.2, build bab0fbab0c1a7c3641bd379f27857113d574d904, NGINX version nginx/1.19.9
Kubernetes version: v1.21.3
Environment:
kubectl version
kubectl get nodes -o wide
helm ls -A | grep -i ingress
helm -n <ingresscontrollernamespace> get values <helmreleasename>
kubectl describe ingressclasses
```
Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.1.2
              helm.sh/chart=ingress-nginx-4.0.18
Annotations:  meta.helm.sh/release-name: ingress-nginx
              meta.helm.sh/release-namespace: syseleven-ingress-nginx
Controller:   k8s.io/ingress-nginx
Events:
```
kubectl -n <ingresscontrollernamespace> get all -A -o wide
```
NAME                                            READY   STATUS    RESTARTS   AGE     IP            NODE                             NOMINATED NODE   READINESS GATES
pod/ingress-nginx-controller-59bd6dd5ff-4r7gm   1/1     Running   0          4d23h   172.25.1.33   loving-carson-65bdbd7d4b-2grqg
```
kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
```
Name:           ingress-nginx-controller-59bd6dd5ff-tqkxg
Namespace:      syseleven-ingress-nginx
Priority:       0
Node:           loving-carson-65bdbd7d4b-dkltn/192.168.1.44
Start Time:     Mon, 04 Apr 2022 11:38:01 +0200
Labels:         app.kubernetes.io/component=controller
                app.kubernetes.io/instance=ingress-nginx
                app.kubernetes.io/name=ingress-nginx
                pod-template-hash=59bd6dd5ff
Annotations:    cni.projectcalico.org/podIP: 172.25.0.61/32
                kubectl.kubernetes.io/restartedAt: 2022-03-25T13:40:04+01:00
Status:         Running
IP:             172.25.0.61
IPs:
  IP:           172.25.0.61
Controlled By:  ReplicaSet/ingress-nginx-controller-59bd6dd5ff
Containers:
  controller:
    Container ID:  docker://9ecceaf892ffa0b48a9e088bd0ee5fd4eaf5b02dddc9fbad19a80078c9942438
    Image:         k8s.gcr.io/ingress-nginx/controller:v1.1.2@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c
    Image ID:      docker-pullable://k8s.gcr.io/ingress-nginx/controller@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c
    Ports:         80/TCP, 443/TCP, 10254/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
      --election-id=ingress-controller-leader
      --controller-class=k8s.io/ingress-nginx
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --default-backend-service=syseleven-ingress-nginx/ingress-nginx-extension
    State:          Running
      Started:      Mon, 04 Apr 2022 11:38:07 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  256Mi
    Requests:
      cpu:     1
      memory:  256Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-59bd6dd5ff-tqkxg (v1:metadata.name)
      POD_NAMESPACE:  syseleven-ingress-nginx (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zw5m2 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  kube-api-access-zw5m2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
```
kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
```
Name:                     ingress-nginx-controller
Namespace:                syseleven-ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.1.2
                          helm.sh/chart=ingress-nginx-4.0.18
Annotations:              loadbalancer.openstack.org/proxy-protocol: true
                          meta.helm.sh/release-name: ingress-nginx
                          meta.helm.sh/release-namespace: syseleven-ingress-nginx
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.240.20.72
IPs:                      10.240.20.72
LoadBalancer Ingress:     195.192.153.120
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  31006/TCP
Endpoints:                172.25.0.61:80,172.25.1.33:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  30351/TCP
Endpoints:                172.25.0.61:443,172.25.1.33:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
```
kubectl -n <appnamespace> get all,ing -o wide
```
NAME        READY   STATUS    RESTARTS   AGE   IP            NODE                             NOMINATED NODE   READINESS GATES
pod/nginx   1/1     Running   0          30d   172.25.0.50   loving-carson-65bdbd7d4b-dkltn
```
kubectl -n <appnamespace> describe ing <ingressname>
```
Name:             test-ingress
Namespace:        ldawert
Address:          195.192.153.120
Default backend:  default-http-backend:80 (
```

What happened:
Trying to configure TLSv1.3 ciphers with:
The configuration made by the server-snippet is loaded correctly into the server config block in the ingress controller pod.
However, when testing the TLS ciphers, for example with nmap, it shows that the default ciphers for TLSv1.3 are still being used.

What you expected to happen:
Setting ssl_conf_command Ciphersuites via nginx.ingress.kubernetes.io/server-snippet should configure the TLSv1.3 ciphers used for the server block it is configured in.

How to reproduce it:
create app
create ingress
```
$ cat ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-ciphers: |
      'ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384'
    nginx.ingress.kubernetes.io/server-snippet: |
      ssl_protocols TLSv1.2 TLSv1.3;
      ssl_conf_command Ciphersuites "TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384";
    kubernetes.io/ingress.class: nginx
  name: test-ingress
spec:
  rules:
  - host: testdomain.local
    http:
      paths:
      - backend:
          service:
            name: nginx
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
$ kubectl apply -f ingress.yaml
```
create debugging pod
```
$ cat netshoot.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: tmp-shell
  name: tmp-shell
spec:
  containers:
  - image: nicolaka/netshoot
    name: tmp-shell
    resources: {}
    command:
    - sleep
    - "100000"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
$ kubectl apply -f netshoot.yaml
```
Get IP of ingress pod
Check ciphers in debugging pod:
```
$ kubectl exec -ti tmp-shell -- bash
bash-5.1$ echo "
```