Closed mattmattox closed 2 years ago
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@rikatz could you have a look at this one? There is also a PR, but as far as I understand, we don't want to do this. Also, it is not considered to be a bug.
/remove-kind bug
/kind design
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-kind design
/kind feature
kind/design is migrated to kind/feature, see https://github.com/kubernetes/community/issues/6144 for more details.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle stale
/lifecycle rotten
This problem is generally solved by having whatever is in the CN be also part of the SAN list.
From a standards perspective, the SAN list is supposed to be preferred (ref):
If a subjectAltName extension of type dNSName is present, that MUST be used as the identity. Otherwise, the (most specific) Common Name field in the Subject field of the certificate MUST be used. Although the use of the Common Name is existing practice, it is deprecated and Certification Authorities are encouraged to use the dNSName instead.
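Not part of the original thread — a small runnable Go sketch (the controller is written in Go) of the RFC behaviour quoted above, using the standard library: crypto/x509's VerifyHostname ignores the Common Name whenever SAN dNSName entries are present. The certificate shape mirrors the one from this issue; newTestCert is an illustrative helper, not ingress-nginx code.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// newTestCert builds an in-memory self-signed cert shaped like the one in
// this issue: CN=rancher.example.com plus wildcard SAN dNSName entries.
func newTestCert() *x509.Certificate {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "rancher.example.com"},
		DNSNames:     []string{"*.rancher.example.com", "*.rancher.local"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	return cert
}

func main() {
	cert := newTestCert()
	// SANs are present, so the CN is never consulted: the bare apex name
	// fails (a wildcard covers exactly one extra label, not the apex)...
	fmt.Println("rancher.example.com matches:", cert.VerifyHostname("rancher.example.com") == nil)
	// ...while a name covered by the wildcard SAN succeeds.
	fmt.Println("foo.rancher.example.com matches:", cert.VerifyHostname("foo.rancher.example.com") == nil)
}
```

This is the same reason the maintainer's suggestion below (send foo.rancher.example.com instead) makes the request match.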
Change the curl command to send the Host header as foo.rancher.example.com. The behaviour posted is expected per the config, as is evident from the error message.
/close
@longwuyuan: Closing this issue.
NGINX Ingress controller version: 0.35.0
Kubernetes version (use kubectl version): 1.19.7
Environment:
- Kernel (e.g. uname -a): Linux b1ubphyp01.support.tools 5.4.0-65-generic #73-Ubuntu SMP Mon Jan 18 17:25:17 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
What happened:
What you expected to happen:
ingress-nginx should check the Common Name for a match first, then check the Subject Alternative Names.
How to reproduce it:
Install minikube/kind
Install the ingress controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/deploy.yaml
Install an application that will act as default backend (is just an echo app)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/http-svc.yaml
Create a self-signed cert
echo "
[ req ]
prompt = no
distinguished_name = server_distinguished_name
req_extensions = v3_req

[ server_distinguished_name ]
commonName = rancher.example.com
stateOrProvinceName = IL
countryName = US
emailAddress = support@rancher.com
organizationName = Rancher Labs
organizationalUnitName = Rancher Support

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[ alt_names ]
DNS.0 = *.rancher.example.com
DNS.1 = *.rancher.local
" > req.conf
openssl req -x509 -nodes -days 730 -newkey rsa:2048 -keyout tls.key -out tls.crt -config req.conf -extensions 'v3_req'
kubectl create secret tls test-cert --cert=tls.crt --key=tls.key
Create an ingress
echo "
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: foo-bar
spec:
  rules:
Make a request
POD_NAME=$(kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -o name)
kubectl exec -it -n ingress-nginx $POD_NAME -- curl -H 'Host: rancher.example.com' https://localhost
You'll get the default fake certificate.
Anything else we need to know:
This is caused by https://github.com/kubernetes/ingress-nginx/blob/a268ec493c6a1d81d0932f9aeedd1781df2fe7b1/internal/ingress/controller/certificate.go#L59
/kind bug
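To make the design question concrete, here is a minimal, hypothetical sketch (not the actual certificate.go code; certHosts and its signature are assumptions for illustration) of the name-extraction order the RFC quote in this thread implies: SAN dNSName entries first, with the deprecated Common Name used only as a legacy fallback when the certificate carries no SANs at all.

```go
package main

import "fmt"

// certHosts is a hypothetical helper showing the RFC 2818 preference order
// for deciding which names a certificate should be matched against:
// SAN dNSNames win outright; the deprecated CN is a fallback for
// certificates that have no SAN entries at all.
func certHosts(commonName string, dnsNames []string) []string {
	if len(dnsNames) > 0 {
		return dnsNames // SANs present: the CN is ignored entirely
	}
	if commonName != "" {
		return []string{commonName} // legacy fallback for SAN-less certs
	}
	return nil
}

func main() {
	// The cert from this issue: the CN is not repeated in the SAN list,
	// so it never participates in matching.
	fmt.Println(certHosts("rancher.example.com",
		[]string{"*.rancher.example.com", "*.rancher.local"}))
	// A SAN-less cert still matches via its CN.
	fmt.Println(certHosts("rancher.example.com", nil))
}
```

Under this reading, the fix the maintainers suggest — include whatever is in the CN in the SAN list as well — is the standards-compliant resolution, rather than checking the CN first.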