captainswain opened 6 months ago
This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance.
The `triage/accepted` label can be added by org members by writing `/triage accepted` in a comment.
/remove-kind bug
/triage needs-information
Hi there @longwuyuan,
First off hello and thanks for the reply!
Can you post the links to docs/references about the host not being required? Asking because I think there is text out there saying that the hosts in the tls and http fields must match.
Hi there, I could not find specific documentation regarding the host not being required. In this case, wouldn't the hosts and the tls hosts match on the wildcard?
Also, I read that the server-alias implementation just copies the config of the host into a new server block and just sets the server block's name to the value of the alias.
Here is the server block created with the alias; it looks identical apart from the addition of the alias domain under server_name.
```nginx
## start server test.random.bar.example
server {
    server_name test.random.bar.example test.cluster.foo.example ;

    http2 on;

    listen 80 ;
    listen [::]:80 ;
    listen 443 ssl;
    listen [::]:443 ssl;

    set $proxy_upstream_name "-";

    ssl_certificate_by_lua_block {
        certificate.call()
    }

    location / {
        set $namespace      "ingress-nginx";
        set $ingress_name   "test-ingress";
        set $service_name   "http-svc";
        set $service_port   "80";
        set $location_path  "/";
        set $global_rate_limit_exceeding n;

        rewrite_by_lua_block {
            lua_ingress.rewrite({
                force_ssl_redirect = false,
                ssl_redirect = true,
                force_no_ssl_redirect = false,
                preserve_trailing_slash = false,
                use_port_in_redirects = false,
                global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
            })
            balancer.rewrite()
            plugins.run()
        }

        # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
        # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
        # other authentication method such as basic auth or external auth useless - all requests will be allowed.
        #access_by_lua_block {
        #}

        header_filter_by_lua_block {
            lua_ingress.header()
            plugins.run()
        }

        body_filter_by_lua_block {
            plugins.run()
        }

        log_by_lua_block {
            balancer.log()
            plugins.run()
        }

        port_in_redirect off;

        set $balancer_ewma_score -1;
        set $proxy_upstream_name "ingress-nginx-http-svc-80";
        set $proxy_host          $proxy_upstream_name;
        set $pass_access_scheme  $scheme;
        set $pass_server_port    $server_port;
        set $best_http_host      $http_host;
        set $pass_port           $pass_server_port;

        set $proxy_alternative_upstream_name "";

        client_max_body_size 1m;

        proxy_set_header Host $best_http_host;

        # Pass the extracted client certificate to the backend

        # Allow websocket connections
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;

        proxy_set_header X-Request-ID $req_id;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Host $best_http_host;
        proxy_set_header X-Forwarded-Port $pass_port;
        proxy_set_header X-Forwarded-Proto $pass_access_scheme;
        proxy_set_header X-Forwarded-Scheme $pass_access_scheme;
        proxy_set_header X-Scheme $pass_access_scheme;

        # Pass the original X-Forwarded-For
        proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

        # mitigate HTTPoxy Vulnerability
        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
        proxy_set_header Proxy "";

        # Custom headers to proxied server

        proxy_connect_timeout 5s;
        proxy_send_timeout    60s;
        proxy_read_timeout    60s;

        proxy_buffering    off;
        proxy_buffer_size  4k;
        proxy_buffers      4 4k;

        proxy_max_temp_file_size 1024m;

        proxy_request_buffering on;
        proxy_http_version      1.1;

        proxy_cookie_domain off;
        proxy_cookie_path   off;

        # In case of errors try the next upstream server before returning an error
        proxy_next_upstream          error timeout;
        proxy_next_upstream_timeout  0;
        proxy_next_upstream_tries    3;

        proxy_pass http://upstream_balancer;

        proxy_redirect off;
    }
}
## end server test.random.bar.example
```
If you add another host to the ingress rules I do see a new server block created and the certificate works as intended.
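For reference, the workaround described above can be sketched roughly as follows. This is a hypothetical manifest assembled from the domains and backend (`http-svc`, port 80) seen elsewhere in this report; it assumes the `cluster-foo-tls` and `random-bar-tls` secrets created in the reproduction steps below, and it simply lists the alias domain as an ordinary host and tls entry instead of relying on the `server-alias` annotation:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  namespace: ingress-nginx
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - test.random.bar.example
      secretName: random-bar-tls
    # Listing the alias as a regular host/tls entry makes the controller
    # serve its certificate, at the cost of duplicating the rule below.
    - hosts:
        - test.cluster.foo.example
      secretName: cluster-foo-tls
  rules:
    - host: test.random.bar.example
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: http-svc
                port:
                  number: 80
    - host: test.cluster.foo.example
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: http-svc
                port:
                  number: 80
```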
This is stale, but we won't close it automatically; just bear in mind the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any question or request to prioritize this, please reach out on #ingress-nginx-dev on Kubernetes Slack.
What happened:
Reopening issue which is the same as #4832.
I defined an ingress resource with a server alias on a separate domain, using the nginx.ingress.kubernetes.io/server-alias annotation, and two certificates: a wildcard that matches the primary domain and a wildcard that matches the alias. When sending a request that matches the alias but not the primary host, the fake self-signed certificate is used. When sending a request that matches the primary host, the configured certificate is used. If I manually specify a different subdomain of the server-alias as a host in the ingress, the certificate is loaded as intended for the server-alias subdomain.

What you expected to happen:
I expected to receive the configured certificate for the server-alias used in the ingress, without having to define it as a host.

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version):
NGINX Ingress controller
Release: v1.10.0
Build: 71f78d49f0a496c31d4c19f095469f3f23900f8a
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.25.3
Kubernetes version (use kubectl version):
Client Version: v1.28.7+k3s1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.7+k3s1

Environment:
Cloud provider or hardware configuration:
OS (e.g. from /etc/os-release): Fedora 39
Kernel (e.g. uname -a): Linux fedora 6.6.13-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Sat Jan 20 18:03:28 UTC 2024 x86_64 GNU/Linux
Install tools:
Basic cluster related info:
How was the ingress-nginx-controller installed:
helm ls -A | grep -i ingress
helm -n <ingresscontrollernamespace> get values <helmreleasename>
Current State of the controller:
kubectl describe ingressclasses
kubectl -n <ingresscontrollernamespace> get all -A -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.43.0.1 443/TCP 170m
kube-system service/kube-dns ClusterIP 10.43.0.10 53/UDP,53/TCP,9153/TCP 170m k8s-app=kube-dns
kube-system service/metrics-server ClusterIP 10.43.85.57 443/TCP 170m k8s-app=metrics-server
ingress-nginx service/ingress-nginx-controller-admission ClusterIP 10.43.51.35 443/TCP 141m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
default service/demo ClusterIP 10.43.129.220 80/TCP 140m app=demo
ingress-nginx service/ingress-nginx-controller LoadBalancer 10.43.28.240 192.168.1.109 80:32206/TCP,443:31521/TCP 141m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
default service/echo-test ClusterIP 10.43.24.96 80/TCP 64m app=echo-test
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
kube-system daemonset.apps/svclb-ingress-nginx-controller-c11be5c2 1 1 1 1 1 141m lb-tcp-80,lb-tcp-443 rancher/klipper-lb:v0.4.5,rancher/klipper-lb:v0.4.5 app=svclb-ingress-nginx-controller-c11be5c2
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
kube-system deployment.apps/local-path-provisioner 1/1 1 1 170m local-path-provisioner rancher/local-path-provisioner:v0.0.26 app=local-path-provisioner
kube-system deployment.apps/coredns 1/1 1 1 170m coredns rancher/mirrored-coredns-coredns:1.10.1 k8s-app=kube-dns
default deployment.apps/demo 1/1 1 1 140m nginx nginx app=demo
ingress-nginx deployment.apps/ingress-nginx-controller 1/1 1 1 141m controller registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kube-system deployment.apps/metrics-server 1/1 1 1 170m metrics-server rancher/mirrored-metrics-server:v0.6.3 k8s-app=metrics-server
default deployment.apps/echo-test 2/2 2 2 64m echo-test nginxdemos/hello app=echo-test
NAMESPACE NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
kube-system replicaset.apps/local-path-provisioner-6c86858495 1 1 1 170m local-path-provisioner rancher/local-path-provisioner:v0.0.26 app=local-path-provisioner,pod-template-hash=6c86858495
kube-system replicaset.apps/coredns-6799fbcd5 1 1 1 170m coredns rancher/mirrored-coredns-coredns:1.10.1 k8s-app=kube-dns,pod-template-hash=6799fbcd5
default replicaset.apps/demo-5f7bb54887 1 1 1 140m nginx nginx app=demo,pod-template-hash=5f7bb54887
ingress-nginx replicaset.apps/ingress-nginx-controller-6dc8c8fdf4 1 1 1 141m controller registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=6dc8c8fdf4
kube-system replicaset.apps/metrics-server-67c658944b 1 1 1 170m metrics-server rancher/mirrored-metrics-server:v0.6.3 k8s-app=metrics-server,pod-template-hash=67c658944b
default replicaset.apps/echo-test-864d879bcf 2 2 2 64m echo-test nginxdemos/hello app=echo-test,pod-template-hash=864d879bcf
Name: ingress-nginx-controller-6dc8c8fdf4-jwwbp
Namespace: ingress-nginx
Priority: 0
Service Account: ingress-nginx
Node: fedora/192.168.1.109
Start Time: Mon, 04 Mar 2024 13:59:18 -0800
Labels: app.kubernetes.io/component=controller
        app.kubernetes.io/instance=ingress-nginx
        app.kubernetes.io/managed-by=Helm
        app.kubernetes.io/name=ingress-nginx
        app.kubernetes.io/part-of=ingress-nginx
        app.kubernetes.io/version=1.10.0
        helm.sh/chart=ingress-nginx-4.10.0
        pod-template-hash=6dc8c8fdf4
Annotations:
Status: Running
IP: 10.42.0.3
IPs:
IP: 10.42.0.3
Controlled By: ReplicaSet/ingress-nginx-controller-6dc8c8fdf4
Containers:
controller:
Container ID: containerd://7bb9af2efeeef87b0e9afa6a3cb57a508159bef066a8f034eed5ad056126bc4c
Image: registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c
Image ID: registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c
Ports: 80/TCP, 443/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
SeccompProfile: RuntimeDefault
Args:
/nginx-ingress-controller
--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
--election-id=ingress-nginx-leader
--controller-class=k8s.io/ingress-nginx
--ingress-class=nginx
--configmap=$(POD_NAMESPACE)/ingress-nginx-controller
--validating-webhook=:8443
--validating-webhook-certificate=/usr/local/certificates/cert
--validating-webhook-key=/usr/local/certificates/key
--enable-metrics=false
State: Running
Started: Mon, 04 Mar 2024 14:31:28 -0800
Last State: Terminated
Reason: Unknown
Exit Code: 255
Started: Mon, 04 Mar 2024 13:59:30 -0800
Finished: Mon, 04 Mar 2024 14:31:26 -0800
Ready: True
Restart Count: 1
Requests:
cpu: 100m
memory: 90Mi
Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-nginx-controller-6dc8c8fdf4-jwwbp (v1:metadata.name)
POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
LD_PRELOAD: /usr/local/lib/libmimalloc.so
Mounts:
/usr/local/certificates/ from webhook-cert (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hv6n6 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
webhook-cert:
Type: Secret (a volume populated by a Secret)
SecretName: ingress-nginx-admission
Optional: false
kube-api-access-hv6n6:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
Normal RELOAD 30m (x8 over 110m) nginx-ingress-controller NGINX reload triggered due to a change in configuration
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
        app.kubernetes.io/instance=ingress-nginx
        app.kubernetes.io/managed-by=Helm
        app.kubernetes.io/name=ingress-nginx
        app.kubernetes.io/part-of=ingress-nginx
        app.kubernetes.io/version=1.10.0
        helm.sh/chart=ingress-nginx-4.10.0
Annotations: meta.helm.sh/release-name: ingress-nginx
             meta.helm.sh/release-namespace: ingress-nginx
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.43.28.240
IPs: 10.43.28.240
LoadBalancer Ingress: 192.168.1.109
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 32206/TCP
Endpoints: 10.42.0.3:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 31521/TCP
Endpoints: 10.42.0.3:443
Session Affinity: None
External Traffic Policy: Cluster
Events:
```shell
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```
Create a root CA
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout rootCA.key -out rootCA.pem -subj "/C=US/ST=New York/L=New York/O=Example Inc./OU=Root CA/CN=example.com"
```shell
cat > cluster_foo_example.conf <<EOF
[req]
default_bits = 2048
prompt = no
default_md = sha256
x509_extensions = v3_req
distinguished_name = dn

[dn]
C = US
ST = New York
L = New York
O = "Example, Inc."
OU = "Cluster Foo Example"
CN = cluster.foo.example

[v3_req]
subjectAltName = @alt_names

[alt_names]
DNS.1 = cluster.foo.example
DNS.2 = *.cluster.foo.example
EOF

openssl req -new -nodes -x509 -newkey rsa:2048 \
  -keyout cluster_foo_example.key -out cluster_foo_example.crt \
  -config cluster_foo_example.conf -days 365
```
```shell
cat > random_bar_example.conf <<EOF
[req]
default_bits = 2048
prompt = no
default_md = sha256
x509_extensions = v3_req
distinguished_name = dn

[dn]
C = US
ST = California
L = San Francisco
O = "Bar Inc."
OU = "Random Bar Example"
CN = random.bar.example

[v3_req]
subjectAltName = @alt_names

[alt_names]
DNS.1 = random.bar.example
DNS.2 = *.random.bar.example
EOF

openssl req -new -nodes -x509 -newkey rsa:2048 \
  -keyout random_bar_example.key -out random_bar_example.crt \
  -config random_bar_example.conf -days 365
```
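As a sanity check, it can be worth confirming that the generated certificates actually carry the wildcard SAN before wiring them into the cluster. A minimal sketch, using a throwaway copy of the config above under hypothetical /tmp paths:

```shell
# Recreate a throwaway self-signed cert with the same SAN layout
# (hypothetical /tmp file names) and print its subjectAltName extension.
cat > /tmp/san_check.conf <<EOF
[req]
default_bits = 2048
prompt = no
default_md = sha256
x509_extensions = v3_req
distinguished_name = dn

[dn]
CN = random.bar.example

[v3_req]
subjectAltName = @alt_names

[alt_names]
DNS.1 = random.bar.example
DNS.2 = *.random.bar.example
EOF

openssl req -new -nodes -x509 -newkey rsa:2048 \
  -keyout /tmp/san_check.key -out /tmp/san_check.crt \
  -config /tmp/san_check.conf -days 1 2>/dev/null

# Both the apex and the wildcard entry should be listed in the output.
openssl x509 -in /tmp/san_check.crt -noout -ext subjectAltName
```

If the wildcard (`*.random.bar.example`) is missing here, the controller would never serve the expected certificate regardless of the alias handling.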
Create the Kubernetes secret for cluster.foo.example
```shell
kubectl create secret tls cluster-foo-tls \
  --key cluster_foo_example.key \
  --cert cluster_foo_example.crt \
  --namespace ingress-nginx
```
Create the Kubernetes secret for random.bar.example
```shell
kubectl create secret tls random-bar-tls \
  --key random_bar_example.key \
  --cert random_bar_example.crt \
  --namespace ingress-nginx
```
```shell
echo "
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/server-alias: 'test.cluster.foo.example'
  name: test-ingress
  namespace: ingress-nginx
spec:
  ingressClassName: nginx
  tls:
```
➜ curl --insecure -vvI https://test.random.bar.example 2>&1 | awk 'BEGIN { cert=0 } /^* SSL connection/ { cert=1 } /^*/ { if (cert) print }'
➜ curl --insecure -vvI https://test.cluster.foo.example/ 2>&1 | awk 'BEGIN { cert=0 } /^* SSL connection/ { cert=1 } /^*/ { if (cert) print }'