cha7ri opened this issue 1 year ago. Status: Open.
@cha7ri: This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
[~/Documents/scratch/ingress-nginx-issues/issue-8972]
% diff nginx.conf.original nginx.conf.1
[~/Documents/scratch/ingress-nginx-issues/issue-8972]
% diff nginx.conf.original nginx.conf.2
[~/Documents/scratch/ingress-nginx-issues/issue-8972]
% k get ing
NAME             CLASS    HOSTS                                ADDRESS   PORTS     AGE
cafe-ingress     <none>   cafe.example.com                               80, 443   3m19s
cafe-ingress-1   <none>   dummy.example.com,cafe.example.com             80, 443   61s
[~/Documents/scratch/ingress-nginx-issues/issue-8972]
% ls -la
total 64
drwxrwxr-x 2 me me 4096 Aug 24 23:24 .
drwxrwxr-x 3 me me 4096 Aug 24 23:15 ..
-rw-rw-r-- 1 me me 826 Aug 24 23:23 cafe-ingress-1.yaml
-rw-rw-r-- 1 me me 588 Aug 24 23:21 cafe-ingress.yaml
-rw-rw-r-- 1 me me 14322 Aug 24 23:22 nginx.conf.1
-rw-rw-r-- 1 me me 14322 Aug 24 23:24 nginx.conf.2
-rw-rw-r-- 1 me me 14322 Aug 24 23:16 nginx.conf.original
[~/Documents/scratch/ingress-nginx-issues/issue-8972]
You can post a more accurate reproduction procedure and more real data from your cluster and the controller, like the nginx.conf and describe output etc.
You can also upgrade to the latest release of the controller before you try to gather data.
/remove-kind bug
@longwuyuan Sorry for not being specific about the reproduction procedure. I updated and tested it locally. Please let me know if I have to provide more information.
I think you are saying that the config below should be caught by the webhook. Currently I am not sure why it is not catching that. Maybe there should be a check and a test for it. But I think it is a waste of time and resources to deal with the use case implied by your reproduction code; the explanation is below. The bigger problem is something else, which you don't want to comment on even after I pointed it out, or else you want to solve some problem that is not clear from the info in this issue.
I pointed out that the below does not seem to meet the Ingress object spec:
...
spec:
  tls:
    - hosts:
        - cafe.example.com
      secretName: cafe-secret
  rules:
    - host: "dummy.example.com"
...
The reason is the explain output below:
% k explain ingress.spec.tls
KIND: Ingress
VERSION: networking.k8s.io/v1
RESOURCE: tls <[]Object>
DESCRIPTION:
TLS configuration. Currently the Ingress only supports a single TLS port,
443. If multiple members of this list specify different hosts, they will be
multiplexed on the same port according to the hostname specified through
the SNI TLS extension, if the ingress controller fulfilling the ingress
supports SNI.
IngressTLS describes the transport layer security associated with an
Ingress.
FIELDS:
hosts <[]string>
Hosts are a list of hosts included in the TLS certificate. The values in
this list must match the name/s used in the tlsSecret. Defaults to the
wildcard host setting for the loadbalancer controller fulfilling this
Ingress, if left unspecified.
secretName <string>
SecretName is the name of the secret used to terminate TLS traffic on port
443. Field is left optional to allow TLS routing based on SNI hostname
alone. If the SNI host in a listener conflicts with the "Host" header field
used by an IngressRule, the SNI host is used for termination and value of
the Host header is used for routing.
[~]
% k explain ingress.spec.rules.host
KIND: Ingress
VERSION: networking.k8s.io/v1
FIELD: host <string>
DESCRIPTION:
Host is the fully qualified domain name of a network host, as defined by
RFC 3986. Note the following deviations from the "host" part of the URI as
defined in RFC 3986: 1. IPs are not allowed. Currently an IngressRuleValue
can only apply to the IP in the Spec of the parent Ingress.
2. The `:` delimiter is not respected because ports are not allowed.
Currently the port of an Ingress is implicitly :80 for http and :443 for
https. Both these may change in the future. Incoming requests are matched
against the host before the IngressRuleValue. If the host is unspecified,
the Ingress routes all traffic based on the specified IngressRuleValue.
Host can be "precise" which is a domain name without the terminating dot of
a network host (e.g. "foo.bar.com") or "wildcard", which is a domain name
prefixed with a single wildcard label (e.g. "*.foo.com"). The wildcard
character '*' must appear by itself as the first DNS label and matches only
a single label. You cannot have a wildcard label by itself (e.g. Host ==
"*"). Requests will be matched against the Host field in the following way:
1. If Host is precise, the request matches this rule if the http host
header is equal to Host. 2. If Host is a wildcard, then the request matches
this rule if the http host header is equal to the suffix (removing the
first label) of the wildcard rule.
So to me, with my limited knowledge, I expect the hostname dummy.example.com in the tls.hosts list, and I expect that the cert in cafe-secret should have that FQDN in the subject or alt-subject fields. But you have made no comments about it, so I am confused on that aspect.
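For illustration only, this is roughly what I would expect a spec-conformant version of that fragment to look like, with dummy.example.com added to tls.hosts (assuming the certificate in cafe-secret actually covers both FQDNs):

...
spec:
  tls:
    - hosts:
        - cafe.example.com
        - dummy.example.com
      secretName: cafe-secret
  rules:
    - host: "dummy.example.com"
...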
Next, I pointed out that the issue template asks questions which seem to have been completely ignored by you, even after I pointed this out earlier. So there are no logs from the controller pod visible to show what happened when you say you tested locally.
Information like the curl commands and responses from your local testing would also have helped. I am sure you don't own the domain example.com, so I guess you created a self-signed certificate for it. Your commands and outputs for creating that single certificate, with subject and alt-name like cafe.example.com and dummy.example.com, would help too.
Self-signed certs would fail the CA check, so knowing how you got around that to do local testing would help too.
Please consider all this, then repost the info asked for in the issue template and update.
Also please consider discussing this with other folks in the ingress-nginx-users channel of the Kubernetes Slack. There are more people there, and there is a lack of resources here on GitHub to work on support issues like this.
Once you talk to other people and find data that is proof of a bug or a problem with the controller, please post all the data requested above along with that proof, so it becomes easy for a developer to solve the problem.
I confirm the bug; I'm facing the exact same issue you described. I think the line you highlighted is the culprit; I'd suggest opening a PR to remove it.
I checked the actual generated rules: the nginx.conf generated for the two Ingresses is the same (the part about cafe.example.com).
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue or PR as fresh with /remove-lifecycle stale
Mark this issue or PR as rotten with /lifecycle rotten
Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
What happened:
I did a quick investigation and the issue seems to be here: this code checks only the first host and returns nil if it doesn't overlap. More details below.
The admission webhook is not working as expected: it doesn't prevent the user from creating two Ingress resources that have the same host/path combination. Example:
cafe-ingress.yaml and cafe-ingress-1.yaml (the manifests are sketched below):
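The manifest attachments were collapsed in this copy of the thread, so the following is a plausible reconstruction pieced together from the k get ing output and the webhook error quoted in this issue; the backend service name (tea-svc) and its port are assumptions, while the hosts, the /tea path, and cafe-secret are taken from the thread.

# cafe-ingress.yaml (reconstruction; backend name/port are assumed)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  tls:
    - hosts:
        - cafe.example.com
      secretName: cafe-secret
  rules:
    - host: "cafe.example.com"
      http:
        paths:
          - path: /tea
            pathType: Prefix
            backend:
              service:
                name: tea-svc
                port:
                  number: 80
---
# cafe-ingress-1.yaml (reconstruction); note that the overlapping host
# cafe.example.com is the SECOND rule, which the webhook never inspects
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress-1
spec:
  tls:
    - hosts:
        - cafe.example.com
      secretName: cafe-secret
  rules:
    - host: "dummy.example.com"
      http:
        paths:
          - path: /tea
            pathType: Prefix
            backend:
              service:
                name: tea-svc
                port:
                  number: 80
    - host: "cafe.example.com"
      http:
        paths:
          - path: /tea
            pathType: Prefix
            backend:
              service:
                name: tea-svc
                port:
                  number: 80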
In the example above both Ingresses will be created. But if we change cafe-ingress-1.yaml and make host: "cafe.example.com" the first host, it will fail with:
"cafe-ingress-1": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "cafe.example.com" and path "/tea" is already defined in ingress default/cafe-ingress
What you expected to happen:
It shouldn't create both Ingresses even if we put cafe.example.com as the second host.
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version):
Kubernetes version (use kubectl version):
How to reproduce this issue:
1. Create a new cluster using kind
2. Install the ingress-nginx controller
3. Make sure it's running
4. Apply the first ingress (cafe-ingress.yaml)
5. Apply the second ingress (cafe-ingress-1.yaml)
6. Check that both ingresses are created
7. Check that the nginx config has changed
8. Edit the second ingress so that cafe.example.com becomes the first host (see the fragment after this list)
9. You should see the webhook error quoted above
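For step 8, the edit is simply reordering the rules in cafe-ingress-1 so that the overlapping host comes first. A sketch of the relevant fragment, reusing the assumed tea-svc backend from the reconstruction above:

...
spec:
  rules:
    - host: "cafe.example.com"   # now the first host, so the webhook catches the overlap
      http:
        paths:
          - path: /tea
            pathType: Prefix
            backend:
              service:
                name: tea-svc
                port:
                  number: 80
    - host: "dummy.example.com"
...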