Closed lazybetrayer closed 2 years ago
@lazybetrayer: This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
What is the use case for creating two ingress resources with identical rules?
Thanks, Long
/remove-kind bug
One use case, as described here (https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/#multiple-ingress-nginx-controllers), would be different ingress controllers for internal vs. external traffic. We do this, for instance, to enforce mTLS auth for external traffic while letting internal clients talk to the ingress without auth, while serving up the exact same content at the exact same path. I'm sure other use cases can be thought of.
This has now stopped working for us and forced me to pin the chart version to 3.33.0 (the last working one for us; I haven't tested anything after that), and I could not reinstate the previous behaviour following any documentation I could find (I tried this: https://kubernetes.github.io/ingress-nginx/#i-have-more-than-one-controller-running-in-my-cluster-and-i-want-to-use-the-new-spec, but it's quite hard to understand in the first place).
Any advice on how to keep using multiple ingress controllers for the same host+path combination would be highly appreciated. I would also argue that this should remain a bug, as previously working behaviour has been broken.
Thanks, Nico
/kind bug
Thanks. One aspect is still unclear to me. I was asking about the ingress spec host value, which is an FQDN, like api1.mydomain.com with path /. Can you elaborate on why the internal and external ingress will both configure the same api1.mydomain.com and /?
Thanks, Long
In my company, the same domain can be mapped to different IPs, one for internal access and one for external access. To support this, we deploy two ingress controllers. Sometimes we create two ingress resources with identical rules but different ingress classes, since they are used for different IPs.
Ok, thanks. So just FYI, two controllers in one cluster are a long-supported and functioning feature, so this is not a bug. Both controllers need to be configured with a different ingressClass, and the ingress objects need to be configured with an ingressClassName.
Thanks, Long
Please show the output of:
kubectl describe ingressclasses
kubectl get ing <ingressname> -o yaml
Thanks, Long
You can see this issue https://github.com/kubernetes/ingress-nginx/issues/7538 closed as recently as yesterday, where a user is successfully using 2 controllers in one cluster.
I will change this back to kind support for now. If our triaging results in data that proves a bug of some sort, then we can set the bug label again.
/remove-kind bug
/kind support
Let me clarify, we run 2 controllers in one cluster successfully. Both controllers are configured with different ingressClass and the ingress objects are configured with a correct ingressClassName. There are no problems with most ingress resources.
ingress classes:
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: external
spec:
  controller: k8s.io/ingress-nginx-external
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: internal
spec:
  controller: k8s.io/ingress-nginx-internal
The controllers are running with --controller-class=k8s.io/ingress-nginx-internal and --controller-class=k8s.io/ingress-nginx-external respectively.
If we create the resources below, test2 will be rejected by the validating webhook:
Error from server (BadRequest): error when creating "ing.yml": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "www.example.com" and path "/" is already defined in ingress default/test1
But this used to work before v1.0.0.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test1
spec:
  ingressClassName: external
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          service:
            name: test
            port:
              number: 1111
        path: /
        pathType: Prefix
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test2
spec:
  ingressClassName: internal
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          service:
            name: test
            port:
              number: 1111
        path: /
        pathType: Prefix
Thank you. This manifest, coupled with the initial message, clarifies the problem.
We will have to check why that validation code was changed.
Can you also please confirm that both manifests in the previous message were used to create ingress resources in the same namespace. If yes, it seems you are asking for a feature; can I summarise it as such?
Also, one more request: can you please post the below data to this issue;
- helm ls -A
- helm -n <ns> get values <releasename> # for each helm release installed on the cluster
- kubectl get all -A -o wide | grep -i ingress
- kubectl describe ingressclasses
- kubectl -n <ingcontrollernamespace> describe po <ingcontrollerpodname>
- kubectl -n <appnamespace> get ing <ingressname> -o yaml # for both ingresses
/assign
Hi,
We are also affected by this issue. I wanted to share our use case, so that it's clear why this is useful. We use nginx ingress as the "main" ingress to the k8s cluster. Some applications need a more advanced ingress controller than what nginx offers, and for those we use Skipper. In this case nginx forwards traffic to the Skipper service. Skipper must also be configured using k8s ingress resources. You therefore end up with two ingress resources with the same host and path (and possibly in the same namespace) but a different ingress class.
Thanks for taking care of this issue!
I am also affected by this issue. One way to avoid errors when creating/updating ingresses is to disable the validating hook and remove it from the cluster.
We've got another use case where it's important that Ingress objects are only validated by the controller for the matching ingressClass. We want our ingress controller to cache some assets, so we have a basic cache configuration in the ingress controller config:
controller:
  config:
    http-snippet: |
      proxy_cache_path /tmp/nginx-cache levels=1:2 keys_zone=static-cache:2m max_size=100m inactive=1d use_temp_path=off;
Then in the Ingress objects themselves we refer to this cache:
annotations:
  nginx.ingress.kubernetes.io/configuration-snippet: |
    proxy_cache static-cache;
  ...
Our second ingress controller is not configured for caching, but still tries to validate the Ingress objects, leading to the Ingress object being rejected by an ingress controller that will never actually handle it, because it has a different ingressClassName:
client.go:250: [debug] error updating the resource "test-static-files":
cannot patch "test-static-files" with kind Ingress: admission webhook "validate.nginx.ingress.kubernetes.io" denied the request:
-------------------------------------------------------------------------------
Error: exit status 1
2021/10/06 15:59:54 [warn] 295#295: the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg4198530908:149
nginx: [warn] the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg4198530908:149
2021/10/06 15:59:54 [warn] 295#295: the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg4198530908:150
nginx: [warn] the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg4198530908:150
2021/10/06 15:59:54 [warn] 295#295: the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /tmp/nginx-cfg4198530908:151
nginx: [warn] the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /tmp/nginx-cfg4198530908:151
2021/10/06 15:59:54 [emerg] 295#295: "proxy_cache" zone "static-cache" is unknown in /tmp/nginx-cfg4198530908:2165
nginx: [emerg] "proxy_cache" zone "static-cache" is unknown in /tmp/nginx-cfg4198530908:2165
nginx: configuration file /tmp/nginx-cfg4198530908 test failed
-------------------------------------------------------------------------------
(Ignore the [warn] lines; that is a separate issue with ingress-nginx, where the config template wasn't updated when upgrading nginx. The [emerg] is what fails the validation here.)
So it definitely looks like a bug in ingress-nginx to me.
Having the same issue in our dev and production clusters.
We use a split-horizon DNS where we want all ingresses (hosts and paths) accessible over a VPN but only some ingresses accessible to the public.
To achieve this, we've got the same setup as @lazybetrayer: two ingress controllers (nginx-internal, nginx-external) with respective ingress classes, and DNS set up so that, if you're on the VPN, you resolve the internal load balancer, else the external/public load balancer.
The simplest way to reproduce the issue is:
$ helm create nginx
$ helm install nginx ./nginx -n default
$ kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-internal
  namespace: default
spec:
  ingressClassName: nginx-internal
  rules:
  - host: nginx.example.com
    http:
      paths:
      - backend:
          service:
            name: nginx
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-external
  namespace: default
spec:
  ingressClassName: nginx-external
  rules:
  - host: nginx.example.com
    http:
      paths:
      - backend:
          service:
            name: nginx
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
EOF
Which then returns the error:
ingress.networking.k8s.io/nginx-internal unchanged
Error from server (BadRequest): error when creating "STDIN": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "nginx.example.com" and path "/" is already defined in ingress default/nginx-internal
$ kubectl get ingressclasses -o yaml
apiVersion: v1
items:
- apiVersion: networking.k8s.io/v1
  kind: IngressClass
  metadata:
    annotations:
      meta.helm.sh/release-name: nginx-external-ingress-controller
      meta.helm.sh/release-namespace: kube-system
    creationTimestamp: "2021-10-01T09:11:05Z"
    generation: 1
    labels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: nginx-external-ingress-controller
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/version: 1.0.3
      helm.sh/chart: ingress-nginx-4.0.5
    name: nginx-external
    resourceVersion: "226814148"
    selfLink: /apis/networking.k8s.io/v1/ingressclasses/nginx-external
    uid: d3afa791-8fa7-473f-a2ae-c398397e6f4a
  spec:
    controller: k8s.io/nginx-external
- apiVersion: networking.k8s.io/v1
  kind: IngressClass
  metadata:
    annotations:
      ingressclass.kubernetes.io/is-default-class: "true"
      meta.helm.sh/release-name: nginx-internal-ingress-controller
      meta.helm.sh/release-namespace: kube-system
    creationTimestamp: "2021-10-01T09:11:06Z"
    generation: 1
    labels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: nginx-internal-ingress-controller
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/version: 1.0.3
      helm.sh/chart: ingress-nginx-4.0.5
    name: nginx-internal
    resourceVersion: "226814176"
    selfLink: /apis/networking.k8s.io/v1/ingressclasses/nginx-internal
    uid: 449d39ee-2990-4d55-abbc-9b750795b919
  spec:
    controller: k8s.io/nginx-internal
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
@rikatz Could you maybe have a look at this issue, also considering the discussion about your changes here: https://github.com/kubernetes/ingress-nginx/pull/7341/files/f5c4dc299c2622dcc0e8ff038dac9cc4b0f4fcbb#diff-4198ec010671801881244a8052177f31bcbc682c99fbd7391bceb136025c0568
Can we have this issue classified as a bug again, please? :-)
/kind bug
I have the same use case as described by @rblaine95. I just upgraded from v0.45.0 (helm chart 3.27.0), where the admission controller was not complaining about this. Had to disable it for now.

> I have the same use case as described by @rblaine95. I just upgraded from v0.45.0 (helm-chart 3.27.0) where the admission controller was not complaining about this. Had to disable it for now.

I use v0.45.0 but this issue still appears. What do you mean by disabling it?

> I use v0.45.0 but this issue still appear.

The problem described in this issue was introduced in https://github.com/kubernetes/ingress-nginx/pull/7341, released with controller v1 (helm chart v4). If you have a similar problem in v0.45, it's probably a different thing.

> What you mean by disable it?
You can set controller.admissionWebhooks.enabled=false when installing the helm chart.
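Expressed as Helm values rather than a --set flag, that same switch is (a minimal sketch; everything else stays at chart defaults):

```yaml
controller:
  admissionWebhooks:
    # Skip creating the ValidatingWebhookConfiguration entirely,
    # so this release no longer rejects Ingress objects at admission time.
    enabled: false
```

Note this disables validation for every Ingress handled by that release, not just the colliding host+path ones.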
Deploying two ingress controllers, no matter whether in a single namespace or in different ones, will produce two ValidatingWebhookConfigurations with the same rules:
rules:
- apiGroups:
  - networking.k8s.io
  apiVersions:
  - v1
  operations:
  - CREATE
  - UPDATE
  resources:
  - ingresses
  scope: '*'
which results in all ingress resources being submitted to all controllers for validation; in my case: nginx-ingress-private, nginx-ingress-public and ingress-alb.
I've tried configuring admissionWebhooks.objectSelector, and now each controller validates only its own ingresses. I guess the previous version of the controller was skipping all ingresses not matching its ingressClass, but not anymore?
Here are my helm values:
private controller:
controller:
  ingressClassByName: true
  ingressClassResource:
    name: nginx-private
    enabled: true
    default: false
    controllerValue: "k8s.io/ingress-nginx-private"
  electionID: ingress-controller-leader-private
  admissionWebhooks:
    objectSelector:
      matchLabels:
        ingress-class: nginx-private
public controller:
controller:
  ingressClassByName: true
  ingressClassResource:
    name: nginx-public
    enabled: true
    default: false
    controllerValue: "k8s.io/ingress-nginx-public"
  electionID: ingress-controller-leader-public
  admissionWebhooks:
    objectSelector:
      matchLabels:
        ingress-class: nginx-public
and the public ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-public
  labels:
    ingress-class: nginx-public
spec:
  ingressClassName: nginx-public
P.S. I'm still not sure what exactly ingressClassByName: true does, but it's recommended in the docs; I configured electionID to avoid potential issues like https://github.com/kubernetes/ingress-nginx/issues/7881.
@aliusmiles, thanks for the hint about admissionWebhooks.objectSelector and electionID.
The goal of introducing IngressClass was to replace the deprecated annotation kubernetes.io/ingress.class. Until this is completely solved, you could use admissionWebhooks.objectSelector.matchAnnotations.kubernetes\.io/ingress.class and keep the old annotation instead of introducing a new label on each Ingress.
@visit1985 as far as I know there is no matchAnnotations selector. If there were, it would be great for this scenario.
Here I arrived at the same workaround as @aliusmiles, but used matchExpressions instead, because my private ingress controller is the default in the cluster and has watchIngressWithoutClass enabled, so this way it will deal with all ingress objects that don't belong to the public controller:
matchExpressions:
- key: ingress-class
  operator: NotIn
  values:
  - nginx-public
This way I only need to enforce the ingress-class label on ingress objects that belong to the public controller.
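For context, in Helm values this NotIn selector sits in the same place as the matchLabels variant shown earlier in the thread, but on the default (private) controller (a sketch; names follow that earlier example):

```yaml
controller:
  admissionWebhooks:
    objectSelector:
      matchExpressions:
      # Validate everything EXCEPT ingresses labelled for the public controller.
      - key: ingress-class
        operator: NotIn
        values:
        - nginx-public
```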
> P.S. still not sure what exactly ingressClassByName: true but it's recommended in docs, configured electionID to avoid potential issues as in #7881
We only need to configure electionID when the controllers are in the same namespace, since otherwise they would use the same configmap to control leader election.
ingressClassByName was introduced for backward compatibility for those who had multiple IngressClasses using the same default spec.controller (ingress-nginx), so the controllers can locate an IngressClass by its name instead of by spec.controller, as it was before.
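So, as a sketch, two releases installed into the same namespace would carry distinct election IDs (the ID string here is just an example):

```yaml
# values-internal.yaml (hypothetical file name, internal controller release)
controller:
  electionID: ingress-controller-leader-internal
```

The other release would set a different value, e.g. electionID: ingress-controller-leader-external, matching what the values pastes earlier in this thread already do.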
/remove-kind support
Having the same issue as described by @mkreidenweis-schulmngr. We have an http-snippet with a custom log_format in one controller and use this log_format in ingresses that belong to that controller. The new version of the nginx-ingress-controller's validating webhook rejects these ingresses, because the other controllers in the cluster don't have this log_format:
[emerg] 307#307: unknown log format "prod_format" in /tmp/nginx-cfg502608730:724
nginx: [emerg] unknown log format "prod_format" in /tmp/nginx-cfg502608730:724
nginx: configuration file /tmp/nginx-cfg502608730 test
To me this is a bug and it needs to be fixed.
I have exactly the same use case as @rblaine95 (split-DNS scenario). For me this is also a bug and it needs to be fixed.
@visit1985 matchAnnotations does not seem to be valid for k8s v1.21.6:
error validating data: ValidationError(ValidatingWebhookConfiguration.webhooks[0].objectSelector): unknown field "matchAnnotations" in io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelector
admissionWebhooks:
  enabled: true
  objectSelector:
    matchAnnotations:
      kubernetes.io/ingress.class: nginx-private
Has this been acknowledged as a bug? It's been 4 months and one severe CVE, and this bug makes it hard to update our environment, since we also have different controllers with the same host and same path.
I'm also facing this issue. We have two cluster-wide ingress controllers, nginx and nginx-internal, in addition to an older namespaced nginx controller.
external controller:
controller:
  ingressClassByName: true
  ingressClassResource:
    name: nginx
    enabled: true
    default: false
    controllerValue: "k8s.io/nginx"
  electionID: ingress-controller-leader
  ...
  extraArgs:
    v: "1"
    annotations-prefix: ingress.kubernetes.io
    ingress-class: nginx
    controller-class: k8s.io/nginx
internal controller
ingressClassByName: true
ingressClassResource:
name: nginx-internal
enabled: true
default: false
controllerValue: "k8s.io/nginx-internal"
electionID: ingress-controller-leader-internal
...
extraArgs:
v: "1"
annotations-prefix: ingress.kubernetes.io
ingress-class: nginx-internal
controller-class: k8s.io/nginx-internal
We have to support ingress objects using both the old kubernetes.io/ingress.class
annotation and the new spec.ingressClassName
field for both of the cluster-wide controllers.
For legacy reasons we also have a third, namespaced nginx ingress controller with a dedicated ingress class, running independently of the two cluster-wide controllers. This third controller is a little older and relies only on the kubernetes.io/ingress.class
annotation, and the validation webhook is NOT enabled in this controller. The problem is that the validation webhook enabled for the two cluster-wide ingress controllers also tries to validate the ingress objects created for this third, legacy controller.
If I turn off the validation webhook in the two cluster-wide controllers, all three ingress controllers work as expected, and they only pick up ingress objects with the correct class, regardless of whether the ingress class name is defined in annotations or via the spec.ingressClassName field.
For legacy reasons the two cluster-wide controllers use the old default annotation prefix ingress.kubernetes.io, as defined in extraArgs.annotations-prefix
.
The third, namespaced controller uses the default prefix, nginx.ingress.kubernetes.io
This is an example of an ingress object that should be processed by the namespaced controller, but unfortunately the validation webhook rejects it because of the different annotation prefix.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx-arcgis-test
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
name: test-ingress
spec:
rules:
- host: foo.bar
http:
paths:
- backend:
service:
name: backend
port:
number: 443
path: /
pathType: ImplementationSpecific
Error message
Resource: "networking.k8s.io/v1, Resource=ingresses", GroupVersionKind: "networking.k8s.io/v1, Kind=Ingress"
Name: "test-ingress", Namespace: "gdonline-db-test"
for: "C:\\Users\\palk\\AppData\\Local\\Temp\\code-stdin-xii": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: This deployment has a custom annotation prefix defined. Use 'ingress.kubernetes.io' instead of 'nginx.ingress.kubernetes.io'
Error logged by one of the unrelated ingress controllers, which should only handle ingress class nginx-internal:
I1221 09:24:33.706885 8 store.go:367] "Ignoring ingress because of error while validating ingress class" ingress="gdonline-db-test/test-ingress" error="ingress class annotation is not equal to the expected by Ingress Controller"
...
E1221 09:47:27.190755 8 main.go:90] "invalid ingress configuration" err="This deployment has a custom annotation prefix defined. Use 'ingress.kubernetes.io' instead of 'nginx.ingress.kubernetes.io'" ingress="test-ingress/gdonline-db-test"
The ingress controller clearly states that it will ignore this ingress object because it is of a different class; so far so good. But later it still rejects the same ingress object. This should not happen.
The admissions webhook for nginx shouldn't be validating ingresses that are handled by other controllers. It seems a bit presumptuous frankly. I would definitely appreciate this feature.
We are running two copies of nginx in our cluster (one internal, one external) and have thus far not had any issues. However, we are replacing the external instance (only) with a third-party api-gateway controller using its own ingressClass. This controller supports a custom implementation-specific path syntax in the Ingress that is used to perform a form of substitution, e.g. /some/{placeholder}/here
(where the placeholder is transformed based on rules configured in the controller).
Because the nginx admissions webhook is validating all Ingresses, we are unable to add these new resources, even though they have a different ingress class, because they fail nginx's validation.
The solution above only works if we are able to add labels to all of our existing ingresses such that we can then add a selector to both of the admissions webhooks. While simple, this isn't something easily accomplished in our org.
Since we aren't removing nginx outright (we're phasing out one instance and keeping the other), I will need to begin updating and re-deploying all of our application charts so that they start including a new label. Only once that entire process is complete will I be able to update the selector on the webhook. I can only hope that it doesn't conflict with the timing of introducing this new controller.
For now, since we're only just beginning this process in our development environment, I will likely just disable the admissions webhook entirely and cross my fingers that this will be accepted as a feature.
What's the ETA on this?
This was deleted by @rikatz in https://github.com/kubernetes/ingress-nginx/pull/7470/commits/704f7efe6d6bb7bc417b7eca78516ee3a0c40308#diff-4198ec010671801881244a8052177f31bcbc682c99fbd7391bceb136025c0568L224-L228 which was merged as part of #7470. The message is "* Remove ingressclass code from admission" The test for this BUG[?] was also deleted https://github.com/kubernetes/ingress-nginx/commit/704f7efe6d6bb7bc417b7eca78516ee3a0c40308#diff-31ead0c8e0237297fa8168130f1ba9c0ca0d86bb715aa2f7a67201c790c7188aL199-L201
I also think this is critical, unless we are missing something. Since there is no safe/sane way to filter the ingresses that get sent to the webhook, the webhook needs to gracefully ignore any that it won't process, whether because of the legacy class annotation or because of the new IngressClass.
Hopefully @rikatz can chime in.
According to the comments in https://github.com/kubernetes/ingress-nginx/pull/7341/files/f5c4dc299c2622dcc0e8ff038dac9cc4b0f4fcbb#diff-4198ec010671801881244a8052177f31bcbc682c99fbd7391bceb136025c0568 it looks to me like this feature was removed mostly because it was somehow hard to do with the new ingressClass logic. (Comment says "how can we check for ingress class?")
It was actually @tao12345666333 who suggested that checking the ingress class doesn't need to be done any more. Maybe he can comment on his reasoning.
Comment says "how can we check for ingress class?"
I do see where they're coming from: with the change to ingressClass, the incoming object no longer contains enough information to determine whether a request is yours. Before, it was "the object contains an annotation that matches mine, so I know it's mine"; now it is "the object references an IngressClass whose controller value may be mine", so you can't know whether an ingress is yours without an API lookup.
I see a few possible solutions:
chiming in :) @ElvinEfendi took a look and reimplemented the check, overall lgtm, I'm just taking some better view on it.
We can make a new release as soon as this gets merged IMO
@rikatz This was merged but it wasn't released as part of 4.0.17. Any chance we can have it released?
We can make a release
@tao12345666333 do you wanna start the release process?
Yes, I'll handle it
Tested with chart version 4.0.18; the controller with the validating webhook still processed the one it shouldn't 😢
my setup:
two controllers with ingress-classes configured: nginx and nginx-private
nginx-controller - admission webhook enabled
nginx-private-controller - admission webhook disabled
Created an ingress for class nginx-private; the logs showed nginx-controller rejecting it and nginx-private-controller accepting it. However, on nginx-controller the log stated that the admission webhook would accept this ingress object. Then the ingress was stuck with no LB assigned.
Workaround: disabled admission webhooks on both controllers; it seems to work now.
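The workaround above can be expressed in Helm chart values (a sketch, assuming the ingress-nginx chart used throughout this thread):

```yaml
# Disable the validating admission webhook so this controller
# no longer rejects Ingress objects that belong to other classes.
controller:
  admissionWebhooks:
    enabled: false
```

The trade-off is that broken nginx configuration in an Ingress of this controller's own class is no longer caught at admission time.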
It works for me. I will double check if I have both admission webhooks enabled just in case.
Can confirm that our setup (https://github.com/kubernetes/ingress-nginx/issues/7546#issuecomment-937565614) after upgrading to 1.1.2 (Chart v4.0.18) and enabling admission webhooks, is working exactly as expected.
Unless we use the deprecated kubernetes.io/ingress.class
annotation.
But that annotation is deprecated, so I expect that.
The kubernetes.io/ingress.class annotation is enabled on the controllers for legacy reasons. As soon as all ingresses are migrated, we'll remove the deprecated annotation feature and check again, thx!
Update 01-06-22:
We had both ingress controllers running in the same namespace; only the controller that was started first was able to bind the load balancer to the ingress object.
We did some testing, and after separating the controllers into their own namespaces (nginx-private, nginx-public), everything works like a charm.
Update 08-06-22: if the controllers are running in the same namespace, you must also set "electionID" in the Helm chart values to a unique value per controller; then it also works. Otherwise the controller that is started second will just be a follower of the first one.
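A sketch of the per-controller values implied above (the election IDs are illustrative; any distinct values work):

```yaml
# Controller installed as the public instance:
controller:
  electionID: ingress-controller-leader-public

# Controller installed as the private instance (separate Helm release):
# controller:
#   electionID: ingress-controller-leader-private
```

Each controller then runs its own leader election instead of both competing for the same ConfigMap lock.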
We've got another use case where it's important that Ingress objects are only validated by the controller for the matching ingressClass.
@mkreidenweis-schulmngr Do you know if there is a workaround for this? I'm seeing this issue just now, where I have created a second controller to bypass our CDN so we can use mTLS directly. I'm relying on HTTP headers set by nginx in an http-snippet, but I'm getting the same error you've described:
Error from server (BadRequest): error when creating "api-mtls-ingress.yaml": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request:
-------------------------------------------------------------------------------
...
nginx: [emerg] unknown "ssl_client_s_dn_cn" variable
My ConfigMap is set up in the same namespace as the 2nd controller:
kind: ConfigMap
apiVersion: v1
metadata:
name: ingress-nginx-controller-9443
namespace: ingress-nginx-9443
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
data:
http-snippet: |
map $ssl_client_s_dn $ssl_client_s_dn_cn {
default "";
~CN=(?<CN>[^/,\"]+) $CN;
}
map $ssl_client_s_dn $ssl_client_s_dn_ou {
default "";
~OU=(?<OU>[^/,\"]+) $OU;
}
map $ssl_client_s_dn $ssl_client_s_dn_dc {
default "";
~DC=(?<DC>[^/,\"]+) $DC;
}
map $ssl_client_s_dn $ssl_client_s_dn_o {
default "";
~O=(?<O>[^/,\"]+) $O;
}
map $ssl_client_s_dn $ssl_client_s_dn_c {
default "";
~C=(?<C>[^/,\"]+) $C;
}
map $ssl_client_s_dn $ssl_client_s_dn_uuid {
default "";
~UUID=(?<UUID>[^/,\"]+) $UUID;
}
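For illustration, a variable derived by the maps above (e.g. $ssl_client_s_dn_cn) could then be forwarded upstream from an individual Ingress; this is a sketch, not taken from the thread, and the header name is an assumption:

```yaml
# Hypothetical Ingress annotation passing the extracted client-cert CN upstream.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Client-CN $ssl_client_s_dn_cn;
```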
It seems this issue persists. Am I missing some configuration to get around it?
@jseparovic is it possible for you to provide step-by-step instructions that someone can use on their minikube cluster or a kind cluster to reproduce this problem?
@longwuyuan Yep no worries. I'll update this comment once I've put something together
@longwuyuan I cannot seem to reproduce this on another cluster, and the original cluster has since been reconfigured to not require the configmap on the secondary controller. Cheers
thanks for updating
I am observing this issue or something else masquerading as it:
~$ k get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
toolbox-external nginx-external chip-eastus2-e.np.dayforcehcm.com 20.*** 80 61m
toolbox-internal nginx-internal chip-eastus2-i.np.dayforcehcm.com,chip-eastus2-e.np.dayforcehcm.com 10.*** 80 61m
~$ helm get manifest toolbox | k apply -f-
...
Error from server (BadRequest): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"networking.k8s.io/v1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{\"nginx.ingress.kubernetes.io/rewrite-target\":\"/external/$2\"},\"labels\":{\"app\":\"toolbox\"},\"name\":\"toolbox-external\",\"namespace\":\"chip\"},\"spec\":{\"ingressClassName\":\"nginx-external\",\"rules\":[{\"host\":\"chip-eastus2-e.np.dayforcehcm.com\",\"http\":{\"paths\":[{\"backend\":{\"service\":{\"name\":\"toolbox\",\"port\":{\"number\":80}}},\"path\":\"/toolbox(/|$)(.*)\",\"pathType\":\"Prefix\"}]}}]}}\n"},"creationTimestamp":null,"generation":null,"resourceVersion":null,"uid":null},"status":null}
to:
Resource: "networking.k8s.io/v1, Resource=ingresses", GroupVersionKind: "networking.k8s.io/v1, Kind=Ingress"
Name: "toolbox-external", Namespace: "chip"
for: "STDIN": error when patching "STDIN": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "chip-eastus2-e.np.dayforcehcm.com" and path "/toolbox(/|$)(.*)" is already defined in ingress chip/toolbox-internal
~$ k delete ing toolbox-internal
ingress.networking.k8s.io "toolbox-internal" deleted
~$ helm get manifest toolbox | k apply -f-
service/toolbox unchanged
deployment.apps/toolbox unchanged
deployment.apps/toolbox-secret-sync-csi unchanged
deployment.apps/toolbox-secret-sync-test unchanged
deployment.apps/toolbox-secret-thru-env-test unchanged
ingress.networking.k8s.io/toolbox-external configured
ingress.networking.k8s.io/toolbox-internal created
azurekeyvaultsecret.spv.no/dummy-secret-sync unchanged
azurekeyvaultsecret.spv.no/dummy-secret-thru-env unchanged
secretproviderclass.secrets-store.csi.x-k8s.io/toolbox unchanged
~$
Deleting the internal ingress allows the external ingress to be modified; the internal ingress is then created without a problem. The code runs in a deployment pipeline.
Here are the ingress definitions:
~$ helm get manifest toolbox | grep -A31 -B3 ': Ingress'
---
# Source: chip-toolbox/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
labels:
app: toolbox
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /external/$2
name: toolbox-external
namespace: chip
spec:
ingressClassName: nginx-external
rules:
- host: chip-eastus2-e.np.dayforcehcm.com
http:
paths:
- path: /toolbox(/|$)(.*)
pathType: Prefix
backend:
service:
name: toolbox
port:
number: 80
---
# Source: chip-toolbox/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
labels:
app: toolbox
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /internal/$2
name: toolbox-internal
namespace: chip
spec:
ingressClassName: nginx-internal
rules:
- host: chip-eastus2-i.np.dayforcehcm.com
http:
paths:
- path: /toolbox(/|$)(.*)
pathType: Prefix
backend:
service:
name: toolbox
port:
number: 80
- host: chip-eastus2-e.np.dayforcehcm.com
http:
paths:
- path: /toolbox(/|$)(.*)
pathType: Prefix
backend:
service:
name: toolbox
port:
number: 80
---
~$
The nginx version:
~$ helm ls -n internal-nginx-ingress
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
internal-nginx-ingress internal-nginx-ingress 19 2023-03-30 15:05:19.692571633 +0000 UTC deployed ingress-nginx-4.5.2 1.6.4
~$ helm ls -n external-nginx-ingress
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
external-nginx-ingress external-nginx-ingress 3 2023-06-26 17:06:41.916257728 +0000 UTC deployed ingress-nginx-4.5.2 1.6.4
~$
Please, let me know what else I can provide to help you to help me. Thank you.
NGINX Ingress controller version: v1.0.0
Kubernetes version (use kubectl version): v1.20.9
Environment: Bare Metal
What happened:
Before v1.0.0, there's a check to skip validating ingress with a different ingressclass: https://github.com/kubernetes/ingress-nginx/blob/f3c50698d98299b1a61f83cb6c4bb7de0b71fb4b/internal/ingress/controller/controller.go#L224
In v1.0.0, this code was removed. With multiple ingress controllers, both will validate the same ingress. If we create two ingresses with the same host and path but different ingress classes, the second one will be rejected.
What you expected to happen:
skip validating ingress with a different ingressclass
/kind bug
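A minimal sketch of the reported conflict (hostnames, class names, and service names are illustrative): two Ingresses with an identical host and path but different classes. With two controllers running, applying the second object is rejected by the first controller's webhook with "host ... and path ... is already defined".

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-external            # applied first; accepted
spec:
  ingressClassName: nginx-external
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-internal            # applied second; rejected despite the different class
spec:
  ingressClassName: nginx-internal
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app
            port:
              number: 80
```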