Closed rittneje closed 5 months ago
I see this on a default install:
% k -n ingress-nginx describe role ingress-nginx
Name: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/version=1.1.0
helm.sh/chart=ingress-nginx-4.0.13
Annotations: meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: ingress-nginx
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
events [] [] [create patch]
configmaps [] [] [get list watch create]
endpoints [] [] [get list watch]
pods [] [] [get list watch]
secrets [] [] [get list watch]
services [] [] [get list watch]
ingressclasses.networking.k8s.io [] [] [get list watch]
ingresses.networking.k8s.io [] [] [get list watch]
configmaps [] [ingress-controller-leader] [get update]
namespaces [] [] [get]
ingresses.networking.k8s.io/status [] [] [update]
[~]
% k -n ingress-nginx get cm
NAME DATA AGE
ingress-controller-leader 0 37h
ingress-nginx-controller 1 37h
kube-root-ca.crt 1 37h
[~]
Can you show what you have from a default install?
/remove-kind bug /kind documentation /area docs /triage needs-information
@longwuyuan I'm not quite sure what you are asking. We had previously installed v0.35.0, where the config map was named ingress-controller-leader-nginx.
Since this change was not mentioned anywhere, my question is whether we should manually delete the old config map.
Also these docs still reference the old name. https://github.com/kubernetes/ingress-nginx/blob/main/docs/deploy/rbac.md#namespace-permissions
Thanks for clarifying. What version are you on now? Show commands and outputs.
Thanks, Long
We are currently on version 1.0.5. We install everything via kubectl.
@rittneje I am not able to see this error message after I upgraded my ingress-nginx controller installation from 1 replica to 2 replicas.
What am I doing wrong?
I think you've misunderstood, so let me restate the issue.
When upgrading from v0.35.0, we missed updating the Role permissions to reference the new config map name. Consequently, we started seeing the aforementioned error message. While this is how we discovered the rename, the purpose of this issue is to report gaps in the documentation.
Thank you for the clarification. I think an upgrade from 0.35.0 is better suited to target version 0.50.0 (latest) or any other 0.X.X version.
% helm search repo ingress-nginx --versions | head -15
NAME CHART VERSION APP VERSION DESCRIPTION
ingress-nginx/ingress-nginx 4.0.13 1.1.0 Ingress controller for Kubernetes using NGINX a...
ingress-nginx/ingress-nginx 4.0.12 1.1.0 Ingress controller for Kubernetes using NGINX a...
ingress-nginx/ingress-nginx 4.0.11 1.1.0 Ingress controller for Kubernetes using NGINX a...
ingress-nginx/ingress-nginx 4.0.10 1.1.0 Ingress controller for Kubernetes using NGINX a...
ingress-nginx/ingress-nginx 4.0.9 1.0.5 Ingress controller for Kubernetes using NGINX a...
ingress-nginx/ingress-nginx 4.0.8 1.0.5 Ingress controller for Kubernetes using NGINX a...
ingress-nginx/ingress-nginx 4.0.6 1.0.4 Ingress controller for Kubernetes using NGINX a...
ingress-nginx/ingress-nginx 4.0.5 1.0.3 Ingress controller for Kubernetes using NGINX a...
ingress-nginx/ingress-nginx 4.0.3 1.0.2 Ingress controller for Kubernetes using NGINX a...
ingress-nginx/ingress-nginx 4.0.2 1.0.1 Ingress controller for Kubernetes using NGINX a...
ingress-nginx/ingress-nginx 4.0.1 1.0.0 Ingress controller for Kubernetes using NGINX a...
ingress-nginx/ingress-nginx 3.40.0 0.50.0 Ingress controller for Kubernetes using NGINX a...
ingress-nginx/ingress-nginx 3.39.0 0.49.3 Ingress controller for Kubernetes using NGINX a...
ingress-nginx/ingress-nginx 3.38.0 0.49.2 Ingress controller for Kubernetes using NGINX a...
The ingress-nginx controller versions 1.X.X are a migration and not really an upgrade in your use case, because of some major breaking changes. The migration and the changes are documented.
(1) But if you want to submit a docs PR, the developers would look at it.
(2) If you have the old configmap, please show k -n ingress-nginx get cm -o wide.
Would you be interested in deleting the controller completely and re-installing the latest v1.X.X? Or is this too disruptive for you?
$ kubectl get configmap -n ingress-nginx -o wide ingress-controller-leader-nginx
NAME DATA AGE
ingress-controller-leader-nginx 0 2y122d
$ kubectl get configmap -n ingress-nginx -o yaml ingress-controller-leader-nginx
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"nginx-ingress-controller-7cb8d4595f-lkjwf","leaseDurationSeconds":30,"acquireTime":"2021-11-30T15:11:07Z","renewTime":"2021-11-30T15:11:22Z","leaderTransitions":48}'
  creationTimestamp: "2019-08-12T19:07:59Z"
  name: ingress-controller-leader-nginx
  namespace: ingress-nginx
  resourceVersion: "390663892"
  selfLink: /api/v1/namespaces/ingress-nginx/configmaps/ingress-controller-leader-nginx
  uid: 8047d26b-bd34-11e9-ac7e-02bea5f43cd2
I think upgrade from 0.35.0 is better suited to the target version 0.50.0 (latest)or any other version 0.X.X .
I don't understand this point. Are you saying the upgrade from 0.50.0 to v1.0.5 is smoother? Or are you saying that upgrading from v0 to v1 is not supported? If it's the latter, then it is a bug that (a) you've left your clients with no reasonable upgrade path, and (b) none of your docs even suggest that.
The ingress-nginx controller versions 1.X.X are a migration and not really an upgrade in your use case, because of some major breaking changes. The migration and the changes are documented.
Where are they documented? The changelog only mentions a few breaking changes, none of which were relevant to us:
Also I'm not sure what the distinction between "upgrade" and "migration" is here. No matter what I am creating new pods and migrating to them.
Would you be interested in deleting the controller completely and re-installaing the latest v1.X.X. Or is this disruptive for you.
This would lead to a complete service disruption for us, which is not acceptable. I suspect most ingress-nginx users will feel the same. As a general note, it should never be the case that an upgrade, even an upgrade from one major version to the next, requires a disruption.
Your latest update is very helpful. I will try to comment a little later.
Meanwhile, can you post the output of the command k -n ingress-nginx get cm -o wide
$ kubectl -n ingress-nginx get cm -o wide
NAME DATA AGE
ingress-controller-leader 0 12d
ingress-controller-leader-nginx 0 2y122d
istio-ca-root-cert 1 433d
nginx-configuration 6 2y122d
tcp-services 0 2y122d
udp-services 0 2y122d
Ok, can you please show the commands and outputs of:
- kubectl -n ingress-nginx get sa
- kubectl -n ingress-nginx get role
- kubectl -n ingress-nginx get rolebinding
I don't have permission to read those resources in our production account, but here are the results from our staging account, which should be essentially equivalent.
$ kubectl -n ingress-nginx get sa
NAME SECRETS AGE
default 1 2y165d
nginx-ingress-serviceaccount 1 2y165d
$ kubectl -n ingress-nginx get role
NAME CREATED AT
nginx-ingress-role 2019-07-01T13:57:05Z
$ kubectl -n ingress-nginx get rolebinding
NAME ROLE AGE
nginx-ingress-role-nisa-binding Role/nginx-ingress-role 2y165d
$ kubectl -n ingress-nginx get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-ingress-controller-9b7d99b77-44jzv 2/2 Running 0 20d 10.0.152.13 ip-10-0-154-220.ec2.internal <none> <none>
pod/nginx-ingress-controller-9b7d99b77-kskcl 2/2 Running 0 20d 10.0.76.90 ip-10-0-87-206.ec2.internal <none> <none>
pod/nginx-ingress-controller-9b7d99b77-zpmpg 2/2 Running 0 20d 10.0.194.88 ip-10-0-197-87.ec2.internal <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/ingress-nginx LoadBalancer 172.20.183.222 <redacted> 80:30188/TCP,443:32649/TCP 2y165d app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
service/ingress-nginx-nlb-ip LoadBalancer 172.20.211.111 <redacted> 80:30906/TCP,443:30037/TCP 139d app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/nginx-ingress-controller 3/3 3 3 2y165d nginx-ingress-controller k8s.gcr.io/ingress-nginx/controller:v1.0.5@sha256:55a1fcda5b7657c372515fe402c3e39ad93aa59f6e4378e82acd99912fe6028d app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx,nginxpod=not-co-located
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/nginx-ingress-controller-56cb59d4cb 0 0 0 44d nginx-ingress-controller k8s.gcr.io/ingress-nginx/controller:v0.35.0@sha256:fc4979d8b8443a831c9789b5155cded454cb7de737a8b727bc2ba0106d2eae8b app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx,nginxpod=not-co-located,pod-template-hash=56cb59d4cb
replicaset.apps/nginx-ingress-controller-69955cc4b4 0 0 0 46d nginx-ingress-controller k8s.gcr.io/ingress-nginx/controller:v0.35.0@sha256:fc4979d8b8443a831c9789b5155cded454cb7de737a8b727bc2ba0106d2eae8b app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx,nginxpod=not-co-located,pod-template-hash=69955cc4b4
replicaset.apps/nginx-ingress-controller-6f5974b9f9 0 0 0 32d nginx-ingress-controller k8s.gcr.io/ingress-nginx/controller:v0.35.0@sha256:fc4979d8b8443a831c9789b5155cded454cb7de737a8b727bc2ba0106d2eae8b app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx,nginxpod=not-co-located,pod-template-hash=6f5974b9f9
replicaset.apps/nginx-ingress-controller-6f68c46445 0 0 0 44d nginx-ingress-controller k8s.gcr.io/ingress-nginx/controller:v0.35.0@sha256:fc4979d8b8443a831c9789b5155cded454cb7de737a8b727bc2ba0106d2eae8b app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx,nginxpod=not-co-located,pod-template-hash=6f68c46445
replicaset.apps/nginx-ingress-controller-77b8478b96 0 0 0 45d nginx-ingress-controller k8s.gcr.io/ingress-nginx/controller:v0.35.0@sha256:fc4979d8b8443a831c9789b5155cded454cb7de737a8b727bc2ba0106d2eae8b app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx,nginxpod=not-co-located,pod-template-hash=77b8478b96
replicaset.apps/nginx-ingress-controller-7bd48bd8f9 0 0 0 46d nginx-ingress-controller k8s.gcr.io/ingress-nginx/controller:v0.35.0@sha256:fc4979d8b8443a831c9789b5155cded454cb7de737a8b727bc2ba0106d2eae8b app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx,nginxpod=not-co-located,pod-template-hash=7bd48bd8f9
replicaset.apps/nginx-ingress-controller-84c55464b5 0 0 0 96d nginx-ingress-controller k8s.gcr.io/ingress-nginx/controller:v0.35.0@sha256:fc4979d8b8443a831c9789b5155cded454cb7de737a8b727bc2ba0106d2eae8b app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx,nginxpod=not-co-located,pod-template-hash=84c55464b5
replicaset.apps/nginx-ingress-controller-85c9594698 0 0 0 46d nginx-ingress-controller k8s.gcr.io/ingress-nginx/controller:v0.35.0@sha256:fc4979d8b8443a831c9789b5155cded454cb7de737a8b727bc2ba0106d2eae8b app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx,nginxpod=not-co-located,pod-template-hash=85c9594698
replicaset.apps/nginx-ingress-controller-9b7d99b77 3 3 3 20d nginx-ingress-controller k8s.gcr.io/ingress-nginx/controller:v1.0.5@sha256:55a1fcda5b7657c372515fe402c3e39ad93aa59f6e4378e82acd99912fe6028d app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx,nginxpod=not-co-located,pod-template-hash=9b7d99b77
replicaset.apps/nginx-ingress-controller-ccb77f9 0 0 0 68d nginx-ingress-controller k8s.gcr.io/ingress-nginx/controller:v0.35.0@sha256:fc4979d8b8443a831c9789b5155cded454cb7de737a8b727bc2ba0106d2eae8b app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx,nginxpod=not-co-located,pod-template-hash=ccb77f9
replicaset.apps/nginx-ingress-controller-d4b898d89 0 0 0 45d nginx-ingress-controller k8s.gcr.io/ingress-nginx/controller:v0.35.0@sha256:fc4979d8b8443a831c9789b5155cded454cb7de737a8b727bc2ba0106d2eae8b app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx,nginxpod=not-co-located,pod-template-hash=d4b898d89
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
horizontalpodautoscaler.autoscaling/nginx-ingress-controller Deployment/nginx-ingress-controller 6m/100m, 20%/70% 3 20 3 186d
$ kubectl -n ingress-nginx get events
LAST SEEN TYPE REASON OBJECT MESSAGE
24m Normal SuccessfullyReconciled targetgroupbinding/k8s-ingressn-ingressn-6860e5f505 Successfully reconciled
24m Normal SuccessfullyReconciled targetgroupbinding/k8s-ingressn-ingressn-ff29795505 Successfully reconciled
In that case, please edit the above message about staging and add the following:
- helm ls -A
- kubectl -n ingress-nginx get all -o wide
- kubectl -n ingress-nginx get events
We don't use helm.
Ok. Can you describe your upgrade process steps?
We have the template yaml under source control. We referenced deploy/static/provider/aws/deploy.yaml and made the applicable changes, but as I mentioned the change to the Role specifically was missed. We then applied the update via kubectl apply -f.
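For a manifest-under-source-control workflow like this, one way to surface a missed RBAC change before it bites is a dry-run diff against the live cluster. A sketch, assuming your locally tracked manifest is called deploy.yaml (the file name is hypothetical):

```shell
# Preview every change an apply would make, including Role/ClusterRole
# rule changes, before actually applying them.
kubectl diff -f deploy.yaml
```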
Can you confirm you have the repeated error log in staging as well? If yes, then show kubectl get sa, kubectl describe sa, kubectl get role, kubectl describe role, kubectl get rolebinding, kubectl describe rolebinding.
Yes, we have the same error log in staging.
Can you provide some insight on what you are intending to accomplish by having me run these various commands? The purpose of this issue is not to diagnose why we are seeing that error. We already know it's because v1 started using a new config map name, and we forgot to update our Role permissions accordingly.
Again, this is a documentation issue.
- Part of the docs are still referencing the old config map name and need to be fixed. https://github.com/kubernetes/ingress-nginx/blob/main/docs/deploy/rbac.md#namespace-permissions
- What should be done with the old config map that was created by the older version of ingress-nginx? I assume we can just delete it but this should be included in the release notes.
(1) I agree with you that the doc section is not relevant to v1.X.X of the controller. I am trying to think of a solution, because we are supporting v0.X.X until the end of January and I don't know when the change hit version 0.X.X. Please submit a docs PR, because ultimately the current text will NOT be relevant in a couple of months, so we might as well make the change now.
(2) I can't predict the impact on your production cluster, so I am trying to map the current role and rolebinding to the configmap in question. An old pod must be spewing those messages, because if you had a pod with the changed code, then it would have the RBAC privs & auths needed. Or the old cm needs to be deleted, as per your guess. The error message itself is throwing me off, because it refers to the new name of the cm and not the old one. In summary, I need to look at all the cm, all the roles, all the rolebindings, and all the running pods, and then get an idea for the next action item to resolve this.
An old pod must be spewing those messages because if you had a pod with the changed code, then it will have the RBAC privs & auths needed.
This is not true. The pods (really the Deployment) and the Role are two separate resources in kube. We updated the Deployment to use the 1.0.5 image, but neglected to update the Role, so the service account does not have permission to update the config map with the new name (ingress-controller-leader). Consequently, we get those errors from the 1.0.5 image.
In other words, our Role currently says this:
- apiGroups:
    - ''
  resources:
    - configmaps
  resourceNames:
    - ingress-controller-leader-nginx
  verbs:
    - get
    - update
but it is supposed to say this:
- apiGroups:
    - ''
  resources:
    - configmaps
  resourceNames:
    - ingress-controller-leader
  verbs:
    - get
    - update
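A quick way to check which config map name the live Role actually grants (the role name below is the one from the staging output earlier in this thread; adjust for your install):

```shell
# Print the resourceNames granted by the live Role. If this still shows
# ingress-controller-leader-nginx, the Role was not updated for v1.
kubectl -n ingress-nginx get role nginx-ingress-role -o yaml | grep -A1 resourceNames
```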
I wonder if you can live edit a role with kubectl edit.
I would assume so, but regardless I can confirm that applying the change normally via kubectl apply stops the error log without needing to restart the pods.
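That matches how RBAC works: it is evaluated on every API request, so an updated Role takes effect immediately, no rollout needed. A sketch of the sequence (role.yaml is a placeholder for your corrected manifest, and the deployment name is the one from this thread):

```shell
# Apply the corrected Role (role.yaml is a hypothetical file name).
kubectl apply -f role.yaml

# The "forbidden" leader-election errors should stop appearing right away,
# without restarting the controller pods.
kubectl -n ingress-nginx logs deploy/nginx-ingress-controller --since=5m | grep -i forbidden
```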
Ok, thanks. It's not a good thing to experience this issue; I acknowledge that and really appreciate the feedback. But I still don't see a strong case for calling this a critical mistake in the docs. There is always room for improvement, though, and this one could be listed as a "gotcha" to warn users about if they follow your upgrade process. So please update this issue if you are interested in submitting a docs PR; I am not sure what the precise content would be if I created a docs PR for this myself.
Also, thanks for confirming that you can apply a change and stop the error messages. Beyond that, I don't know if I can recreate this by first installing v0.35.0, then upgrading to v1.X.X, and seeing two conflicting configmaps on AWS. My guess is that the new code will not be looking for the old-name configmap, so it would be OK to delete it and expect no impact. But at the same time, as per the old adage "if it ain't broken, don't fix it", it's your choice to delete that old cm or leave it there until the next maintenance window, mostly because it's going to take a while for someone to reproduce this and be dead sure of the impact.
We've run into this too. There's actually even a note in the source stating the change: https://github.com/kubernetes/ingress-nginx/blob/ab4fa4246797a33bc7b45b45097a91086fe9ad48/internal/ingress/controller/nginx.go#L261-L264. Unfortunately, there's no history to it, as it's all just one giant commit.
Previously, the configmap was named ${electionID}-${ingressClass}, but now it's just the election ID. In our case, this caused problems running multiple ingress controllers with different classes in the same namespace, which used to be fine without setting the election ID in 0.x. With 1.x, the election ID must be explicitly set to be different, otherwise some ingresses don't get their status updated.
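For Helm users in that multi-controller situation, the election ID can be set explicitly via the chart's controller.electionID value. A sketch (the release name, class name, and election ID below are made-up examples):

```shell
# Give each controller its own election ID so their leader-election
# resources no longer collide in the shared namespace.
helm upgrade ingress-internal ingress-nginx/ingress-nginx \
  --set controller.ingressClassResource.name=nginx-internal \
  --set controller.electionID=ingress-controller-leader-internal
```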
Nice catch. It would be nice if someone could change the code to make it work as it did before, unless there is a reason it's this way, in which case it should perhaps be documented...
@rittneje: I solved this by manually editing the ClusterRole and adding an additional entry for the OLD leader name right after the new ConfigMap was created.
So step by step:
1) Edit the ClusterRole and add a new object to the rules
2) Update the helm release / yaml
An example below:
  - verbs:
      - get
      - update
    apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader
+ - verbs:
+     - get
+     - update
+   apiGroups:
+     - ''
+   resources:
+     - configmaps
+   resourceNames:
+     - ingress-controller-leader-nginx
This is not a solution, but at least a workaround. It shouldn't cause you any disruption.
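The same edit can be scripted instead of done by hand. A sketch using kubectl patch (the ClusterRole name ingress-nginx is the one from this thread; verify yours first):

```shell
# Append a rule for the old config map name to the existing ClusterRole.
kubectl patch clusterrole ingress-nginx --type=json -p='[
  {"op": "add", "path": "/rules/-", "value": {
    "apiGroups": [""],
    "resources": ["configmaps"],
    "resourceNames": ["ingress-controller-leader-nginx"],
    "verbs": ["get", "update"]
  }}
]'
```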
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Is this issue resolved now that we have moved leader election to leases?
https://github.com/kubernetes/ingress-nginx/pull/8733
@tao12345666333 I think we may need to update the docs as well https://github.com/kubernetes/ingress-nginx/blob/main/docs/deploy/rbac.md#namespace-permissions that may have been missed in the 1.3.0 release.
/priority backlog /triage accepted /kind documentation /lifecycle frozen /assign @tao12345666333
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
The issue appears to still be present. Every time I've upgraded ingress-nginx over the last year, I've had to go into the Cluster Roles and manually add:
- verbs:
    - get
    - update
  apiGroups:
    - ''
  resources:
    - configmaps
  resourceNames:
    - ingress-controller-leader-nginx
...to the bottom of the file and save it before the upgrade will complete.
@darkpixel can you post the diff between https://github.com/kubernetes/ingress-nginx/blob/main/deploy/static/provider/cloud/deploy.yaml and your file
Here's my blob. It's currently working.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx
  uid: e702d66e-98ea-4ee3-867b-054fd3974f7f
  resourceVersion: '370765475'
  creationTimestamp: '2021-10-29T19:08:18Z'
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.6
    helm.sh/chart: ingress-nginx-4.9.1
    k8slens-edit-resource-version: v1
  annotations:
    meta.helm.sh/release-name: ingress-nginx
    meta.helm.sh/release-namespace: default
  managedFields:
    - manager: helm
      operation: Update
      apiVersion: rbac.authorization.k8s.io/v1
      time: '2024-02-16T21:27:13Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:meta.helm.sh/release-name: {}
            f:meta.helm.sh/release-namespace: {}
          f:labels:
            .: {}
            f:app.kubernetes.io/instance: {}
            f:app.kubernetes.io/managed-by: {}
            f:app.kubernetes.io/name: {}
            f:app.kubernetes.io/part-of: {}
            f:app.kubernetes.io/version: {}
            f:helm.sh/chart: {}
    - manager: node-fetch
      operation: Update
      apiVersion: rbac.authorization.k8s.io/v1
      time: '2024-02-21T19:13:55Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            f:k8slens-edit-resource-version: {}
        f:rules: {}
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/ingress-nginx
rules:
  - verbs:
      - list
      - watch
    apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
      - namespaces
  - verbs:
      - list
      - watch
    apiGroups:
      - coordination.k8s.io
    resources:
      - leases
  - verbs:
      - get
    apiGroups:
      - ''
    resources:
      - nodes
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - ''
    resources:
      - services
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
  - verbs:
      - create
      - patch
    apiGroups:
      - ''
    resources:
      - events
  - verbs:
      - update
    apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
  - verbs:
      - list
      - watch
      - get
    apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
  - verbs:
      - get
      - update
    apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-nginx
  - verbs:
      - get
      - update
    apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader-nginx
So the below is what you added at https://github.com/kubernetes/ingress-nginx/blob/7edcdeefe5638fd47ae80a9a8b48884bef3e35c8/deploy/static/provider/cloud/deploy.yaml#L220
- verbs:
    - get
    - update
  apiGroups:
    - ''
  resources:
    - configmaps
  resourceNames:
    - ingress-nginx
- verbs:
    - get
    - update
  apiGroups:
    - ''
  resources:
    - configmaps
  resourceNames:
    - ingress-controller-leader-nginx
@darkpixel can you explicitly provide the below info:
@tao12345666333 @strongjz @rikatz is this a valid claim?
This is an old kubernetes cluster that has been running for almost 3 years. I don't recall the versions I started with. I'm fairly certain if I blew away my ingress / load balancer and re-created it the issues would go away, but that would be a fairly significant outage.
EDIT:
I installed ingress using:
helm install ingress-nginx ingress-nginx/ingress-nginx --set=controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-name"=lbs --set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-enable-proxy-protocol"="true" --set=controller.service.externalTrafficPolicy=Local --set=controller.kind=DaemonSet --set=controller.ingressClassResource.name=nginx --set=controller.ingressClassResource.default=true --wait --debug
Ok. Need to find a way to reproduce.
The original issue description, about documenting the leader election, has been addressed. Since there is no traction on this issue, and the later comments deviate substantially from the original issue description, this issue will actually confuse readers on its original topic.
There are too many open issues without traction and without accurate tracking value-add, so I will close this issue for now. If the original creator of the issue thinks there is still a problem with the documentation of the leader-election feature, then please re-open the issue and post the data relevant to the problem in the docs.
Thanks.
/close
@longwuyuan: Closing this issue.
Apparently in the v1 release you renamed the ingress-controller-leader-nginx config map to just ingress-controller-leader. However, this change was not documented. https://github.com/kubernetes/ingress-nginx/blob/f207702c3080d679ae86307eafbc16b0d3098e6c/deploy/static/provider/aws/deploy.yaml#L194
Without updating the role, we now get errors in our logs during leader election.
Also, do we need to delete the old ingress-controller-leader-nginx config map via kubectl if it still exists?
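For what it's worth, a cautious cleanup once no v0.x controller pods remain could look like the following (this assumes the old config map is truly unused by the v1 controller, which the docs should confirm; back up first):

```shell
# Save a copy, then remove the stale leader-election config map.
kubectl -n ingress-nginx get configmap ingress-controller-leader-nginx -o yaml > old-leader-cm-backup.yaml
kubectl -n ingress-nginx delete configmap ingress-controller-leader-nginx
```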
/kind documentation