kubernetes/cloud-provider-aws

Cloud provider for AWS
https://cloud-provider-aws.sigs.k8s.io/
Apache License 2.0

Newly autoscaled worker nodes not added to the targets of the Network Load Balancer #824

Closed · Venkat-pulagam closed this issue 2 months ago

Venkat-pulagam commented 7 months ago

Hello Everyone,

We have a kops-managed cluster with an ingress controller installed to expose services over the internet. The ingress controller provisions a Network Load Balancer with external traffic policy set to "Local". The instance group uses AWS Spot Instances as worker nodes, so due to Spot interruptions or workload demand the autoscaling group regularly adds new worker nodes to the k8s cluster.
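For reference, this kind of setup is typically driven by a `LoadBalancer` Service like the sketch below (a minimal example, assuming the `aws-load-balancer-type: nlb` annotation handled by cloud-provider-aws; the name, namespace, and selector are hypothetical placeholders, not taken from this cluster):

```sh
# Hypothetical sketch of the Service the ingress controller would own:
# an NLB-backed LoadBalancer Service with externalTrafficPolicy: Local.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller        # placeholder name
  namespace: ingress-nginx              # placeholder namespace
  annotations:
    # Ask cloud-provider-aws for an NLB instead of a Classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local          # only nodes with local endpoints pass health checks
  selector:
    app.kubernetes.io/name: ingress-nginx   # placeholder selector
  ports:
    - name: https
      port: 443
      targetPort: 443
EOF
```

With instance targets and `externalTrafficPolicy: Local`, every worker node is still expected to be registered in the target group; the NLB health checks then decide which nodes actually receive traffic.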

What happened: After upgrading the Kubernetes version from v1.25.4 to v1.27.8, a newly provisioned worker node is not registered in the target group of the ingress controller's Network Load Balancer (NLB) until yet another worker node joins the cluster or an old worker node is terminated.
We observe the following warning in the aws-cloud-controller-manager pod logs, indicating that the newly joined node failed to get a provider ID and so could not be registered in the NLB target group: `node "i-0d6a5fa94ff4xxx" did not have ProviderID set`.
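A quick way to confirm this state is to inspect the node object directly: a node the cloud controller never initialized has an empty `.spec.providerID` and, with `--cloud-provider=external`, usually still carries the `node.cloudprovider.kubernetes.io/uninitialized` taint. A minimal check, using the truncated node name from the log as a placeholder (adjust names and label selectors to your deployment):

```sh
# Empty output here means the CCM never set a provider ID on the node.
kubectl get node i-0d6a5fa94ff4xxx -o jsonpath='{.spec.providerID}'; echo

# Nodes still waiting on cloud-provider initialization keep this taint.
kubectl get node i-0d6a5fa94ff4xxx -o jsonpath='{.spec.taints}' \
  | grep node.cloudprovider.kubernetes.io/uninitialized \
  || echo "uninitialized taint not present"

# Grep the CCM logs for the warning quoted above (the label selector is a
# guess; match it to however aws-cloud-controller-manager is labeled).
kubectl -n kube-system logs -l k8s-app=aws-cloud-controller-manager \
  | grep 'did not have ProviderID set'
```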

What you expected to happen: Autoscaled/newly provisioned worker nodes should be added to the target group of the Network Load Balancer (NLB) as soon as they join the kops k8s cluster; the NLB targets should be refreshed whenever a new worker node joins the Kubernetes cluster.
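On the AWS side, the expected end state can be checked with the CLI: shortly after joining, the new instance should appear as a registered target (healthy or unhealthy, depending on local endpoints). The target group ARN below is a placeholder:

```sh
# Placeholder ARN; look it up from the NLB the Service provisioned, e.g.
#   aws elbv2 describe-target-groups --load-balancer-arn <nlb-arn>
TG_ARN="arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/EXAMPLE/0123456789abcdef"

# Every worker node should be listed here shortly after it joins; with
# externalTrafficPolicy: Local, nodes without local endpoints show as
# "unhealthy" but must still be registered as targets.
aws elbv2 describe-target-health --target-group-arn "$TG_ARN" \
  --query 'TargetHealthDescriptions[].[Target.Id,TargetHealth.State]' \
  --output table
```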

How to reproduce it (as minimally and precisely as possible): Upgrade a kops Kubernetes cluster from 1.25.x to 1.27.x, then let the autoscaling group add a new worker node to the cluster.

Anything else we need to know?:

Environment:

/kind bug

k8s-ci-robot commented 7 months ago

This issue is currently awaiting triage.

If cloud-provider-aws contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
k8s-triage-robot commented 4 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/lifecycle stale

k8s-triage-robot commented 3 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/lifecycle rotten

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/close not-planned

k8s-ci-robot commented 2 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/cloud-provider-aws/issues/824#issuecomment-2156424442):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.