kubernetes / kubernetes

Production-Grade Container Scheduling and Management
https://kubernetes.io
Apache License 2.0

GCE Node Controller is very inefficient with multiple zones #59893

Open mml opened 6 years ago

mml commented 6 years ago

Looking at getInstanceByName https://github.com/kubernetes/kubernetes/blob/release-1.8/pkg/cloudprovider/providers/gce/gce_instances.go#L461, we do a very inefficient search for instances by name: the lookup probes each zone in turn until the instance is found. This is free when there is only one zone, and cheap when the product of (number of zones) x (number of nodes) is small, but it can get out of hand quickly with large, multi-zone clusters.
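For illustration, here is a minimal Go sketch of that lookup pattern (the zone list, helper names, and in-memory map below are hypothetical, not the actual gce_instances.go code): each miss in a zone is a real API call that comes back as a 404, so a cluster with Z zones and N nodes can issue on the order of Z x N lookups per pass.

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for a GCE 404 response.
var errNotFound = errors.New("instance not found")

// getInstanceInZone is a stand-in for the per-zone instances.get call.
func getInstanceInZone(zone, name string, instances map[string]map[string]bool) (string, error) {
	if instances[zone][name] {
		return zone + "/" + name, nil
	}
	return "", errNotFound // every miss here is a real API call and a 404
}

// getInstanceByName probes each managed zone in turn, so resolving all N
// node names across Z zones costs up to Z*N calls per sync.
func getInstanceByName(name string, zones []string, instances map[string]map[string]bool) (string, error) {
	for _, zone := range zones {
		if inst, err := getInstanceInZone(zone, name, instances); err == nil {
			return inst, nil
		}
	}
	return "", errNotFound
}

func main() {
	zones := []string{"us-central1-a", "us-central1-b", "us-central1-c"}
	instances := map[string]map[string]bool{
		"us-central1-c": {"node-1": true},
	}
	// Two zones return 404 before the instance is found in the third.
	fmt.Println(getInstanceByName("node-1", zones, instances))
}
```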

It not only wastes effort, but it generates a noisy signal on the cloud provider end, as kube-controller-manager starts racking up huge numbers of 404s.

Can we make this more efficient? At the very least, can we please stop logging these as errors? They are expected. https://github.com/kubernetes/kubernetes/blob/release-1.8/pkg/cloudprovider/providers/gce/gce_instances.go#L483

cc @cheftako

/kind bug
/sig node
/area nodecontroller

yujuhong commented 6 years ago

Can we make this more efficient? At the very least, can we please stop logging these as errors? They are expected. https://github.com/kubernetes/kubernetes/blob/release-1.8/pkg/cloudprovider/providers/gce/gce_instances.go#L483

The error message was already removed by #54720 in 1.9.

As for making this more efficient, I looked at the code briefly, and it seems the best way to do this is to stop relying on the node name to find information about the GCE instance in the cloud provider code. The preferred way would be to use the ProviderID (aka InstanceID), which embeds the zone information. Most of the code (if not all) in the node "lifecycle" controller only falls back to using names on error. I did notice that the node IPAM controller relies on the node name more. +@bowei for the code in the IPAM controller.
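For context, a GCE ProviderID has the form `gce://<project>/<zone>/<instance-name>`, so the zone can be read straight out of it instead of probing every zone for the name. The parsing helper below is a rough sketch for illustration, not the cloud provider's actual implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// parseGCEProviderID splits an ID of the form
// "gce://<project>/<zone>/<instance-name>" into its parts.
func parseGCEProviderID(providerID string) (project, zone, name string, err error) {
	trimmed := strings.TrimPrefix(providerID, "gce://")
	parts := strings.Split(trimmed, "/")
	if trimmed == providerID || len(parts) != 3 {
		return "", "", "", fmt.Errorf("unexpected GCE providerID %q", providerID)
	}
	return parts[0], parts[1], parts[2], nil
}

func main() {
	project, zone, name, err := parseGCEProviderID("gce://my-project/us-central1-b/my-node")
	if err != nil {
		panic(err)
	}
	// With the zone in hand, a single correctly targeted instances.get is
	// enough; no multi-zone scan, no expected 404s.
	fmt.Println(project, zone, name)
}
```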

fejta-bot commented 6 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

fejta-bot commented 6 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten
/remove-lifecycle stale

fejta-bot commented 6 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/close

mml commented 6 years ago

/reopen

k8s-ci-robot commented 6 years ago

@mml: Reopening this issue.

In response to [this](https://github.com/kubernetes/kubernetes/issues/59893#issuecomment-422944692):

> /reopen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

mml commented 6 years ago

/remove-lifecycle rotten

mml commented 6 years ago

@cheftako relevant to your interests.

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

bowei commented 5 years ago

/lifecycle frozen

cheftako commented 3 years ago

/cc @jpbetz @cheftako

/triage accepted

SergeyKanzhelev commented 3 years ago

/cc

@cheftako do you have plans to improve this by storing the instanceID? I'm interested from the perspective of better supporting PVMs on GKE. When a node is recreated quickly with the same name, it creates some confusion for the cluster.
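To make the PVM concern concrete, here is a hypothetical sketch of the disambiguation that storing the instance ID would allow: a preemptible VM recreated under the same name gets a new numeric instance ID, so comparing the stored ID with the current one tells the controller it is now looking at a different machine. The type and function names below are illustrative only, not existing cloud-provider code.

```go
package main

import "fmt"

type instanceRecord struct {
	Name string
	ID   uint64 // GCE numeric instance ID, unique per VM incarnation
}

// sameInstance reports whether the node still refers to the same VM,
// not merely a VM that reuses the same name.
func sameInstance(stored, current instanceRecord) bool {
	return stored.Name == current.Name && stored.ID == current.ID
}

func main() {
	before := instanceRecord{Name: "gke-pvm-node-1", ID: 1111111111111111111}
	after := instanceRecord{Name: "gke-pvm-node-1", ID: 2222222222222222222}
	fmt.Println(sameInstance(before, after)) // false: same name, new machine
}
```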

adisky commented 3 years ago

This issue looks like it is more related to GCE/GKE and the cloud provider.

/remove-sig node

k8s-triage-robot commented 1 year ago

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

k8s-ci-robot commented 1 year ago

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.