Open mml opened 6 years ago
Can we make this more efficient? At the very least, can we please stop logging these as errors? They are expected. https://github.com/kubernetes/kubernetes/blob/release-1.8/pkg/cloudprovider/providers/gce/gce_instances.go#L483
The error message has already been removed by #54720 in 1.9.
As for making this more efficient, I looked at the code briefly, and it seems the best way to do this is to stop relying on the "node name" to find information about the GCE instance in the cloudprovider code. The preferred way should be using the ProviderID (aka InstanceID), which embeds the zonal information. Most of the code (if not all) in the node "lifecycle" controller only falls back to using names on error. I did notice that the node IPAM controller relies on the node name more. +@bowei for the code in the IPAM controller.
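For illustration, here is a minimal sketch (not the actual cloud-provider code) of why the ProviderID avoids the per-zone search: assuming the usual gce://<project>/<zone>/<instance-name> format, the zone can be parsed straight out of the ID and a single zonal GET issued. All names below are hypothetical.

```go
package main

import (
	"fmt"
	"strings"
)

// parseGCEProviderID splits a providerID of the assumed form
// "gce://<project>/<zone>/<instance-name>" into its parts.
func parseGCEProviderID(providerID string) (project, zone, name string, err error) {
	const prefix = "gce://"
	if !strings.HasPrefix(providerID, prefix) {
		return "", "", "", fmt.Errorf("unexpected providerID %q", providerID)
	}
	parts := strings.Split(strings.TrimPrefix(providerID, prefix), "/")
	if len(parts) != 3 {
		return "", "", "", fmt.Errorf("unexpected providerID %q", providerID)
	}
	return parts[0], parts[1], parts[2], nil
}

func main() {
	project, zone, name, err := parseGCEProviderID("gce://my-project/us-central1-b/kubernetes-node-abc1")
	if err != nil {
		panic(err)
	}
	// With the zone known up front, one "instances.get" call in that zone
	// replaces scanning every managed zone by name.
	fmt.Println(project, zone, name)
}
```

The point is only that the zonal information is already embedded in the ID, so no cross-zone search (and no expected 404s) should be needed when the providerID is populated.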
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
/reopen
@mml: Reopening this issue.
/remove-lifecycle rotten
@cheftako relevant to your interests.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/lifecycle frozen
/cc @jpbetz @cheftako /triage accepted
/cc
@cheftako do you have plans to improve this by storing the instanceID? I'm interested from the perspective of better supporting preemptible VMs (PVMs) on GKE. When a node is recreated quickly with the same name, it creates some confusion for the cluster.
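As a hypothetical sketch of that idea (nothing that exists in the tree today): if the controller recorded the numeric instance ID next to the node name, a PVM recreated quickly under the same name would be distinguishable from its previous incarnation.

```go
package main

import "fmt"

// nodeRecord is a hypothetical bookkeeping entry: the node name can be reused
// by a recreated preemptible VM, but the numeric instance ID is unique per VM.
type nodeRecord struct {
	name       string
	instanceID uint64
}

// sameVM reports whether an observed instance is the VM we already know about,
// or a fresh VM that merely reuses the old node name.
func sameVM(known nodeRecord, observedName string, observedID uint64) bool {
	return known.name == observedName && known.instanceID == observedID
}

func main() {
	known := nodeRecord{name: "gke-pool-pvm-node-1", instanceID: 1111111111}
	// Same name, different instance ID: the PVM was preempted and recreated.
	fmt.Println(sameVM(known, "gke-pool-pvm-node-1", 2222222222)) // false
}
```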
This issue looks like it is more related to GCE/GKE and the cloud provider.
/remove-sig node
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
This issue is currently awaiting triage.
If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Looking at getInstanceByName https://github.com/kubernetes/kubernetes/blob/release-1.8/pkg/cloudprovider/providers/gce/gce_instances.go#L461, we do a very inefficient search for instances by name. This is free when there is only one zone, and cheap if the product
(number of zones) x (number of nodes)
is small, but it can get out of hand quickly with large, multi-zone clusters. It not only wastes effort, but it generates a noisy signal on the cloud provider end, as kube-controller-manager starts racking up huge numbers of 404s.
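To make the cost concrete, here is an illustrative sketch (hypothetical helper names, not the real gce_instances.go code) of the by-name lookup pattern: every managed zone is probed in turn, and each miss surfaces as an expected 404 from the GCE API.

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("instance not found (HTTP 404)")

// getInstanceInZone stands in for an "instances.get" call scoped to one zone.
func getInstanceInZone(zone, name string) (string, error) {
	if zone == "us-central1-c" && name == "node-42" {
		return "selfLink-for-node-42", nil
	}
	return "", errNotFound
}

// getInstanceByName probes each managed zone in turn; for N nodes and Z zones
// that is up to N*Z API calls, most of which are expected 404 misses.
func getInstanceByName(zones []string, name string) (string, error) {
	for _, zone := range zones {
		inst, err := getInstanceInZone(zone, name)
		if errors.Is(err, errNotFound) {
			continue // expected miss: the node simply lives in another zone
		}
		if err != nil {
			return "", err
		}
		return inst, nil
	}
	return "", errNotFound
}

func main() {
	zones := []string{"us-central1-a", "us-central1-b", "us-central1-c"}
	inst, err := getInstanceByName(zones, "node-42")
	fmt.Println(inst, err)
}
```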
cc @cheftako /kind bug /sig node /area nodecontroller