Closed: datavisorhenryzhao closed this issue 6 months ago.
This issue is currently awaiting triage.
If cloud-provider-aws contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
The node.spec.providerID is "aws", but aws-cloud-provider expects 'providerID: aws:///us-east-1a/i-xxxx'.
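Not from the original report, but for anyone checking the same symptom, the providerID the kubelet registered can be inspected directly on the Node objects:

```sh
# Print each node name together with its registered providerID.
# The AWS cloud controller manager expects the form aws:///<availability-zone>/<instance-id>.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.providerID}{"\n"}{end}'
```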
I find that when I start the kubelet with "--cloud-provider=external" on the master and worker nodes, node.spec.providerID looks like "aws:///region/instanceid", and the AWS cloud controller does not crash.
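A minimal sketch of how that kubelet flag is typically passed through kubeadm; the file name and discovery values below are placeholders, not taken from this thread:

```yaml
# Hypothetical join-config.yaml: pass --cloud-provider=external to the kubelet via kubeadm.
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external   # kubelet leaves providerID initialization to the external CCM
discovery:
  bootstrapToken:
    apiServerEndpoint: "<control-plane-endpoint>:6443"  # placeholder
    token: "<bootstrap-token>"                          # placeholder
    unsafeSkipCAVerification: true                      # placeholder; use caCertHashes in practice
```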
@datavisorhenryzhao Are you still seeing this issue?
The crash issue has been fixed in https://github.com/kubernetes/cloud-provider-aws/pull/605. I will work on backporting the fix to older versions.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
This is resolved across all the active release branches.
What happened:
k8s cluster: 1.27.6
master node: kubeadm_config.yaml, then run kubeadm join
worker node: run kubeadm join
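Not part of the original report: one way to confirm which --cloud-provider value kubeadm actually handed to the kubelet on each node is to check the flags file kubeadm writes and the running process:

```sh
# kubeadm records the kubelet flags it generates here (KUBELET_KUBEADM_ARGS)
cat /var/lib/kubelet/kubeadm-flags.env

# Confirm the flag on the running kubelet process
ps -ef | grep '[k]ubelet' | tr ' ' '\n' | grep -- '--cloud-provider'
```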
cluster info:
aws cloud controller crash:
What you expected to happen: aws-cloud-provider should not crash.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version): 1.27.6
- Kernel (e.g. uname -a): Linux ip-10-142-1-183 6.2.0-1015-aws #15~22.04.1-Ubuntu SMP Fri Oct

/kind bug