Closed: wo9999999999 closed this issue 7 months ago
/assign
I was able to reproduce this. The issue is that we're trying to bring up two CSI pods, but we only have two nodes (one control plane, one worker), and one of them is tainted.
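For anyone hitting this, a quick way to confirm the scheduling conflict is to check the pending pod's events and the node taints. This is a sketch assuming the AWS EBS CSI driver's controller pods (labelled `app=ebs-csi-controller` in the upstream manifests) are the ones stuck:

```sh
# Show why the second CSI controller pod cannot be scheduled; a Pending pod
# with an "untolerated taint" event in its output confirms the problem.
kubectl -n kube-system describe pod -l app=ebs-csi-controller

# List the taints on each node; the control-plane node typically carries
# node-role.kubernetes.io/control-plane:NoSchedule.
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
```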
I was having the same issue. It looks like it has been fixed and is part of the latest alpha release, v1.29.0-alpha.1. I was able to get a 2-node (1 master, 1 worker) cluster up using that version.
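If anyone wants to try the same thing, the Kubernetes version can be pinned at cluster creation with kops' `--kubernetes-version` flag. A minimal sketch (cluster name and zone here are placeholders, not from the original report):

```sh
kops create cluster \
  --name=my.example.com \
  --zones=us-east-1a \
  --kubernetes-version=v1.29.0-alpha.1 \
  --yes
```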
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/kind bug

1. What kops version are you running? The command `kops version` will display this information.
   1.27.0
2. What Kubernetes version are you running? `kubectl version` will print the version if a cluster is running, or provide the Kubernetes version specified as a kops flag.
   Client Version: v1.28.1
   Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
3. What cloud provider are you using?
   aws
4. What commands did you run? What is the simplest way to reproduce this issue?
   `kops create cluster`
5. What happened after the commands executed?
   Cluster validation keeps failing.
6. What did you expect to happen?
   The cluster starts successfully.
7. Please provide your cluster manifest. Execute `kops get --name my.example.com -o yaml` to display your cluster manifest. You may want to remove your cluster name and other sensitive information.
8. Please run the commands with the most verbose logging by adding the `-v 10` flag. Paste the logs into this report, or in a gist and provide the gist link here.
9. Anything else we need to know?
   describe the pending pod (see the example commands below)
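For steps 8 and 9, something along these lines captures the relevant output. This is a sketch; the cluster name matches the template's placeholder, and the pod name is whichever pod is stuck in Pending:

```sh
# Step 8: re-run validation with maximum verbosity and save the log.
kops validate cluster --name my.example.com -v 10 > kops-validate.log 2>&1

# Step 9: find the pending pod and describe it so its scheduling
# events (e.g. untolerated taints) are visible in the report.
kubectl get pods --all-namespaces --field-selector=status.phase=Pending
kubectl -n kube-system describe pod <pending-pod-name>
```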