kubernetes-sigs / cluster-capacity

Cluster capacity analysis
Apache License 2.0

Not able to detect the failure message for each node #137

Closed: peter-wangxu closed this issue 3 years ago

peter-wangxu commented 3 years ago

```
$ ./cluster-capacity --kubeconfig=k8s.kubeconfig --podspec 1.yaml --verbose
test-0 pod requirements:

The cluster can schedule 0 instance(s) of the pod test-0.

Termination reason: Unschedulable: 0/21 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had taint {node.k8s.test/lifecycle: offline}, that the pod didn't tolerate, 18 node(s) didn't match Pod's node affinity.
```

Is there a way to see which node produced which failure reason, instead of this aggregated message?

ingvagabund commented 3 years ago

Kube-scheduler aggregates the per-node reasons into a single message; currently there is no way to tell the failure message for a specific node.

/close
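For illustration, here is a minimal Go sketch of the aggregation step described above (simplified types and hypothetical node names; this is not the actual kube-scheduler source): identical per-node failure reasons are grouped and counted, so the node-to-reason mapping is discarded before cluster-capacity ever sees the message.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// aggregate collapses a per-node failure map into one message, roughly the
// way kube-scheduler builds its "0/N nodes are available: ..." string:
// identical reasons are grouped and counted, and the node names are dropped.
// Illustrative sketch only; not the scheduler's actual implementation.
func aggregate(nodeReasons map[string]string, totalNodes int) string {
	counts := map[string]int{}
	for _, reason := range nodeReasons {
		counts[reason]++
	}
	parts := make([]string, 0, len(counts))
	for reason, n := range counts {
		parts = append(parts, fmt.Sprintf("%d node(s) %s", n, reason))
	}
	sort.Strings(parts) // deterministic output order
	return fmt.Sprintf("0/%d nodes are available: %s.",
		totalNodes, strings.Join(parts, ", "))
}

func main() {
	// Hypothetical node names; the node-to-reason mapping is exactly what
	// this issue asks to recover.
	reasons := map[string]string{
		"node-1": "didn't find available persistent volumes to bind",
		"node-2": "didn't match pod affinity/anti-affinity",
		"node-3": "didn't match Pod's node affinity",
		"node-4": "didn't match Pod's node affinity",
	}
	fmt.Println(aggregate(reasons, 21))
	// By this point the per-node association is gone, which is why
	// cluster-capacity can only print the aggregated Termination reason.
}
```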

k8s-ci-robot commented 3 years ago

@ingvagabund: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/cluster-capacity/issues/137#issuecomment-822379660):

> Kube-scheduler aggregates the per-node reasons into a single message; currently there is no way to tell the failure message for a specific node.
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.