Closed: raghavan-arvind closed this issue 1 month ago
This issue is currently awaiting triage.
SIG CLI takes the lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
I'm happy to work on this, but I would need some feedback: kubectl get pod isn't exactly made for programmatic consumption, but I wouldn't be surprised if many people are writing scripts like kubectl get pod | grep <pod name> | cut -d' ' -f2. Could you please provide the reproduction steps? I'm not sure kubectl get pods reports containers as ready when the containers are actually failing.
Furthermore, I don't think we'd want to change this output as it is clearly not backwards compatible.
Arda is right, adding another value here would be confusing in the future. Maybe you can explain why the pod is not ready in your scenario and how to make the API flag it as not running, which I believe you can do by adding some kind of probe (if applicable).
Does the existing -o wide output with the READINESS GATES column already provide this information, as mentioned here?
kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE READINESS GATES
nginx-test-5744b9ff84-7ftl9 1/1 Running 0 81s 10.1.2.3 ip-10-1-2-3.ec2.internal 0/1
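Rather than scraping the table output, a script can consume kubectl get pod <name> -o json and compute the same "satisfied/total" figure from the pod's readinessGates and status conditions. A minimal sketch, assuming that JSON shape; the helper name and the sample condition type are illustrative:

```python
def readiness_gate_summary(pod: dict) -> str:
    """Return 'satisfied/total' for a pod's readiness gates, mirroring
    the READINESS GATES column of `kubectl get pod -o wide`."""
    # Gate condition types declared on the pod spec.
    gates = [g["conditionType"]
             for g in pod.get("spec", {}).get("readinessGates", [])]
    # Current status of every condition reported on the pod.
    conditions = {c["type"]: c["status"]
                  for c in pod.get("status", {}).get("conditions", [])}
    satisfied = sum(1 for g in gates if conditions.get(g) == "True")
    return f"{satisfied}/{len(gates)}"

# Example pod as it might appear in `kubectl get pod <name> -o json`
# (the condition type below is a hypothetical AWS target-group gate):
pod = {
    "spec": {
        "readinessGates": [
            {"conditionType": "target-health.elbv2.k8s.aws/my-tg"},
        ],
    },
    "status": {
        "conditions": [
            {"type": "target-health.elbv2.k8s.aws/my-tg", "status": "False"},
            {"type": "Ready", "status": "False"},
        ],
    },
}
print(readiness_gate_summary(pod))  # prints "0/1"
```

This avoids depending on column positions in the human-readable table, which the comment above notes is not a stable programmatic interface.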
/close Because this information is available in the wide output format (and thus in the describe output as well), I'm going to close this, since the information is already obtainable.
@mpuckett159: Closing this issue.
When you run kubectl get pod, you get output which implies that a pod is ready if all of its containers are ready. However, this can be misleading in scenarios where other things impact pod readiness. For example, the AWS Load Balancer Controller supports these pod readiness gates: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.1/deploy/pod_readiness_gate/. If these are failing, the kubectl output will show all containers ready, but the ReplicaSet/Deployment will show the pod as not ready. This makes it look like it is Kubernetes itself that is in an inconsistent state.
If we could instead show:
Or find another way to concisely convey this information, that would help resolve this issue.
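The mismatch described above is visible in the pod's status object: the READY column counts ready containers, while the pod-level Ready condition also accounts for readiness gates, so the two can disagree. A sketch of both computations over the JSON from kubectl get pod -o json (the sample data is illustrative):

```python
def containers_ready(pod: dict) -> str:
    """The 'x/y' READY column: ready containers out of total containers."""
    statuses = pod.get("status", {}).get("containerStatuses", [])
    ready = sum(1 for s in statuses if s.get("ready"))
    return f"{ready}/{len(statuses)}"

def pod_ready(pod: dict) -> bool:
    """True only if the pod-level Ready condition is True.
    Unlike counting containers, this reflects failing readiness gates."""
    for cond in pod.get("status", {}).get("conditions", []):
        if cond["type"] == "Ready":
            return cond["status"] == "True"
    return False

# A pod whose single container is ready but whose readiness gate is
# failing, as in the scenario above (field values are hypothetical):
pod = {
    "status": {
        "containerStatuses": [{"name": "nginx", "ready": True}],
        "conditions": [
            {"type": "ContainersReady", "status": "True"},
            {"type": "Ready", "status": "False"},
        ],
    },
}
print(containers_ready(pod))  # prints "1/1"
print(pod_ready(pod))         # prints "False"
```

Here the default kubectl get pod view would show 1/1 even though the pod is not actually Ready, which is exactly the confusion this issue describes.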