Open · geekofalltrades opened this issue 1 year ago
/triage accepted
/priority backlog
/assign
This issue has not been updated in over 1 year, and should be re-triaged.

You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
What happened:
I was able to `kubectl port-forward` to a Service when no Pods in the Service were Ready. `kubectl` appears to port-forward to the first Pod selected by the Service when the Pods are sorted alphabetically.

What you expected to happen:
The port-forward should fail. When a Pod is not Ready, a Service will not load balance to it: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
Failing the port-forward to a Service in which no Pods are Ready would be consistent with the Service's load-balancing behavior, and would be less surprising.
In my case, I just uncovered a bug in which we use a port-forward to bootstrap a Deployment via its Service. Because `kubectl port-forward` was succeeding, we assumed we had reached a Pod that was ready to receive requests; in fact we had a race condition where we sometimes reached a Pod that was not yet Ready, which broke our bootstrapping script.

How to reproduce it (as minimally and precisely as possible):
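A minimal sketch of a reproduction (the never-ready name, the nginx image, and the probe path are illustrative assumptions, not our actual manifests):

```shell
# Deployment whose readiness probe never succeeds, plus a Service selecting it.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: never-ready
spec:
  replicas: 1
  selector:
    matchLabels:
      app: never-ready
  template:
    metadata:
      labels:
        app: never-ready
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /does-not-exist   # nginx returns 404, so the Pod never becomes Ready
            port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: never-ready
spec:
  selector:
    app: never-ready
  ports:
  - port: 80
    targetPort: 80
EOF

# The Service has no ready endpoints, yet the port-forward is established
# and the request reaches the not-Ready Pod instead of failing.
kubectl port-forward svc/never-ready 8080:80
curl http://localhost:8080/
```

In a sketch like this, `kubectl get endpoints never-ready` should show no ready addresses, yet the curl above still gets a response from the not-Ready Pod.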
Anything else we need to know?:
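For what it's worth, the workaround we are looking at is to wait explicitly for a Ready Pod before port-forwarding, rather than relying on the port-forward itself to fail. A sketch, assuming the target Pods carry an app=never-ready label as in the reproduction above:

```shell
# Block until at least one Pod backing the Service reports Ready (or time out),
# then port-forward; this avoids racing against not-yet-Ready Pods.
kubectl wait --for=condition=ready pod -l app=never-ready --timeout=120s \
  && kubectl port-forward svc/never-ready 8080:80
```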
Environment:
- Kubernetes version (use `kubectl version`):
- OS (e.g. `cat /etc/os-release`): Ubuntu 20.04.6 LTS (Focal Fossa)