kubernetes / kubectl

Issue tracker and mirror of kubectl code
Apache License 2.0

`kubectl port-forward` for Service should fail if no Pods are Ready #1416

Open geekofalltrades opened 1 year ago

geekofalltrades commented 1 year ago

What happened: I ran `kubectl port-forward` against a Service while none of the Pods backing the Service were Ready. kubectl appears to forward to whichever Pod selected by the Service sorts first alphabetically, regardless of readiness.

What you expected to happen: The port-forward should fail. When a Pod is not ready, a Service will not load balance to it: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/

When a Pod is not ready, it is removed from Service load balancers.

Failing the port-forward when no Pods backing the Service are Ready would match the Service's load-balancing behavior, and would be less surprising.

In my case, this just uncovered a bug: we use a port-forward to bootstrap a Deployment via its Service. Because `kubectl port-forward` succeeded, we assumed we had reached a Pod that was ready to receive requests; in fact we had a race condition in which we sometimes reached a Pod that was not yet Ready, which broke our bootstrapping script.
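The gap between the observed and expected behavior can be sketched as follows. This is only an illustration, not kubectl's actual source; the `pick_pod_*` helpers and the `name:readiness` argument encoding are invented for this example:

```shell
# Observed: pick the first Pod matching the Service's selector,
# alphabetically, ignoring readiness entirely.
pick_pod_observed() {
  printf '%s\n' "$@" | sort | head -n 1 | cut -d: -f1
}

# Proposed: consider only Ready Pods, and fail when there are none.
pick_pod_proposed() {
  for pod in "$@"; do
    case "$pod" in
      *:Ready) printf '%s\n' "${pod%%:*}"; return 0 ;;
    esac
  done
  echo "error: no Pods are Ready" >&2
  return 1
}

# With the reproduction below, all three replicas fail their readiness
# probe, so the observed selection still "succeeds":
pick_pod_observed nginx-b:NotReady nginx-a:NotReady nginx-c:NotReady   # prints nginx-a
pick_pod_proposed nginx-b:NotReady nginx-a:NotReady nginx-c:NotReady || true  # fails: none Ready
```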

How to reproduce it (as minimally and precisely as possible):

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        readinessProbe:
          exec:
            command:
            - /bin/false
          initialDelaySeconds: 5
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

$ kubectl get pod -l app=nginx
NAME                     READY   STATUS    RESTARTS   AGE
nginx-57b76f6dd8-868dq   0/1     Running   0          12s
nginx-57b76f6dd8-bjqbg   0/1     Running   0          12s
nginx-57b76f6dd8-kkmns   0/1     Running   0          12s

$ kubectl get service nginx
NAME    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   10.228.5.176   <none>        80/TCP    3m49s

$ kubectl get endpoints -l app=nginx
NAME    ENDPOINTS   AGE
nginx               4m33s

$ kubectl port-forward service/nginx 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
$ curl localhost:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
# Logs of the first Pod, alphabetically:
$ kubectl logs nginx-57b76f6dd8-868dq 
<startup logs...>
127.0.0.1 - - [19/Apr/2023:19:02:01 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.68.0" "-"
# Behavior from within the cluster differs (this is a shell in an alpine Pod in the same namespace):
/ # dig +short +search nginx
10.228.5.176
/ # curl nginx
curl: (7) Failed to connect to nginx port 80 after 1029 ms: Couldn't connect to server
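Until the behavior changes, one workaround is to refuse to port-forward while the Service's Endpoints object is empty, mirroring the in-cluster check above. A sketch, with the check factored into a hypothetical `require_endpoints` helper (the helper name is mine, not part of kubectl):

```shell
# Hypothetical guard: succeed only when the Service has at least one ready
# endpoint address; takes the jsonpath query output as its argument.
require_endpoints() {
  if [ -z "$1" ]; then
    echo "error: service has no ready endpoints" >&2
    return 1
  fi
}

# Usage against a live cluster (assumes kubectl and the nginx Service above;
# Endpoints 'addresses' lists only Ready Pods):
# eps="$(kubectl get endpoints nginx -o jsonpath='{.subsets[*].addresses[*].ip}')"
# require_endpoints "$eps" && kubectl port-forward service/nginx 8080:80
```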

Anything else we need to know?:

Environment:

eddiezane commented 1 year ago

/triage accepted
/priority backlog
/assign

k8s-triage-robot commented 6 months ago

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

k8s-triage-robot commented 3 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten