jetstack / kube-oidc-proxy

Reverse proxy to authenticate to managed Kubernetes API servers via OIDC.
https://jetstack.io
Apache License 2.0

Create/Attach pod in single command: Timed out waiting for condition #191

Open brokencode64 opened 3 years ago

brokencode64 commented 3 years ago

I am running into a strange issue where, if I try to create a pod and attach to it in a single command, I get an error, yet I can attach to the container manually just fine.

This command: kubectl run debug-pod --image=registry/debug --restart=Never --namespace=default --image-pull-policy=Always -i --tty --attach --rm

Results in:

E0322 15:06:02.226007  158207 reflector.go:138] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.Pod: Get "https://kube-oidc-proxy.[domain]/api/v1/namespaces/default/pods?allowWatchBookmarks=true&fieldSelector=metadata.name%3Ddebug-pod&resourceVersion=308783598&timeout=9m32s&timeoutSeconds=572&watch=true": stream error: stream ID 5; INTERNAL_ERROR
pod "debug-pod" deleted
error: timed out waiting for the condition

However, if I do something like this, it works just fine: kubectl exec -it [pod-name] sh

I am not seeing any errors in kube-oidc-proxy that seem related, but I do sometimes get these messages:

2021-03-22 14:13:33.937677 I | httputil: ReverseProxy read error during body copy: unexpected EOF
E0322 14:13:35.414634       1 handlers.go:208] unknown error ([ip]:24968): dial tcp [ip]:443: connect: connection refused
Smana commented 3 years ago

I've got the exact same behavior: kubectl run -ti ... doesn't work at all. We have to use kubectl create ... instead.

hayesgm commented 1 year ago

From this thread: https://community.kodekloud.com/t/anyone-know-what-could-be-the-reason-for-this-related-to-lab2-q7-nslookup-co/20254 — adding --restart=Never fixed this issue for me.

wvxvw commented 1 year ago

Kubernetes 1.26. Adding --restart=Never has no effect. Also, it was already in the OP's listing.

jli commented 1 year ago

For me, the error appeared after kubectl run had been running for 1 minute. This was because I was pulling a very large image, which took multiple minutes. I used --pod-running-timeout=7m so that kubectl waits longer, and that fixed it for me. You can run kubectl describe pod <pod-name> to see how long it took from when the pod was scheduled to when the container started, which helps you pick a suitable timeout value.
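A sketch of this workaround, combining the OP's original command with the longer timeout. The image name comes from the original report; the 7m value is illustrative and should be tuned to your actual pull time (kubectl's default --pod-running-timeout is 1m0s):

```shell
# Create, attach to, and clean up a debug pod in one command, but give
# kubectl up to 7 minutes (instead of the default 1m) for the pod to
# reach Running before it gives up with "timed out waiting for the condition".
kubectl run debug-pod \
  --image=registry/debug \
  --restart=Never \
  --namespace=default \
  --image-pull-policy=Always \
  --pod-running-timeout=7m \
  -i --tty --attach --rm

# While the pod is still up (note --rm deletes it on exit), the event
# timestamps here show how long the image pull actually took:
kubectl describe pod debug-pod --namespace=default
```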