aylei / kubectl-debug

This repository is no longer maintained; please check out https://github.com/JamesTGrant/kubectl-debug.

Plugin stuck creating exec stream #129

Open hanlins opened 4 years ago

hanlins commented 4 years ago

Using the latest plugin against k8s v1.19.1 and v1.18.6, the command gets stuck at exec stream creation:

➜  ~ kubectl debug test -v 9
2020/10/21 00:37:32 Getting user name from default context 'test-4-admin@test-4'
2020/10/21 00:37:32 User name 'test-4-admin' received from context
kubectl-debug 2020/10/21 00:37:32 cmd.go:771: Getting id from containerStatus {Name:test State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2020-10-20 23:51:04 -0700 PDT,} Terminated:nil} LastTerminationState:
xxx
POST http://10.186.59.166:10027/api/v1/debug?authStr=&command=%5B%22bash%22%5D&container=containerd%3A%2F%2F4586b4d2e030426b0a3e8a753aac2cdae4f2dd8f68812bb69653c7b0ce81c6b5&hostname=hanlins-a01.vmware.com&image=docker.io%2Fnicolaka%2Fnetshoot%3Alatest&lxcfsEnabled=false&registrySkipTLS=false&username=test-4-admin&verbosity=9
kubectl-debug 2020/10/21 00:37:32 cmd.go:811: Creating exec Stream
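
For context, "Creating exec Stream" is the point where the client upgrades the POST above to a bidirectional streaming connection against the agent. A minimal sketch of that step, assuming the plugin follows the usual client-go remotecommand pattern (the function, the empty rest.Config, and the hard-coded URL here are illustrative, not the plugin's actual code):

package main

import (
	"log"
	"net/url"
	"os"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// createExecStream opens the SPDY exec stream to the agent's
// /api/v1/debug endpoint; this corresponds to the last client log line.
func createExecStream(cfg *rest.Config, agentURL *url.URL) error {
	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", agentURL)
	if err != nil {
		return err
	}
	// Stream blocks until the remote side closes the connection; if the
	// stream negotiation after the HTTP upgrade never completes, the
	// client appears stuck here while the agent times out.
	return exec.Stream(remotecommand.StreamOptions{
		Stdin:  os.Stdin,
		Stdout: os.Stdout,
		Stderr: os.Stderr,
		Tty:    true,
	})
}

func main() {
	cfg := &rest.Config{} // placeholder; real code carries auth/TLS settings
	u, _ := url.Parse("http://10.186.59.166:10027/api/v1/debug")
	if err := createExecStream(cfg, u); err != nil {
		log.Fatal(err)
	}
}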

On the agent side, the request times out:

➜  kubectl-debug (master) ✔ kubectl logs debug-agent-w4bnh -f

2020/10/21 07:37:32 receive debug request
2020/10/21 07:37:32 http: superfluous response.WriteHeader call from k8s.io/apimachinery/pkg/util/httpstream/spdy.responseUpgrader.UpgradeResponse (upgrade.go:91)
E1021 07:37:47.473244   20211 httpstream.go:184] timed out waiting for client to create streams
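
Both agent-side messages point at apimachinery's SPDY response upgrader: the agent upgrades the debug request and then waits for the client to open its streams, giving up after a timeout. A rough sketch of that server-side pattern, assuming the agent uses spdy.NewResponseUpgrader as the log suggests (the handler body, port, and 15-second timeout are illustrative):

package main

import (
	"log"
	"net/http"
	"time"

	"k8s.io/apimachinery/pkg/util/httpstream"
	"k8s.io/apimachinery/pkg/util/httpstream/spdy"
)

// debugHandler upgrades the request to SPDY and waits for the client to
// open its streams, mirroring the sequence visible in the agent log.
func debugHandler(w http.ResponseWriter, req *http.Request) {
	streams := make(chan httpstream.Stream, 4)
	upgrader := spdy.NewResponseUpgrader()
	// UpgradeResponse hijacks the connection; the "superfluous
	// response.WriteHeader" warning in the log originates here if a
	// status code was already written for the same response.
	conn := upgrader.UpgradeResponse(w, req, func(s httpstream.Stream, replySent <-chan struct{}) error {
		streams <- s
		return nil
	})
	if conn == nil {
		return // upgrade failed; UpgradeResponse already reported the error
	}
	defer conn.Close()

	// If the client never opens its stdin/stdout/stderr streams after the
	// upgrade, this wait expires, matching the agent's
	// "timed out waiting for client to create streams" error.
	select {
	case <-streams:
		log.Println("client stream received")
	case <-time.After(15 * time.Second):
		log.Println("timed out waiting for client to create streams")
	}
}

func main() {
	http.HandleFunc("/api/v1/debug", debugHandler)
	log.Fatal(http.ListenAndServe(":10027", nil))
}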

From the kubelet log, we can see the target pod (test) and the agent pod being created, but there is no reply:

Oct 21 06:50:43 test-4-md-0-74fb968bd8-cfqr9 kubelet[752]: I1021 06:50:43.657138     752 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-gnpv8" (UniqueName: "kubernetes.io/secret/7b7ffa74-9b85-4fa6-bbe8-b6eb3cf3ea93-default-token-gnpv8") pod "test" (UID: "7b7ffa74-9b85-4fa6-bbe8-b6eb3cf3ea93")
Oct 21 06:55:19 test-4-md-0-74fb968bd8-cfqr9 kubelet[752]: I1021 06:55:19.715182     752 topology_manager.go:233] [topologymanager] Topology Admit Handler
Oct 21 06:55:19 test-4-md-0-74fb968bd8-cfqr9 kubelet[752]: I1021 06:55:19.855381     752 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lxcfs" (UniqueName: "kubernetes.io/host-path/5f58ecd2-adac-4c8d-8f9a-7d1896ec5cb8-lxcfs") pod "debug-agent-w4bnh" (UID: "5f58ecd2-adac-4c8d-8f9a-7d1896ec5cb8")
Oct 21 06:55:19 test-4-md-0-74fb968bd8-cfqr9 kubelet[752]: I1021 06:55:19.855872     752 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "runrunc" (UniqueName: "kubernetes.io/host-path/5f58ecd2-adac-4c8d-8f9a-7d1896ec5cb8-runrunc") pod "debug-agent-w4bnh" (UID: "5f58ecd2-adac-4c8d-8f9a-7d1896ec5cb8")
Oct 21 06:55:19 test-4-md-0-74fb968bd8-cfqr9 kubelet[752]: I1021 06:55:19.855971     752 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-gnpv8" (UniqueName: "kubernetes.io/secret/5f58ecd2-adac-4c8d-8f9a-7d1896ec5cb8-default-token-gnpv8") pod "debug-agent-w4bnh" (UID: "5f58ecd2-adac-4c8d-8f9a-7d1896ec5cb8")
Oct 21 06:55:19 test-4-md-0-74fb968bd8-cfqr9 kubelet[752]: I1021 06:55:19.856100     752 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "docker" (UniqueName: "kubernetes.io/host-path/5f58ecd2-adac-4c8d-8f9a-7d1896ec5cb8-docker") pod "debug-agent-w4bnh" (UID: "5f58ecd2-adac-4c8d-8f9a-7d1896ec5cb8")
Oct 21 06:55:19 test-4-md-0-74fb968bd8-cfqr9 kubelet[752]: I1021 06:55:19.856283     752 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "runcontainerd" (UniqueName: "kubernetes.io/host-path/5f58ecd2-adac-4c8d-8f9a-7d1896ec5cb8-runcontainerd") pod "debug-agent-w4bnh" (UID: "5f58ecd2-adac-4c8d-8f9a-7d1896ec5cb8")
Oct 21 06:55:19 test-4-md-0-74fb968bd8-cfqr9 kubelet[752]: I1021 06:55:19.856312     752 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cgroup" (UniqueName: "kubernetes.io/host-path/5f58ecd2-adac-4c8d-8f9a-7d1896ec5cb8-cgroup") pod "debug-agent-w4bnh" (UID: "5f58ecd2-adac-4c8d-8f9a-7d1896ec5cb8")
Oct 21 06:55:19 test-4-md-0-74fb968bd8-cfqr9 kubelet[752]: I1021 06:55:19.856594     752 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "vardata" (UniqueName: "kubernetes.io/host-path/5f58ecd2-adac-4c8d-8f9a-7d1896ec5cb8-vardata") pod "debug-agent-w4bnh" (UID: "5f58ecd2-adac-4c8d-8f9a-7d1896ec5cb8")
Oct 21 07:43:10 test-4-md-0-74fb968bd8-cfqr9 kubelet[752]: I1021 07:43:10.888639     752 log.go:181] http: superfluous response.WriteHeader call from k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader (httplog.go:217)

It seems to be related to the kubelet attaching to the container; not sure if anyone else has experienced the same issue.