ahmetb / kubectl-tree

kubectl plugin to browse Kubernetes object hierarchies as a tree 🎄 (star the repo if you are using it)
Apache License 2.0

Pods missing in tree #19

Closed: gmembre-zenika closed this issue 4 years ago

gmembre-zenika commented 4 years ago

Hi,

thanks for your work :muscle:

I have a freshly installed k8s cluster (created with k3d), and when launching kubectl-tree, my pods are not listed except the pause one:

$ kubectl tree service k3d-demo
NAMESPACE  NAME                                              READY  REASON  AGE
k3d-demo   Service/k3d-demo                                  -              124m
k3d-demo   └─DaemonSet/svclb-k3d-demo                        -              124m
k3d-demo     ├─ControllerRevision/svclb-k3d-demo-6b8cb54945  -              124m
k3d-demo     └─Pod/svclb-k3d-demo-4bggv                      True           124m

Here are the running pods:

$ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
svclb-k3d-demo-4bggv                   1/1     Running   0          128m
k3d-demo-deployment-7b78fd74d5-nlmwc   1/1     Running   0          128m
k3d-demo-deployment-7b78fd74d5-p7dmk   1/1     Running   0          128m
k3d-demo-deployment-7b78fd74d5-9rtbd   1/1     Running   0          128m

Here is a describe of one of the missing pods:

$ kubectl describe pods/k3d-demo-deployment-7b78fd74d5-nlmwc
Name:         k3d-demo-deployment-7b78fd74d5-nlmwc
Namespace:    k3d-demo
Priority:     0
Node:         YYYYY/ZZZZZZZZZZZ
Start Time:   Wed, 08 Jan 2020 14:05:43 +0100
Labels:       app=k3d-demo
              app.kubernetes.io/managed-by=skaffold-v1.1.0
              pod-template-hash=7b78fd74d5
              skaffold.dev/builder=local
              skaffold.dev/cleanup=true
              skaffold.dev/deployer=kubectl
              skaffold.dev/docker-api-version=1.29
              skaffold.dev/run-id=9e3afc5c-d0af-47a9-b210-67199a7c9df4
              skaffold.dev/tag-policy=git-commit
              skaffold.dev/tail=true
Annotations:  <none>
Status:       Running
IP:           XX.XX.XX.XX
IPs:
  IP:           XX.XX.XX.XX
Controlled By:  ReplicaSet/k3d-demo-deployment-7b78fd74d5
Containers:
  k3d-demo:
    Container ID:   docker://3b81a0f0d1e8e81726ef987b3983d414e8d7a892e49cd426decd0f26c592685a
    Image:          containous/whoami
    Image ID:       docker-pullable://containous/whoami@sha256:c0d68a0f9acde95c5214bd057fd3ff1c871b2ef12dae2a9e2d2a3240fdd9214b
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 08 Jan 2020 14:05:46 +0100
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  128Mi
    Requests:
      cpu:        10m
      memory:     32Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4bbnq (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-4bbnq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4bbnq
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

I expected my pods to be listed under ControllerRevision/svclb-k3d-demo-6b8cb54945, but I may be wrong. What do you think?

Regards

ahmetb commented 4 years ago

What is a "ControllerRevision" API? First time I'm hearing it.

Instead of describe, try get -o=yaml; you'll probably see that there are no ownerReferences set on the Pod pointing to the ControllerRevision object.

We use ownerReferences to come up with the tree.

So I suspect whatever workload API you're using is not setting those ownerReferences. Maybe it doesn't need to, or maybe DaemonSets have different requirements. (I've also never seen a DaemonSet owned by a "Service" object, which is weird; I suspect it's a k3s thing.)
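
For reference, a quick way to check this directly (a sketch using the pod name from the listing above; the jsonpath output formatting varies across kubectl versions):

$ kubectl get pod k3d-demo-deployment-7b78fd74d5-nlmwc \
    -o jsonpath='{.metadata.ownerReferences}'

If that prints nothing, no owner is set on the Pod, and it cannot appear in the tree.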

gmembre-zenika commented 4 years ago

I don't know what a "ControllerRevision" is :'(

Here is the output:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-01-08T13:05:43Z"
  generateName: k3d-demo-deployment-7b78fd74d5-
  labels:
    app: k3d-demo
    app.kubernetes.io/managed-by: skaffold-v1.1.0
    pod-template-hash: 7b78fd74d5
    skaffold.dev/builder: local
    skaffold.dev/cleanup: "true"
    skaffold.dev/deployer: kubectl
    skaffold.dev/docker-api-version: "1.29"
    skaffold.dev/run-id: 9e3afc5c-d0af-47a9-b210-67199a7c9df4
    skaffold.dev/tag-policy: git-commit
    skaffold.dev/tail: "true"
  name: k3d-demo-deployment-7b78fd74d5-nlmwc
  namespace: k3d-demo
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: k3d-demo-deployment-7b78fd74d5
    uid: 19d07dd5-9510-46df-9bb5-40c497455347
  resourceVersion: "3403554"
  selfLink: /api/v1/namespaces/k3d-demo/pods/k3d-demo-deployment-7b78fd74d5-nlmwc
  uid: fba85ce7-8c53-4a03-95e2-9e6ba6f52224
spec:
  containers:
  - image: containous/whoami
    imagePullPolicy: Always
    name: k3d-demo
    ports:
    - containerPort: 80
      protocol: TCP
    resources:
      limits:
        cpu: 500m
        memory: 128Mi
      requests:
        cpu: 10m
        memory: 32Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-4bbnq
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: XXXXXXXXXXXXX
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-4bbnq
    secret:
      defaultMode: 420
      secretName: default-token-4bbnq
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-01-08T13:05:43Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-01-08T13:05:47Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-01-08T13:05:47Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-01-08T13:05:43Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://3b81a0f0d1e8e81726ef987b3983d414e8d7a892e49cd426decd0f26c592685a
    image: containous/whoami:latest
    imageID: docker-pullable://containous/whoami@sha256:c0d68a0f9acde95c5214bd057fd3ff1c871b2ef12dae2a9e2d2a3240fdd9214b
    lastState: {}
    name: k3d-demo
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2020-01-08T13:05:46Z"
  hostIP: YYYYYYYYYYY
  phase: Running
  podIP: XXXXXXXXXXXX
  podIPs:
  - ip: XXXXXXXXXXXXXX
  qosClass: Burstable
  startTime: "2020-01-08T13:05:43Z"

ahmetb commented 4 years ago

Can you please trace them recursively? Next, look at the ReplicaSet and see if it has owners.
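
Something like this would walk the chain one level at a time (a sketch; the first kind/name pair comes from the YAML above, while the Deployment at the top of the chain is an assumption based on the naming):

$ kubectl get pod k3d-demo-deployment-7b78fd74d5-nlmwc \
    -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
ReplicaSet/k3d-demo-deployment-7b78fd74d5
$ kubectl get replicaset k3d-demo-deployment-7b78fd74d5 \
    -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
Deployment/k3d-demo-deployment    # presumably, if the ownership chain is standard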

ahmetb commented 4 years ago

Closing due to inactivity. If you can find time to get back to investigating, I'll reopen.