Closed: gmembre-zenika closed this issue 4 years ago
What is a "ControllerRevision" API? First time I'm hearing it.
Instead of `describe`, try doing `get -o=yaml`; you'll probably see that there's no `ownerReferences` set on the Pod pointing to the ControllerRevision object.
We use ownerReferences to come up with the tree.
So I suspect whatever workload API you're using is not setting those ownerReferences. Maybe it doesn't need to, or maybe DaemonSets have different requirements. (I've also never seen a DaemonSet owned by a "Service" object, which is weird. I suspect it's a k3s thing.)
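To check this quickly, the ownerReferences can be pulled out of `kubectl get ... -o json` output. A minimal sketch, assuming the object has already been fetched as JSON; the sample below is abbreviated from the Pod posted later in this thread:

```python
import json

# Abbreviated sample of `kubectl get pod ... -o json` output,
# taken from the Pod YAML posted in this thread.
pod_json = """
{
  "kind": "Pod",
  "metadata": {
    "name": "k3d-demo-deployment-7b78fd74d5-nlmwc",
    "ownerReferences": [
      {
        "apiVersion": "apps/v1",
        "controller": true,
        "kind": "ReplicaSet",
        "name": "k3d-demo-deployment-7b78fd74d5"
      }
    ]
  }
}
"""

def owner_refs(obj):
    """Return (kind, name) pairs for each ownerReference on obj."""
    refs = obj.get("metadata", {}).get("ownerReferences", [])
    return [(r["kind"], r["name"]) for r in refs]

pod = json.loads(pod_json)
print(owner_refs(pod))  # [('ReplicaSet', 'k3d-demo-deployment-7b78fd74d5')]
```

If this returns an empty list for your missing pods, that would explain why they don't show up in the tree.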
I don't know what a "ControllerRevision" is :'(
Here is the output:
```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-01-08T13:05:43Z"
  generateName: k3d-demo-deployment-7b78fd74d5-
  labels:
    app: k3d-demo
    app.kubernetes.io/managed-by: skaffold-v1.1.0
    pod-template-hash: 7b78fd74d5
    skaffold.dev/builder: local
    skaffold.dev/cleanup: "true"
    skaffold.dev/deployer: kubectl
    skaffold.dev/docker-api-version: "1.29"
    skaffold.dev/run-id: 9e3afc5c-d0af-47a9-b210-67199a7c9df4
    skaffold.dev/tag-policy: git-commit
    skaffold.dev/tail: "true"
  name: k3d-demo-deployment-7b78fd74d5-nlmwc
  namespace: k3d-demo
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: k3d-demo-deployment-7b78fd74d5
    uid: 19d07dd5-9510-46df-9bb5-40c497455347
  resourceVersion: "3403554"
  selfLink: /api/v1/namespaces/k3d-demo/pods/k3d-demo-deployment-7b78fd74d5-nlmwc
  uid: fba85ce7-8c53-4a03-95e2-9e6ba6f52224
spec:
  containers:
  - image: containous/whoami
    imagePullPolicy: Always
    name: k3d-demo
    ports:
    - containerPort: 80
      protocol: TCP
    resources:
      limits:
        cpu: 500m
        memory: 128Mi
      requests:
        cpu: 10m
        memory: 32Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-4bbnq
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: XXXXXXXXXXXXX
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-4bbnq
    secret:
      defaultMode: 420
      secretName: default-token-4bbnq
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-01-08T13:05:43Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-01-08T13:05:47Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-01-08T13:05:47Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-01-08T13:05:43Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://3b81a0f0d1e8e81726ef987b3983d414e8d7a892e49cd426decd0f26c592685a
    image: containous/whoami:latest
    imageID: docker-pullable://containous/whoami@sha256:c0d68a0f9acde95c5214bd057fd3ff1c871b2ef12dae2a9e2d2a3240fdd9214b
    lastState: {}
    name: k3d-demo
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2020-01-08T13:05:46Z"
  hostIP: YYYYYYYYYYY
  phase: Running
  podIP: XXXXXXXXXXXX
  podIPs:
  - ip: XXXXXXXXXXXXXX
  qosClass: Burstable
  startTime: "2020-01-08T13:05:43Z"
```
Can you please trace these recursively? Next, look at the ReplicaSet and see if it has owners.
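The recursive trace can be sketched as follows. The toy object store below is an assumption for illustration only (real tracing would query the Kubernetes API for each lookup); it mirrors the Pod → ReplicaSet → Deployment chain visible in the YAML above:

```python
# Toy in-memory "cluster": (kind, name) -> object metadata.
# Illustration only; a real trace would call the Kubernetes API per lookup.
objects = {
    ("Pod", "k3d-demo-deployment-7b78fd74d5-nlmwc"): {
        "ownerReferences": [{"kind": "ReplicaSet",
                             "name": "k3d-demo-deployment-7b78fd74d5"}],
    },
    ("ReplicaSet", "k3d-demo-deployment-7b78fd74d5"): {
        "ownerReferences": [{"kind": "Deployment",
                             "name": "k3d-demo-deployment"}],
    },
    ("Deployment", "k3d-demo-deployment"): {},  # top of the chain: no owners
}

def trace_owners(kind, name):
    """Follow ownerReferences upward, returning the chain of (kind, name)."""
    chain = [(kind, name)]
    for ref in objects.get((kind, name), {}).get("ownerReferences", []):
        chain += trace_owners(ref["kind"], ref["name"])
    return chain

print(trace_owners("Pod", "k3d-demo-deployment-7b78fd74d5-nlmwc"))
```

kubectl-tree builds the tree in the opposite direction (owners down to children), but it relies on the same ownerReferences links, so any object missing them will not appear.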
Closing since there has been no activity. If you can find time to get back to investigating, I'll reopen.
Hi,
thanks for your work :muscle:
I have a freshly installed k8s cluster (installed with k3d), and when launching kubectl-tree my pods are not listed, except the pause one.
Here are the running pods:
Here is an inspection of one of the missing pods:
I expect my pods to be listed under `ControllerRevision/svclb-k3d-demo-6b8cb54945`, but I may be wrong. What do you think?
Regards