kubernetes-sigs / aws-efs-csi-driver

CSI Driver for Amazon EFS https://aws.amazon.com/efs/
Apache License 2.0

Unable to access persistentVolumeClaim (aws-efs storageclass) in pods after the restart of aws-efs-csi-driver. #973

Closed — adeelahmadch closed this issue 7 months ago

adeelahmadch commented 1 year ago

/kind bug

What happened? Pods that use an aws-efs-based persistentVolumeClaim lose access to the mount path whenever the AWS EFS CSI driver is restarted or re-deployed, in particular the efs-csi-node component. To recover, you have to restart every pod that uses aws-efs after each restart of the driver.

What you expected to happen? Once an aws-efs-based PVC is mounted successfully in a pod, a restart of the AWS EFS CSI driver should not impact already-running pods: the volume should remain accessible regardless of driver restarts.
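One way to confirm the symptom from inside an affected pod (a hedged sketch, not part of the driver): a broken EFS mount usually hangs I/O rather than failing fast, so probing it with a timeout distinguishes a healthy mount from a stale one. The function name and timeout below are illustrative:

```python
import multiprocessing
import os

def _touch(path):
    # Runs in a child process so a hung NFS mount can't block the caller.
    os.listdir(path)

def mount_is_responsive(path, timeout=5):
    """Return True if listing `path` completes within `timeout` seconds."""
    proc = multiprocessing.Process(target=_touch, args=(path,))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        # Typical stale-EFS-mount behavior: the listdir hangs forever.
        proc.kill()
        proc.join()
        return False
    return proc.exitcode == 0
```

Running this against the PVC mount path before and after an efs-csi-node restart should show the transition from responsive to hung.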

How to reproduce it (as minimally and precisely as possible)?

Anything else we need to know?:

Environment

Please also attach debug logs to help us better diagnose

Log file Path: /var/log/amazon/efs/mount-watchdog.log

2023-03-23 14:24:51 UTC - INFO - amazon-efs-mount-watchdog, version 1.34.5, is enabled and started
2023-03-23 14:24:51 UTC - WARNING - TLS tunnel for fs-0e530xx5.var.lib.kubelet.pods.9ae4ee95-17ab-446b-94c5-7c951e4400b3.volumes.kubernetes.io~csi.pvc-6dc4bd9e-9b27-4009-b3de-ce0a87e2e64f.mount.20381 is not running
2023-03-23 14:24:51 UTC - INFO - Starting TLS tunnel: "/usr/bin/stunnel5 /var/run/efs/stunnel-config.fs-0e530xx5.var.lib.kubelet.pods.9ae4ee95-17ab-446b-94c5-7c951e4400b3.volumes.kubernetes.io~csi.pvc-6dc4bd9e-9b27-4009-b3de-ce0a87e2e64f.mount.20381"
2023-03-23 14:24:51 UTC - INFO - Started TLS tunnel, pid: 20
2023-03-23 14:24:52 UTC - WARNING - TLS tunnel for fs-0e530xx5.var.lib.kubelet.pods.9ae4ee95-17ab-446b-94c5-7c951e4400b3.volumes.kubernetes.io~csi.pvc-6dc4bd9e-9b27-4009-b3de-ce0a87e2e64f.mount.20381 is not running
2023-03-23 14:24:52 UTC - INFO - Starting TLS tunnel: "/usr/bin/stunnel5 /var/run/efs/stunnel-config.fs-0e530xx5.var.lib.kubelet.pods.9ae4ee95-17ab-446b-94c5-7c951e4400b3.volumes.kubernetes.io~csi.pvc-6dc4bd9e-9b27-4009-b3de-ce0a87e2e64f.mount.20381"
2023-03-23 14:24:52 UTC - INFO - Started TLS tunnel, pid: 23
2023-03-23 14:24:52 UTC - WARNING - Child TLS tunnel process 20 has exited, returncode=1
2023-03-23 14:24:53 UTC - WARNING - TLS tunnel for fs-0e530xx5.var.lib.kubelet.pods.9ae4ee95-17ab-446b-94c5-7c951e4400b3.volumes.kubernetes.io~csi.pvc-6dc4bd9e-9b27-4009-b3de-ce0a87e2e64f.mount.20381 is not running
2023-03-23 14:24:53 UTC - INFO - Starting TLS tunnel: "/usr/bin/stunnel5 /var/run/efs/stunnel-config.fs-0e530xx5.var.lib.kubelet.pods.9ae4ee95-17ab-446b-94c5-7c951e4400b3.volumes.kubernetes.io~csi.pvc-6dc4bd9e-9b27-4009-b3de-ce0a87e2e64f.mount.20381"
2023-03-23 14:24:53 UTC - INFO - Started TLS tunnel, pid: 26
2023-03-23 14:24:53 UTC - WARNING - Child TLS tunnel process 23 has exited, returncode=1
2023-03-23 14:24:54 UTC - WARNING - TLS tunnel for fs-0e530xx5.var.lib.kubelet.pods.9ae4ee95-17ab-446b-94c5-7c951e4400b3.volumes.kubernetes.io~csi.pvc-6dc4bd9e-9b27-4009-b3de-ce0a87e2e64f.mount.20381 is not running
2023-03-23 14:24:54 UTC - INFO - Starting TLS tunnel: "/usr/bin/stunnel5 /var/run/efs/stunnel-config.fs-0e530xx5.var.lib.kubelet.pods.9ae4ee95-17ab-446b-94c5-7c951e4400b3.volumes.kubernetes.io~csi.pvc-6dc4bd9e-9b27-4009-b3de-ce0a87e2e64f.mount.20381"
2023-03-23 14:24:54 UTC - INFO - Started TLS tunnel, pid: 29
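In the excerpt above the watchdog starts a fresh stunnel roughly once per second and each child exits with returncode=1 almost immediately. A small helper like the following (illustrative, not part of efs-utils) summarizes that churn from a captured mount-watchdog.log:

```python
import re

# Match the two event lines the watchdog emits for tunnel lifecycle.
START_RE = re.compile(r"Started TLS tunnel, pid: (\d+)")
EXIT_RE = re.compile(r"Child TLS tunnel process (\d+) has exited, returncode=(-?\d+)")

def tunnel_churn(log_text):
    """Return (list of started pids, {pid: returncode} for exited tunnels)."""
    started = [int(m.group(1)) for m in START_RE.finditer(log_text)]
    exited = {int(m.group(1)): int(m.group(2)) for m in EXIT_RE.finditer(log_text)}
    return started, exited
```

Many starts with matching immediate non-zero exits, as in this log, indicate the tunnel can never establish, rather than a one-off crash.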
Ashley-wenyizha commented 1 year ago

hi @adeelahmadch

Thanks for reaching out. Could you attach more logs following these instructions? https://github.com/kubernetes-sigs/aws-efs-csi-driver/tree/master/troubleshooting

Thanks!

adeelahmadch commented 1 year ago

Hi @Ashley-wenyizha,

Thank you for looking into it. Since I am running the efs-csi-driver in a different namespace in my k8s cluster, I captured the output directly after enabling debug logs for the efs-csi-driver. I hope I captured all the output that is required.

Note: some Docker images point to my local repository, but they are built from amazon/aws-efs-csi-driver:v1.5.4.

1. kubectl sudo describe pod efs-csi-node-zd4ws -n platform

Name:                 efs-csi-node-zd4ws
Namespace:            platform
Priority:             1000000000
Priority Class Name:  core-platform-components
Service Account:      efs-csi-node-sa
Node:                 ip-10-148-18-78.eu-west-1.compute.internal/10.148.18.78
Start Time:           Fri, 28 Apr 2023 13:11:58 +0200
Labels:               app=efs-csi-node
                      app.kubernetes.io/instance=kustomize
                      app.kubernetes.io/name=aws-efs-csi-driver
                      controller-revision-hash=6fcbf866f8
                      owner=*********
                      service.name=aws-efs-csi-driver
                      pod-template-generation=42
Annotations:          cni.projectcalico.org/containerID: 354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d
                      cni.projectcalico.org/podIP: 100.117.213.205/32
                      cni.projectcalico.org/podIPs: 100.117.213.205/32
                      iam.amazonaws.com/role: sandbox-aws-efs-csi-driver
                      kubectl.kubernetes.io/restartedAt: 2023-03-24T08:26:51+01:00
                      managed.by: kustomize
Status:               Running
IP:                   100.117.213.205
IPs:
  IP:           100.117.213.205
Controlled By:  DaemonSet/efs-csi-node
Containers:
  efs-plugin:
    Container ID:  containerd://8ba54c690e213d4f86ee2b2d1d65aa0bf23aa7fbfc32032887f3418ebc03e07b
    Image:         docker.example.com/aws-efs-csi-driver:bbd459c49a876eac2d79a7ae8f8407fa7e12b101
    Image ID:      docker.example.com/aws-efs-csi-driver@sha256:7eb7f0373b3090a1a25e61f2f75d268b6e5281f34444902951276c4f880948da
    Port:          9809/TCP
    Host Port:     0/TCP
    Args:
      --endpoint=$(CSI_ENDPOINT)
      --logtostderr
      --v=5
      --vol-metrics-opt-in=false
      --vol-metrics-refresh-period=240
      --vol-metrics-fs-rate-limit=5
    State:          Running
      Started:      Fri, 28 Apr 2023 13:11:58 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     10m
      memory:  128Mi
    Requests:
      cpu:     10m
      memory:  50Mi
    Liveness:  http-get http://:healthz/healthz delay=10s timeout=3s period=2s #success=1 #failure=5
    Environment:
      CSI_ENDPOINT:  unix:/csi/csi.sock
    Mounts:
      /csi from plugin-dir (rw)
      /etc/amazon/efs-legacy from efs-utils-config-legacy (rw)
      /var/amazon/efs from efs-utils-config (rw)
      /var/lib/kubelet from kubelet-dir (rw)
      /var/run/efs from efs-state-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tfx77 (ro)
  csi-driver-registrar:
    Container ID:  containerd://d116454b06933736e9fe8719f1a984821a101ac0740a5dc25a85ea4ad5f1c7a3
    Image:         k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.6.2
    Image ID:      k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a13bff2ed69af0cf4270f0cf47bdedf75a56c095cd95b91195ae6c713a9b1845
    Port:          <none>
    Host Port:     <none>
    Args:
      --csi-address=$(ADDRESS)
      --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
      --v=5
    State:          Running
      Started:      Fri, 28 Apr 2023 13:11:58 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     10m
      memory:  50Mi
    Requests:
      cpu:     10m
      memory:  50Mi
    Environment:
      ADDRESS:               /csi/csi.sock
      DRIVER_REG_SOCK_PATH:  /var/lib/kubelet/plugins/efs.csi.aws.com/csi.sock
      KUBE_NODE_NAME:         (v1:spec.nodeName)
    Mounts:
      /csi from plugin-dir (rw)
      /registration from registration-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tfx77 (ro)
  liveness-probe:
    Container ID:  containerd://30783a8318f7abfdf89fc6056207b9c32ab92dd92677193dc22504a6aa48e485
    Image:         k8s.gcr.io/sig-storage/livenessprobe:v2.8.0
    Image ID:      k8s.gcr.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0
    Port:          <none>
    Host Port:     <none>
    Args:
      --csi-address=/csi/csi.sock
      --health-port=9809
      --v=5
    State:          Running
      Started:      Fri, 28 Apr 2023 13:11:59 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     10m
      memory:  50Mi
    Requests:
      cpu:        10m
      memory:     50Mi
    Environment:  <none>
    Mounts:
      /csi from plugin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tfx77 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kubelet-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet
    HostPathType:  Directory
  plugin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/efs.csi.aws.com/
    HostPathType:  DirectoryOrCreate
  registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry/
    HostPathType:  Directory
  efs-state-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/efs
    HostPathType:  DirectoryOrCreate
  efs-utils-config:
    Type:          HostPath (bare host directory volume)
    Path:          /var/amazon/efs
    HostPathType:  DirectoryOrCreate
  efs-utils-config-legacy:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/amazon/efs
    HostPathType:  DirectoryOrCreate
  kube-api-access-tfx77:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/role=node
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  4m45s  default-scheduler  Successfully assigned platform/efs-csi-node-zd4ws to ip-10-148-18-78.eu-west-1.compute.internal
  Normal  Pulling    4m45s  kubelet            Pulling image "docker.example.com/aws-efs-csi-driver:bbd459c49a876eac2d79a7ae8f8407fa7e12b101"
  Normal  Pulled     4m45s  kubelet            Successfully pulled image "docker.example.com/aws-efs-csi-driver:bbd459c49a876eac2d79a7ae8f8407fa7e12b101" in 106.691875ms
  Normal  Created    4m45s  kubelet            Created container efs-plugin
  Normal  Started    4m45s  kubelet            Started container efs-plugin
  Normal  Pulled     4m45s  kubelet            Container image "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.6.2" already present on machine
  Normal  Created    4m45s  kubelet            Created container csi-driver-registrar
  Normal  Started    4m44s  kubelet            Started container csi-driver-registrar
  Normal  Pulled     4m44s  kubelet            Container image "k8s.gcr.io/sig-storage/livenessprobe:v2.8.0" already present on machine
  Normal  Created    4m44s  kubelet            Created container liveness-probe
  Normal  Started    4m44s  kubelet            Started container liveness-probe

2. kubectl sudo get pod efs-csi-node-zd4ws -o yaml -n platform

apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/containerID: 354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d
    cni.projectcalico.org/podIP: 100.117.213.205/32
    cni.projectcalico.org/podIPs: 100.117.213.205/32
    iam.amazonaws.com/role: sandbox-aws-efs-csi-driver
    kubectl.kubernetes.io/restartedAt: "2023-03-24T08:26:51+01:00"
    managed.by: kustomize
  creationTimestamp: "2023-04-28T11:11:57Z"
  generateName: efs-csi-node-
  labels:
    app: efs-csi-node
    app.kubernetes.io/instance: kustomize
    app.kubernetes.io/name: aws-efs-csi-driver
    controller-revision-hash: 6fcbf866f8
    owner: sre-infrastructure
    service.name: aws-efs-csi-driver
    pod-template-generation: "42"
  name: efs-csi-node-zd4ws
  namespace: platform
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: DaemonSet
    name: efs-csi-node
    uid: e3cc35ec-c451-4b57-910b-ef029634bd3d
  resourceVersion: "99867911"
  uid: daa12c33-9575-44e2-b4ba-28479fa3f23f
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchFields:
          - key: metadata.name
            operator: In
            values:
            - ip-10-148-18-78.eu-west-1.compute.internal
  containers:
  - args:
    - --endpoint=$(CSI_ENDPOINT)
    - --logtostderr
    - --v=5
    - --vol-metrics-opt-in=false
    - --vol-metrics-refresh-period=240
    - --vol-metrics-fs-rate-limit=5
    env:
    - name: CSI_ENDPOINT
      value: unix:/csi/csi.sock
    image: docker.example.com/aws-efs-csi-driver:bbd459c49a876eac2d79a7ae8f8407fa7e12b101
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 5
      httpGet:
        path: /healthz
        port: healthz
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 2
      successThreshold: 1
      timeoutSeconds: 3
    name: efs-plugin
    ports:
    - containerPort: 9809
      name: healthz
      protocol: TCP
    resources:
      limits:
        cpu: 10m
        memory: 128Mi
      requests:
        cpu: 10m
        memory: 50Mi
    securityContext:
      privileged: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/lib/kubelet
      mountPropagation: Bidirectional
      name: kubelet-dir
    - mountPath: /csi
      name: plugin-dir
    - mountPath: /var/run/efs
      name: efs-state-dir
    - mountPath: /var/amazon/efs
      name: efs-utils-config
    - mountPath: /etc/amazon/efs-legacy
      name: efs-utils-config-legacy
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-tfx77
      readOnly: true
  - args:
    - --csi-address=$(ADDRESS)
    - --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
    - --v=5
    env:
    - name: ADDRESS
      value: /csi/csi.sock
    - name: DRIVER_REG_SOCK_PATH
      value: /var/lib/kubelet/plugins/efs.csi.aws.com/csi.sock
    - name: KUBE_NODE_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName
    image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.6.2
    imagePullPolicy: IfNotPresent
    name: csi-driver-registrar
    resources:
      limits:
        cpu: 10m
        memory: 50Mi
      requests:
        cpu: 10m
        memory: 50Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /csi
      name: plugin-dir
    - mountPath: /registration
      name: registration-dir
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-tfx77
      readOnly: true
  - args:
    - --csi-address=/csi/csi.sock
    - --health-port=9809
    - --v=5
    image: k8s.gcr.io/sig-storage/livenessprobe:v2.8.0
    imagePullPolicy: IfNotPresent
    name: liveness-probe
    resources:
      limits:
        cpu: 10m
        memory: 50Mi
      requests:
        cpu: 10m
        memory: 50Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /csi
      name: plugin-dir
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-tfx77
      readOnly: true
  dnsConfig:
    options:
    - name: ndots
      value: "1"
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: ip-10-148-18-78.eu-west-1.compute.internal
  nodeSelector:
    kubernetes.io/role: node
  preemptionPolicy: PreemptLowerPriority
  priority: 1000000000
  priorityClassName: core-platform-components
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: efs-csi-node-sa
  serviceAccountName: efs-csi-node-sa
  terminationGracePeriodSeconds: 30
  tolerations:
  - operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/disk-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/pid-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/unschedulable
    operator: Exists
  volumes:
  - hostPath:
      path: /var/lib/kubelet
      type: Directory
    name: kubelet-dir
  - hostPath:
      path: /var/lib/kubelet/plugins/efs.csi.aws.com/
      type: DirectoryOrCreate
    name: plugin-dir
  - hostPath:
      path: /var/lib/kubelet/plugins_registry/
      type: Directory
    name: registration-dir
  - hostPath:
      path: /var/run/efs
      type: DirectoryOrCreate
    name: efs-state-dir
  - hostPath:
      path: /var/amazon/efs
      type: DirectoryOrCreate
    name: efs-utils-config
  - hostPath:
      path: /etc/amazon/efs
      type: DirectoryOrCreate
    name: efs-utils-config-legacy
  - name: kube-api-access-tfx77
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2023-04-28T11:11:58Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2023-04-28T11:11:59Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2023-04-28T11:11:59Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2023-04-28T11:11:58Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://d116454b06933736e9fe8719f1a984821a101ac0740a5dc25a85ea4ad5f1c7a3
    image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.6.2
    imageID: k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a13bff2ed69af0cf4270f0cf47bdedf75a56c095cd95b91195ae6c713a9b1845
    lastState: {}
    name: csi-driver-registrar
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2023-04-28T11:11:58Z"
  - containerID: containerd://8ba54c690e213d4f86ee2b2d1d65aa0bf23aa7fbfc32032887f3418ebc03e07b
    image: docker.example.com/aws-efs-csi-driver:bbd459c49a876eac2d79a7ae8f8407fa7e12b101
    imageID: docker.example.com/aws-efs-csi-driver@sha256:7eb7f0373b3090a1a25e61f2f75d268b6e5281f34444902951276c4f880948da
    lastState: {}
    name: efs-plugin
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2023-04-28T11:11:58Z"
  - containerID: containerd://30783a8318f7abfdf89fc6056207b9c32ab92dd92677193dc22504a6aa48e485
    image: k8s.gcr.io/sig-storage/livenessprobe:v2.8.0
    imageID: k8s.gcr.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0
    lastState: {}
    name: liveness-probe
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2023-04-28T11:11:59Z"
  hostIP: 10.148.18.78
  phase: Running
  podIP: 100.117.213.205
  podIPs:
  - ip: 100.117.213.205
  qosClass: Burstable
  startTime: "2023-04-28T11:11:58Z"

3. From inside of efs-csi-node-zd4ws pod

cat /var/log/amazon/efs/mount-watchdog.log

2023-04-28 11:11:59 UTC - INFO - amazon-efs-mount-watchdog, version 1.34.5, is enabled and started
2023-04-28 11:11:59 UTC - WARNING - TLS tunnel for fs-0e53****.var.lib.kubelet.pods.798b93e9-9aad-467e-a287-b984fe158e49.volumes.kubernetes.io~csi.pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4.mount.20173 is not running
2023-04-28 11:11:59 UTC - INFO - Starting TLS tunnel: "/usr/bin/stunnel5 /var/run/efs/stunnel-config.fs-0e53****.var.lib.kubelet.pods.798b93e9-9aad-467e-a287-b984fe158e49.volumes.kubernetes.io~csi.pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4.mount.20173"
2023-04-28 11:11:59 UTC - INFO - Started TLS tunnel, pid: 22
2023-04-28 11:14:45 UTC - INFO - Sending signal SIGKILL(9) to stunnel. PID: 22, group ID: 22
2023-04-28 11:14:45 UTC - WARNING - Connection timeout for /var/lib/kubelet/pods/798b93e9-9aad-467e-a287-b984fe158e49/volumes/kubernetes.io~csi/pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4/mount after 30 sec, SIGKILL has been sent to the potential unhealthy stunnel 22, restarting a new stunnel process.
2023-04-28 11:14:45 UTC - INFO - Starting TLS tunnel: "/usr/bin/stunnel5 /var/run/efs/stunnel-config.fs-0e53****.var.lib.kubelet.pods.798b93e9-9aad-467e-a287-b984fe158e49.volumes.kubernetes.io~csi.pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4.mount.20173"
2023-04-28 11:14:45 UTC - INFO - Started TLS tunnel, pid: 163
2023-04-28 11:14:45 UTC - WARNING - Child TLS tunnel process 22 has exited, returncode=-9
2023-04-28 11:19:46 UTC - INFO - Sending signal SIGKILL(9) to stunnel. PID: 163, group ID: 163
2023-04-28 11:19:46 UTC - WARNING - Connection timeout for /var/lib/kubelet/pods/798b93e9-9aad-467e-a287-b984fe158e49/volumes/kubernetes.io~csi/pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4/mount after 30 sec, SIGKILL has been sent to the potential unhealthy stunnel 163, restarting a new stunnel process.
2023-04-28 11:19:46 UTC - INFO - Starting TLS tunnel: "/usr/bin/stunnel5 /var/run/efs/stunnel-config.fs-0e53****.var.lib.kubelet.pods.798b93e9-9aad-467e-a287-b984fe158e49.volumes.kubernetes.io~csi.pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4.mount.20173"
2023-04-28 11:19:46 UTC - INFO - Started TLS tunnel, pid: 437
2023-04-28 11:19:46 UTC - WARNING - Child TLS tunnel process 163 has exited, returncode=-9
2023-04-28 11:24:46 UTC - INFO - Sending signal SIGKILL(9) to stunnel. PID: 437, group ID: 437
2023-04-28 11:24:46 UTC - WARNING - Connection timeout for /var/lib/kubelet/pods/798b93e9-9aad-467e-a287-b984fe158e49/volumes/kubernetes.io~csi/pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4/mount after 30 sec, SIGKILL has been sent to the potential unhealthy stunnel 437, restarting a new stunnel process.
2023-04-28 11:24:46 UTC - INFO - Starting TLS tunnel: "/usr/bin/stunnel5 /var/run/efs/stunnel-config.fs-0e53****.var.lib.kubelet.pods.798b93e9-9aad-467e-a287-b984fe158e49.volumes.kubernetes.io~csi.pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4.mount.20173"
2023-04-28 11:24:46 UTC - INFO - Started TLS tunnel, pid: 711
2023-04-28 11:24:46 UTC - WARNING - Child TLS tunnel process 437 has exited, returncode=-9

`cat /var/run/efs/fs-0e53**.var.lib.kubelet.pods.798b93e9-9aad-467e-a287-b984fe158e49.volumes.kubernetes.io~csi.pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4.mount.20173`

{"cmd": ["/usr/bin/stunnel5", "/var/run/efs/stunnel-config.fs-0e53****.var.lib.kubelet.pods.798b93e9-9aad-467e-a287-b984fe158e49.volumes.kubernetes.io~csi.pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4.mount.20173"], "files": ["/var/run/efs/stunnel-config.fs-0e53****.var.lib.kubelet.pods.798b93e9-9aad-467e-a287-b984fe158e49.volumes.kubernetes.io~csi.pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4.mount.20173"], "mount_time": 1682679855.3937714, "mountpoint": "/var/lib/kubelet/pods/798b93e9-9aad-467e-a287-b984fe158e49/volumes/kubernetes.io~csi/pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4/mount", "accessPoint": "fsap-09d28b415e6f6dd59", "mountStateDir": "fs-0e53****.var.lib.kubelet.pods.798b93e9-9aad-467e-a287-b984fe158e49.volumes.kubernetes.io~csi.pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4.mount.20173+", "commonName": "efs-csi-node-cmtlw", "region": "eu-west-1", "certificateCreationTime": "230428110415Z", "certificate": "/var/run/efs/fs-0e53****.var.lib.kubelet.pods.798b93e9-9aad-467e-a287-b984fe158e49.volumes.kubernetes.io~csi.pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4.mount.20173+/certificate.pem", "privateKey": "/etc/amazon/efs/privateKey.pem", "fsId": "fs-0e53****", "unmount_count": 0, "last_stunnel_check_time": 1682681056.3202887, "pid": 711}
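The state file is plain JSON, so the fields the watchdog acts on (the stunnel pid, the mountpoint, the file system and access point IDs) can be extracted directly when comparing state across driver restarts. A minimal sketch, using only field names that appear in the dump above:

```python
import json

def summarize_state(state_json):
    """Pull the restart-loop-relevant fields out of a watchdog state file."""
    state = json.loads(state_json)
    # These keys appear in the captured state file; missing keys become None.
    return {k: state.get(k) for k in ("pid", "mountpoint", "fsId", "accessPoint")}
```

Comparing the `pid` here against the running stunnel processes in the new efs-csi-node pod shows whether the watchdog is tracking a tunnel that belonged to the previous pod.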

`cat /var/run/efs/stunnel-config.fs-0e53**.var.lib.kubelet.pods.798b93e9-9aad-467e-a287-b984fe158e49.volumes.kubernetes.io~csi.pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4.mount.20173`

fips = no
foreground = quiet
socket = l:SO_REUSEADDR=yes
socket = a:SO_BINDTODEVICE=lo
pid = /var/run/efs/fs-0e53****.var.lib.kubelet.pods.798b93e9-9aad-467e-a287-b984fe158e49.volumes.kubernetes.io~csi.pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4.mount.20173+/stunnel.pid
[efs]
client = yes
accept = 127.0.0.1:20173
connect = fs-0e53****.efs.eu-west-1.amazonaws.com:2049
sslVersion = TLSv1.2
renegotiation = no
TIMEOUTbusy = 20
TIMEOUTclose = 0
TIMEOUTidle = 70
delay = yes
verify = 2
CAfile = /etc/amazon/efs/efs-utils.crt
cert = /var/run/efs/fs-0e53****.var.lib.kubelet.pods.798b93e9-9aad-467e-a287-b984fe158e49.volumes.kubernetes.io~csi.pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4.mount.20173+/certificate.pem
key = /etc/amazon/efs/privateKey.pem
checkHost = fs-0e53****.efs.eu-west-1.amazonaws.com
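This config is why the volume shows up as `127.0.0.1:/` inside the pod: the NFS client connects to the local `accept` port and stunnel forwards it over TLS to the `connect` endpoint, so if the tunnel process dies the local port goes dead and the mount hangs. A quick way to pull those two addresses out of a captured config (an illustrative helper; repeated keys such as `socket` keep their first value here):

```python
def parse_stunnel_config(text):
    """Parse `key = value` lines from an stunnel config, skipping section
    headers like [efs] and keeping the first occurrence of repeated keys."""
    options = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("[") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        options.setdefault(key.strip(), value.strip())
    return options
```

The `accept` port (20173) also matches the suffix of the mount state file name, which ties the stunnel config to the specific PVC mount.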

4. kubectl sudo logs efs-csi-node-zd4ws -n platform

Defaulted container "efs-plugin" out of: efs-plugin, csi-driver-registrar, liveness-probe
I0428 11:11:58.912796       1 config_dir.go:88] Creating symlink from '/etc/amazon/efs' to '/var/amazon/efs'
I0428 11:11:58.913406       1 metadata.go:63] getting MetadataService...
I0428 11:11:58.915907       1 metadata.go:68] retrieving metadata from EC2 metadata service
I0428 11:11:58.916876       1 cloud.go:137] EFS Client created using the following endpoint: https://elasticfilesystem.eu-west-1.amazonaws.com
I0428 11:11:58.916893       1 driver.go:84] Node Service capability for Get Volume Stats Not enabled
I0428 11:11:58.916948       1 driver.go:140] Did not find any input tags.
I0428 11:11:58.917102       1 driver.go:113] Registering Node Server
I0428 11:11:58.917113       1 driver.go:115] Registering Controller Server
I0428 11:11:58.917120       1 driver.go:118] Starting efs-utils watchdog
I0428 11:11:58.917176       1 efs_watch_dog.go:221] Skip copying /etc/amazon/efs/efs-utils.conf since it exists already
I0428 11:11:58.917185       1 efs_watch_dog.go:221] Skip copying /etc/amazon/efs/efs-utils.crt since it exists already
I0428 11:11:58.917393       1 driver.go:124] Starting reaper
I0428 11:11:58.917407       1 driver.go:127] Listening for connections on address: &net.UnixAddr{Name:"/csi/csi.sock", Net:"unix"}
I0428 11:11:59.963185       1 node.go:306] NodeGetInfo: called with args
I0428 11:12:39.644154       1 node.go:290] NodeGetCapabilities: called with args
I0428 11:13:53.158402       1 node.go:290] NodeGetCapabilities: called with args
I0428 11:15:00.383305       1 node.go:290] NodeGetCapabilities: called with args
I0428 11:16:46.814909       1 node.go:290] NodeGetCapabilities: called with args
I0428 11:18:45.636250       1 node.go:290] NodeGetCapabilities: called with args
I0428 11:20:27.276897       1 node.go:290] NodeGetCapabilities: called with args
I0428 11:22:25.641832       1 node.go:290] NodeGetCapabilities: called with args
I0428 11:24:13.661263       1 node.go:290] NodeGetCapabilities: called with args
I0428 11:25:52.474409       1 node.go:290] NodeGetCapabilities: called with args
I0428 11:27:06.194153       1 node.go:290] NodeGetCapabilities: called with args
I0428 11:28:20.870896       1 node.go:290] NodeGetCapabilities: called with args
I0428 11:29:22.687369       1 node.go:290] NodeGetCapabilities: called with args
I0428 11:30:24.675202       1 node.go:290] NodeGetCapabilities: called with args

5. File system view from pod test-nginx-smutje, where the EFS volume is mounted at /opt/storage

df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                 128.0G      5.2G    122.8G   4% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                     7.6G         0      7.6G   0% /sys/fs/cgroup
/dev/nvme0n1p1          128.0G      5.2G    122.8G   4% /opt/host-shared
/dev/nvme0n1p1          128.0G      5.2G    122.8G   4% /opt/default-init-container
127.0.0.1:/               8.0E      3.1G      8.0E   0% /opt/storage
/dev/nvme0n1p1          128.0G      5.2G    122.8G   4% /etc/hosts
/dev/nvme0n1p1          128.0G      5.2G    122.8G   4% /dev/termination-log
/dev/nvme0n1p1          128.0G      5.2G    122.8G   4% /etc/hostname
/dev/nvme0n1p1          128.0G      5.2G    122.8G   4% /etc/resolv.conf
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                    64.0M     12.0K     64.0M   0% /run/secrets/kubernetes.io/serviceaccount
tmpfs                     7.6G         0      7.6G   0% /proc/acpi
tmpfs                    64.0M         0     64.0M   0% /proc/kcore
tmpfs                    64.0M         0     64.0M   0% /proc/keys
tmpfs                    64.0M         0     64.0M   0% /proc/latency_stats
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                    64.0M         0     64.0M   0% /proc/sched_debug
tmpfs                     7.6G         0      7.6G   0% /sys/firmware
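In the `df` output, `/opt/storage` is served from `127.0.0.1:/`, i.e. through the stunnel loopback port, and that loopback path is exactly what breaks when the tunnel dies with the old driver pod. Spotting such mounts in a captured `df` dump can be done mechanically (illustrative sketch; a busybox-style six-column layout is assumed):

```python
def loopback_mounts(df_text):
    """Return mountpoints whose source is 127.0.0.1 (stunnel-backed EFS)."""
    mounts = []
    for line in df_text.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 6 and fields[0].startswith("127.0.0.1:"):
            mounts.append(fields[5])
    return mounts
```

Any mountpoint returned here depends on a live stunnel process in the efs-csi-node pod on that node.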

6. /var/log/messages from the k8s EC2 node where the application pod test-nginx-smutje and efs-csi-node-zd4ws are running.

# /var/log/messages

Apr 28 10:40:25 ip-10-148-18-78 kubelet: I0428 10:40:25.348092    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 10:41:17 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 130590ms.
Apr 28 10:41:25 ip-10-148-18-78 kubelet: I0428 10:41:25.348430    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 10:42:25 ip-10-148-18-78 kubelet: I0428 10:42:25.348675    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 10:42:47 ip-10-148-18-78 amazon-ssm-agent: 2023-04-28 10:42:47 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Apr 28 10:42:47 ip-10-148-18-78 amazon-ssm-agent: 2023-04-28 10:42:47 INFO [CredentialRefresher] Next credential rotation will be in 32.09999726776667 minutes
Apr 28 10:42:47 ip-10-148-18-78 dhclient[2510]: DHCPREQUEST on eth0 to 10.148.18.1 port 67 (xid=0x64850070)
Apr 28 10:42:47 ip-10-148-18-78 dhclient[2510]: DHCPACK from 10.148.18.1 (xid=0x64850070)
Apr 28 10:42:47 ip-10-148-18-78 NET: dhclient: Locked /run/dhclient/resolv.lock
Apr 28 10:42:47 ip-10-148-18-78 dhclient[2510]: bound to 10.148.18.78 -- renewal in 1611 seconds.
Apr 28 10:42:47 ip-10-148-18-78 ec2net: [get_meta] Querying IMDS for meta-data/network/interfaces/macs/02:8c:95:08:74:97/local-ipv4s
Apr 28 10:42:47 ip-10-148-18-78 ec2net: [get_meta] Getting token for IMDSv2.
Apr 28 10:42:47 ip-10-148-18-78 ec2net: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:8c:95:08:74:97/local-ipv4s
Apr 28 10:42:47 ip-10-148-18-78 ec2net: [remove_aliases] Removing aliases of eth0
Apr 28 10:43:25 ip-10-148-18-78 kubelet: I0428 10:43:25.348797    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 10:43:28 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 111270ms.
Apr 28 10:44:25 ip-10-148-18-78 kubelet: I0428 10:44:25.348974    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 10:45:19 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 114810ms.
Apr 28 10:45:25 ip-10-148-18-78 kubelet: I0428 10:45:25.349543    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 10:46:25 ip-10-148-18-78 kubelet: I0428 10:46:25.349835    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 10:47:14 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 125100ms.
Apr 28 10:47:25 ip-10-148-18-78 kubelet: I0428 10:47:25.349910    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 10:48:25 ip-10-148-18-78 kubelet: I0428 10:48:25.350081    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 10:49:19 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 131950ms.
Apr 28 10:49:25 ip-10-148-18-78 kubelet: I0428 10:49:25.350206    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 10:50:01 ip-10-148-18-78 systemd: Created slice User Slice of root.
Apr 28 10:50:01 ip-10-148-18-78 systemd: Started Session 16 of user root.
Apr 28 10:50:01 ip-10-148-18-78 systemd: Removed slice User Slice of root.
Apr 28 10:50:25 ip-10-148-18-78 kubelet: I0428 10:50:25.350324    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 10:51:25 ip-10-148-18-78 kubelet: I0428 10:51:25.350470    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 10:51:31 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 119480ms.
Apr 28 10:52:25 ip-10-148-18-78 kubelet: I0428 10:52:25.351123    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 10:53:25 ip-10-148-18-78 kubelet: I0428 10:53:25.351290    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 10:53:31 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 108600ms.
Apr 28 10:54:25 ip-10-148-18-78 kubelet: I0428 10:54:25.351507    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 10:55:20 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 118140ms.
Apr 28 10:55:25 ip-10-148-18-78 kubelet: I0428 10:55:25.351861    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 10:56:25 ip-10-148-18-78 kubelet: I0428 10:56:25.352138    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 10:57:18 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 110150ms.
Apr 28 10:57:25 ip-10-148-18-78 kubelet: I0428 10:57:25.352955    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 10:58:25 ip-10-148-18-78 kubelet: I0428 10:58:25.354072    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 10:59:08 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 131330ms.
Apr 28 10:59:25 ip-10-148-18-78 kubelet: I0428 10:59:25.354181    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:00:00 ip-10-148-18-78 systemd: Started Rotate and Compress System Logs.
Apr 28 11:00:01 ip-10-148-18-78 systemd: Created slice User Slice of root.
Apr 28 11:00:01 ip-10-148-18-78 systemd: Started Session 17 of user root.
Apr 28 11:00:01 ip-10-148-18-78 systemd: Removed slice User Slice of root.
Apr 28 11:00:25 ip-10-148-18-78 kubelet: I0428 11:00:25.355038    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:01:01 ip-10-148-18-78 systemd: Created slice User Slice of root.
Apr 28 11:01:01 ip-10-148-18-78 systemd: Started Session 18 of user root.
Apr 28 11:01:01 ip-10-148-18-78 systemd: Removed slice User Slice of root.
Apr 28 11:01:20 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 127680ms.
Apr 28 11:01:25 ip-10-148-18-78 kubelet: I0428 11:01:25.356019    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:02:25 ip-10-148-18-78 kubelet: I0428 11:02:25.357106    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:03:25 ip-10-148-18-78 kubelet: I0428 11:03:25.357638    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:03:27 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 128070ms.
Apr 28 11:04:14 ip-10-148-18-78 kubelet: I0428 11:04:14.680186    3995 kubelet.go:2104] "SyncLoop ADD" source="api" pods=[default/test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4]
Apr 28 11:04:14 ip-10-148-18-78 kubelet: I0428 11:04:14.680245    3995 topology_manager.go:200] "Topology Admit Handler"
Apr 28 11:04:14 ip-10-148-18-78 systemd: Created slice libcontainer container kubepods-pod798b93e9_9aad_467e_a287_b984fe158e49.slice.
Apr 28 11:04:14 ip-10-148-18-78 kubelet: I0428 11:04:14.796711    3995 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-init-container\" (UniqueName: \"kubernetes.io/empty-dir/798b93e9-9aad-467e-a287-b984fe158e49-default-init-container\") pod \"test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4\" (UID: \"798b93e9-9aad-467e-a287-b984fe158e49\") "
Apr 28 11:04:14 ip-10-148-18-78 kubelet: I0428 11:04:14.796764    3995 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-shared\" (UniqueName: \"kubernetes.io/host-path/798b93e9-9aad-467e-a287-b984fe158e49-host-shared\") pod \"test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4\" (UID: \"798b93e9-9aad-467e-a287-b984fe158e49\") "
Apr 28 11:04:14 ip-10-148-18-78 kubelet: I0428 11:04:14.796809    3995 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4\" (UniqueName: \"kubernetes.io/csi/efs.csi.aws.com^fs-0e530bc5::fsap-09d28b415e6f6dd59\") pod \"test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4\" (UID: \"798b93e9-9aad-467e-a287-b984fe158e49\") "
Apr 28 11:04:14 ip-10-148-18-78 kubelet: I0428 11:04:14.796899    3995 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crdcn\" (UniqueName: \"kubernetes.io/projected/798b93e9-9aad-467e-a287-b984fe158e49-kube-api-access-crdcn\") pod \"test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4\" (UID: \"798b93e9-9aad-467e-a287-b984fe158e49\") "
Apr 28 11:04:14 ip-10-148-18-78 kubelet: I0428 11:04:14.898225    3995 reconciler.go:266] "operationExecutor.MountVolume started for volume \"kube-api-access-crdcn\" (UniqueName: \"kubernetes.io/projected/798b93e9-9aad-467e-a287-b984fe158e49-kube-api-access-crdcn\") pod \"test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4\" (UID: \"798b93e9-9aad-467e-a287-b984fe158e49\") "
Apr 28 11:04:14 ip-10-148-18-78 kubelet: I0428 11:04:14.898285    3995 reconciler.go:266] "operationExecutor.MountVolume started for volume \"default-init-container\" (UniqueName: \"kubernetes.io/empty-dir/798b93e9-9aad-467e-a287-b984fe158e49-default-init-container\") pod \"test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4\" (UID: \"798b93e9-9aad-467e-a287-b984fe158e49\") "
Apr 28 11:04:14 ip-10-148-18-78 kubelet: I0428 11:04:14.898314    3995 reconciler.go:266] "operationExecutor.MountVolume started for volume \"host-shared\" (UniqueName: \"kubernetes.io/host-path/798b93e9-9aad-467e-a287-b984fe158e49-host-shared\") pod \"test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4\" (UID: \"798b93e9-9aad-467e-a287-b984fe158e49\") "
Apr 28 11:04:14 ip-10-148-18-78 kubelet: I0428 11:04:14.898353    3995 reconciler.go:266] "operationExecutor.MountVolume started for volume \"pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4\" (UniqueName: \"kubernetes.io/csi/efs.csi.aws.com^fs-0e530bc5::fsap-09d28b415e6f6dd59\") pod \"test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4\" (UID: \"798b93e9-9aad-467e-a287-b984fe158e49\") "
Apr 28 11:04:14 ip-10-148-18-78 kubelet: I0428 11:04:14.898457    3995 operation_generator.go:714] MountVolume.SetUp succeeded for volume "host-shared" (UniqueName: "kubernetes.io/host-path/798b93e9-9aad-467e-a287-b984fe158e49-host-shared") pod "test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4" (UID: "798b93e9-9aad-467e-a287-b984fe158e49")
Apr 28 11:04:14 ip-10-148-18-78 kubelet: I0428 11:04:14.898607    3995 operation_generator.go:714] MountVolume.SetUp succeeded for volume "default-init-container" (UniqueName: "kubernetes.io/empty-dir/798b93e9-9aad-467e-a287-b984fe158e49-default-init-container") pod "test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4" (UID: "798b93e9-9aad-467e-a287-b984fe158e49")
Apr 28 11:04:14 ip-10-148-18-78 kubelet: I0428 11:04:14.899726    3995 csi_attacher.go:354] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Apr 28 11:04:14 ip-10-148-18-78 kubelet: I0428 11:04:14.899752    3995 operation_generator.go:632] MountVolume.MountDevice succeeded for volume "pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4" (UniqueName: "kubernetes.io/csi/efs.csi.aws.com^fs-0e530bc5::fsap-09d28b415e6f6dd59") pod "test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4" (UID: "798b93e9-9aad-467e-a287-b984fe158e49") device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4/globalmount"
Apr 28 11:04:14 ip-10-148-18-78 kubelet: I0428 11:04:14.911854    3995 operation_generator.go:714] MountVolume.SetUp succeeded for volume "kube-api-access-crdcn" (UniqueName: "kubernetes.io/projected/798b93e9-9aad-467e-a287-b984fe158e49-kube-api-access-crdcn") pod "test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4" (UID: "798b93e9-9aad-467e-a287-b984fe158e49")
Apr 28 11:04:15 ip-10-148-18-78 kernel: FS-Cache: Loaded
Apr 28 11:04:15 ip-10-148-18-78 kernel: FS-Cache: Netfs 'nfs' registered for caching
Apr 28 11:04:15 ip-10-148-18-78 kernel: Key type dns_resolver registered
Apr 28 11:04:15 ip-10-148-18-78 kernel: NFS: Registering the id_resolver key type
Apr 28 11:04:15 ip-10-148-18-78 kernel: Key type id_resolver registered
Apr 28 11:04:15 ip-10-148-18-78 kernel: Key type id_legacy registered
Apr 28 11:04:15 ip-10-148-18-78 kubelet: I0428 11:04:15.749896    3995 operation_generator.go:714] MountVolume.SetUp succeeded for volume "pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4" (UniqueName: "kubernetes.io/csi/efs.csi.aws.com^fs-0e530bc5::fsap-09d28b415e6f6dd59") pod "test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4" (UID: "798b93e9-9aad-467e-a287-b984fe158e49")
Apr 28 11:04:15 ip-10-148-18-78 kubelet: I0428 11:04:15.913610    3995 kuberuntime_manager.go:484] "No sandbox for pod can be found. Need to start a new one" pod="default/test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4"
Apr 28 11:04:16 ip-10-148-18-78 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Apr 28 11:04:16 ip-10-148-18-78 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliceac911869a: link becomes ready
Apr 28 11:04:16 ip-10-148-18-78 kubelet: I0428 11:04:16.347607    3995 kubelet.go:2111] "SyncLoop UPDATE" source="api" pods=[default/test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4]
Apr 28 11:04:16 ip-10-148-18-78 containerd: 2023-04-28 11:04:15.979 [INFO][1994] plugin.go 261: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--10--148--18--78.eu--west--1.compute.internal-k8s-test--nginx--smutje--1fa1a4c3--6d6fc845dd--b9cb4-eth0 test-nginx-smutje-1fa1a4c3-6d6fc845dd- default  798b93e9-9aad-467e-a287-b984fe158e49 99861909 0 2023-04-28 11:04:14 +0000 UTC <nil> <nil> map[critical:false deploy.id:D1FDC92A owner:sre-infrastructure resource.name:test-nginx-smutje-1fa1a4c3 service.name:test-nginx-smutje smutje.release.id:main-11 pod-template-hash:6d6fc845dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default tags.datadoghq.com/env:sandbox tags.datadoghq.com/service:test-nginx-smutje tags.datadoghq.com/version:main-11] map[] [] []  []} {k8s  ip-10-148-18-78.eu-west-1.compute.internal  test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4 eth0 default [] []   [kns.default ksa.default.default] caliceac911869a  []}} ContainerID="f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177" Namespace="default" Pod="test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4" WorkloadEndpoint="ip--10--148--18--78.eu--west--1.compute.internal-k8s-test--nginx--smutje--1fa1a4c3--6d6fc845dd--b9cb4-"
Apr 28 11:04:16 ip-10-148-18-78 containerd: 2023-04-28 11:04:15.979 [INFO][1994] k8s.go 72: Extracted identifiers for CmdAddK8s ContainerID="f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177" Namespace="default" Pod="test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4" WorkloadEndpoint="ip--10--148--18--78.eu--west--1.compute.internal-k8s-test--nginx--smutje--1fa1a4c3--6d6fc845dd--b9cb4-eth0"
Apr 28 11:04:16 ip-10-148-18-78 containerd: 2023-04-28 11:04:16.016 [INFO][2005] ipam_plugin.go 226: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177" HandleID="k8s-pod-network.f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177" Workload="ip--10--148--18--78.eu--west--1.compute.internal-k8s-test--nginx--smutje--1fa1a4c3--6d6fc845dd--b9cb4-eth0"
Apr 28 11:04:16 ip-10-148-18-78 containerd: 2023-04-28 11:04:16.032 [INFO][2005] ipam_plugin.go 265: Auto assigning IP ContainerID="f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177" HandleID="k8s-pod-network.f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177" Workload="ip--10--148--18--78.eu--west--1.compute.internal-k8s-test--nginx--smutje--1fa1a4c3--6d6fc845dd--b9cb4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000454300), Attrs:map[string]string{"namespace":"default", "node":"ip-10-148-18-78.eu-west-1.compute.internal", "pod":"test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4", "timestamp":"2023-04-28 11:04:16.016612827 +0000 UTC"}, Hostname:"ip-10-148-18-78.eu-west-1.compute.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil)}
Apr 28 11:04:16 ip-10-148-18-78 containerd: time="2023-04-28T11:04:16Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:355"
Apr 28 11:04:16 ip-10-148-18-78 containerd: time="2023-04-28T11:04:16Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:370"
Apr 28 11:04:16 ip-10-148-18-78 containerd: 2023-04-28 11:04:16.032 [INFO][2005] ipam.go 97: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-10-148-18-78.eu-west-1.compute.internal'
Apr 28 11:04:16 ip-10-148-18-78 containerd: 2023-04-28 11:04:16.032 [INFO][2005] ipam.go 596: Looking up existing affinities for host handle="k8s-pod-network.f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177" host="ip-10-148-18-78.eu-west-1.compute.internal"
Apr 28 11:04:16 ip-10-148-18-78 containerd: 2023-04-28 11:04:16.070 [INFO][2005] ipam.go 358: Looking up existing affinities for host host="ip-10-148-18-78.eu-west-1.compute.internal"
Apr 28 11:04:16 ip-10-148-18-78 containerd: 2023-04-28 11:04:16.150 [INFO][2005] ipam.go 441: Trying affinity for 100.117.213.192/26 host="ip-10-148-18-78.eu-west-1.compute.internal"
Apr 28 11:04:16 ip-10-148-18-78 containerd: 2023-04-28 11:04:16.163 [INFO][2005] ipam.go 145: Attempting to load block cidr=100.117.213.192/26 host="ip-10-148-18-78.eu-west-1.compute.internal"
Apr 28 11:04:16 ip-10-148-18-78 containerd: 2023-04-28 11:04:16.193 [INFO][2005] ipam.go 222: Affinity is confirmed and block has been loaded cidr=100.117.213.192/26 host="ip-10-148-18-78.eu-west-1.compute.internal"
Apr 28 11:04:16 ip-10-148-18-78 containerd: 2023-04-28 11:04:16.193 [INFO][2005] ipam.go 1087: Attempting to assign 1 addresses from block block=100.117.213.192/26 handle="k8s-pod-network.f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177" host="ip-10-148-18-78.eu-west-1.compute.internal"
Apr 28 11:04:16 ip-10-148-18-78 containerd: 2023-04-28 11:04:16.208 [INFO][2005] ipam.go 1564: Creating new handle: k8s-pod-network.f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177
Apr 28 11:04:16 ip-10-148-18-78 containerd: 2023-04-28 11:04:16.238 [INFO][2005] ipam.go 1110: Writing block in order to claim IPs block=100.117.213.192/26 handle="k8s-pod-network.f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177" host="ip-10-148-18-78.eu-west-1.compute.internal"
Apr 28 11:04:16 ip-10-148-18-78 containerd: 2023-04-28 11:04:16.261 [INFO][2005] ipam.go 1123: Successfully claimed IPs: [100.117.213.204/26] block=100.117.213.192/26 handle="k8s-pod-network.f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177" host="ip-10-148-18-78.eu-west-1.compute.internal"
Apr 28 11:04:16 ip-10-148-18-78 containerd: 2023-04-28 11:04:16.261 [INFO][2005] ipam.go 777: Auto-assigned 1 out of 1 IPv4s: [100.117.213.204/26] handle="k8s-pod-network.f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177" host="ip-10-148-18-78.eu-west-1.compute.internal"
Apr 28 11:04:16 ip-10-148-18-78 containerd: time="2023-04-28T11:04:16Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:376"
Apr 28 11:04:16 ip-10-148-18-78 containerd: 2023-04-28 11:04:16.261 [INFO][2005] ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[100.117.213.204/26] IPv6=[] ContainerID="f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177" HandleID="k8s-pod-network.f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177" Workload="ip--10--148--18--78.eu--west--1.compute.internal-k8s-test--nginx--smutje--1fa1a4c3--6d6fc845dd--b9cb4-eth0"
Apr 28 11:04:16 ip-10-148-18-78 containerd: 2023-04-28 11:04:16.263 [INFO][1994] k8s.go 375: Populated endpoint ContainerID="f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177" Namespace="default" Pod="test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4" WorkloadEndpoint="ip--10--148--18--78.eu--west--1.compute.internal-k8s-test--nginx--smutje--1fa1a4c3--6d6fc845dd--b9cb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--10--148--18--78.eu--west--1.compute.internal-k8s-test--nginx--smutje--1fa1a4c3--6d6fc845dd--b9cb4-eth0", GenerateName:"test-nginx-smutje-1fa1a4c3-6d6fc845dd-", Namespace:"default", SelfLink:"", UID:"798b93e9-9aad-467e-a287-b984fe158e49", ResourceVersion:"99861909", Generation:0, CreationTimestamp:time.Date(2023, time.April, 28, 11, 4, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"critical":"false", "deploy.id":"D1FDC92A", "owner":"sre-infrastructure", "resource.name":"test-nginx-smutje-1fa1a4c3", "service.name":"test-nginx-smutje", "smutje.release.id":"main-11", "pod-template-hash":"6d6fc845dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default", "tags.datadoghq.com/env":"sandbox", "tags.datadoghq.com/service":"test-nginx-smutje", "tags.datadoghq.com/version":"main-11"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-10-148-18-78.eu-west-1.compute.internal", ContainerID:"", Pod:"test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"100.117.213.204/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", 
"ksa.default.default"}, InterfaceName:"caliceac911869a", MAC:"", Ports:[]v3.EndpointPort(nil)}}
Apr 28 11:04:16 ip-10-148-18-78 containerd: 2023-04-28 11:04:16.263 [INFO][1994] k8s.go 376: Calico CNI using IPs: [100.117.213.204/32] ContainerID="f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177" Namespace="default" Pod="test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4" WorkloadEndpoint="ip--10--148--18--78.eu--west--1.compute.internal-k8s-test--nginx--smutje--1fa1a4c3--6d6fc845dd--b9cb4-eth0"
Apr 28 11:04:16 ip-10-148-18-78 containerd: 2023-04-28 11:04:16.263 [INFO][1994] dataplane_linux.go 66: Setting the host side veth name to caliceac911869a ContainerID="f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177" Namespace="default" Pod="test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4" WorkloadEndpoint="ip--10--148--18--78.eu--west--1.compute.internal-k8s-test--nginx--smutje--1fa1a4c3--6d6fc845dd--b9cb4-eth0"
Apr 28 11:04:16 ip-10-148-18-78 containerd: 2023-04-28 11:04:16.273 [INFO][1994] dataplane_linux.go 442: Disabling IPv4 forwarding ContainerID="f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177" Namespace="default" Pod="test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4" WorkloadEndpoint="ip--10--148--18--78.eu--west--1.compute.internal-k8s-test--nginx--smutje--1fa1a4c3--6d6fc845dd--b9cb4-eth0"
Apr 28 11:04:16 ip-10-148-18-78 containerd: 2023-04-28 11:04:16.299 [INFO][1994] k8s.go 403: Added Mac, interface name, and active container ID to endpoint ContainerID="f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177" Namespace="default" Pod="test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4" WorkloadEndpoint="ip--10--148--18--78.eu--west--1.compute.internal-k8s-test--nginx--smutje--1fa1a4c3--6d6fc845dd--b9cb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--10--148--18--78.eu--west--1.compute.internal-k8s-test--nginx--smutje--1fa1a4c3--6d6fc845dd--b9cb4-eth0", GenerateName:"test-nginx-smutje-1fa1a4c3-6d6fc845dd-", Namespace:"default", SelfLink:"", UID:"798b93e9-9aad-467e-a287-b984fe158e49", ResourceVersion:"99861909", Generation:0, CreationTimestamp:time.Date(2023, time.April, 28, 11, 4, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"critical":"false", "deploy.id":"D1FDC92A", "owner":"sre-infrastructure", "resource.name":"test-nginx-smutje-1fa1a4c3", "service.name":"test-nginx-smutje", "smutje.release.id":"main-11", "pod-template-hash":"6d6fc845dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default", "tags.datadoghq.com/env":"sandbox", "tags.datadoghq.com/service":"test-nginx-smutje", "tags.datadoghq.com/version":"main-11"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-10-148-18-78.eu-west-1.compute.internal", ContainerID:"f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177", Pod:"test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4", Endpoint:"eth0", ServiceAccountName:"default", 
IPNetworks:[]string{"100.117.213.204/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"caliceac911869a", MAC:"56:a7:30:88:75:61", Ports:[]v3.EndpointPort(nil)}}
Apr 28 11:04:16 ip-10-148-18-78 containerd: 2023-04-28 11:04:16.351 [INFO][1994] k8s.go 477: Wrote updated endpoint to datastore ContainerID="f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177" Namespace="default" Pod="test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4" WorkloadEndpoint="ip--10--148--18--78.eu--west--1.compute.internal-k8s-test--nginx--smutje--1fa1a4c3--6d6fc845dd--b9cb4-eth0"
Apr 28 11:04:16 ip-10-148-18-78 containerd: time="2023-04-28T11:04:16.369902902Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177 pid=2046
Apr 28 11:04:16 ip-10-148-18-78 systemd: Started libcontainer container f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177.
Apr 28 11:04:16 ip-10-148-18-78 systemd: Couldn't stat device /dev/char/10:200
Apr 28 11:04:16 ip-10-148-18-78 kubelet: I0428 11:04:16.605984    3995 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 28 11:04:16 ip-10-148-18-78 systemd: Started libcontainer container 35bc2767284dd97ea9f25ffa5105ef467a3a0fa330d859cb6caaf28845013619.
Apr 28 11:04:16 ip-10-148-18-78 systemd: Couldn't stat device /dev/char/10:200
Apr 28 11:04:16 ip-10-148-18-78 containerd: time="2023-04-28T11:04:16.875480580Z" level=error msg="copy shim log" error="read /proc/self/fd/169: file already closed"
Apr 28 11:04:17 ip-10-148-18-78 kubelet: I0428 11:04:17.060009    3995 generic.go:296] "Generic (PLEG): container finished" podID=798b93e9-9aad-467e-a287-b984fe158e49 containerID="35bc2767284dd97ea9f25ffa5105ef467a3a0fa330d859cb6caaf28845013619" exitCode=0
Apr 28 11:04:17 ip-10-148-18-78 kubelet: I0428 11:04:17.060058    3995 kubelet.go:2142] "SyncLoop (PLEG): event for pod" pod="default/test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4" event=&{ID:798b93e9-9aad-467e-a287-b984fe158e49 Type:ContainerDied Data:35bc2767284dd97ea9f25ffa5105ef467a3a0fa330d859cb6caaf28845013619}
Apr 28 11:04:17 ip-10-148-18-78 kubelet: I0428 11:04:17.060085    3995 kubelet.go:2142] "SyncLoop (PLEG): event for pod" pod="default/test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4" event=&{ID:798b93e9-9aad-467e-a287-b984fe158e49 Type:ContainerStarted Data:f254e5c60f4977901c9a3b46796fdb9ae5c2552780b016204e213f8183cee177}
Apr 28 11:04:17 ip-10-148-18-78 systemd: Started libcontainer container 7626a949e13056a44095875a1a46f0f6e5a203f636f53478049070c2ffff8aed.
Apr 28 11:04:17 ip-10-148-18-78 systemd: Couldn't stat device /dev/char/10:200
Apr 28 11:04:18 ip-10-148-18-78 kubelet: I0428 11:04:18.062091    3995 kubelet.go:2142] "SyncLoop (PLEG): event for pod" pod="default/test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4" event=&{ID:798b93e9-9aad-467e-a287-b984fe158e49 Type:ContainerStarted Data:7626a949e13056a44095875a1a46f0f6e5a203f636f53478049070c2ffff8aed}
Apr 28 11:04:25 ip-10-148-18-78 kubelet: I0428 11:04:25.358282    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:05:25 ip-10-148-18-78 kubelet: I0428 11:05:25.358508    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:05:35 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 119390ms.
Apr 28 11:06:25 ip-10-148-18-78 kubelet: I0428 11:06:25.359298    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:07:25 ip-10-148-18-78 kubelet: I0428 11:07:25.360187    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:07:35 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 120380ms.
Apr 28 11:08:25 ip-10-148-18-78 kubelet: I0428 11:08:25.360416    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:09:25 ip-10-148-18-78 kubelet: I0428 11:09:25.361268    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:09:35 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 128590ms.
Apr 28 11:09:38 ip-10-148-18-78 dhclient[2510]: DHCPREQUEST on eth0 to 10.148.18.1 port 67 (xid=0x64850070)
Apr 28 11:09:38 ip-10-148-18-78 dhclient[2510]: DHCPACK from 10.148.18.1 (xid=0x64850070)
Apr 28 11:09:38 ip-10-148-18-78 NET: dhclient: Locked /run/dhclient/resolv.lock
Apr 28 11:09:38 ip-10-148-18-78 dhclient[2510]: bound to 10.148.18.78 -- renewal in 1776 seconds.
Apr 28 11:09:38 ip-10-148-18-78 ec2net: [get_meta] Querying IMDS for meta-data/network/interfaces/macs/02:8c:95:08:74:97/local-ipv4s
Apr 28 11:09:38 ip-10-148-18-78 ec2net: [get_meta] Getting token for IMDSv2.
Apr 28 11:09:38 ip-10-148-18-78 ec2net: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:8c:95:08:74:97/local-ipv4s
Apr 28 11:09:38 ip-10-148-18-78 ec2net: [remove_aliases] Removing aliases of eth0
Apr 28 11:10:01 ip-10-148-18-78 systemd: Created slice User Slice of root.
Apr 28 11:10:01 ip-10-148-18-78 systemd: Started Session 19 of user root.
Apr 28 11:10:01 ip-10-148-18-78 systemd: Removed slice User Slice of root.
Apr 28 11:10:25 ip-10-148-18-78 kubelet: I0428 11:10:25.362193    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:11:25 ip-10-148-18-78 kubelet: I0428 11:11:25.362371    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:11:44 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 119310ms.
Apr 28 11:11:56 ip-10-148-18-78 kubelet: I0428 11:11:56.747002    3995 kubelet.go:2120] "SyncLoop DELETE" source="api" pods=[platform/efs-csi-node-cmtlw]
Apr 28 11:11:56 ip-10-148-18-78 kubelet: I0428 11:11:56.747303    3995 kuberuntime_container.go:723] "Killing container with a grace period" pod="platform/efs-csi-node-cmtlw" podUID=284bce63-1b1a-4bda-b538-1996dce42e3c containerName="efs-plugin" containerID="containerd://a6052f1b7df0ce819ade738c4fe7007e897ceb3c4414d6d162a7be9f65ad95fa" gracePeriod=30
Apr 28 11:11:56 ip-10-148-18-78 kubelet: I0428 11:11:56.747379    3995 kuberuntime_container.go:723] "Killing container with a grace period" pod="platform/efs-csi-node-cmtlw" podUID=284bce63-1b1a-4bda-b538-1996dce42e3c containerName="liveness-probe" containerID="containerd://f467b2d712a83b32044dffa37b1c6fe3cba42b063f49a7ac194693f6b8338b49" gracePeriod=30
Apr 28 11:11:56 ip-10-148-18-78 kubelet: I0428 11:11:56.747384    3995 kuberuntime_container.go:723] "Killing container with a grace period" pod="platform/efs-csi-node-cmtlw" podUID=284bce63-1b1a-4bda-b538-1996dce42e3c containerName="csi-driver-registrar" containerID="containerd://51895e69f6b3bc16cf73f94c67083e8e8b8f273f856d3122a49b71436124bf8f" gracePeriod=30
Apr 28 11:11:56 ip-10-148-18-78 kubelet: I0428 11:11:56.772687    3995 plugin_watcher.go:215] "Removing socket path from desired state cache" path="/var/lib/kubelet/plugins_registry/efs.csi.aws.com-reg.sock"
Apr 28 11:11:56 ip-10-148-18-78 containerd: time="2023-04-28T11:11:56.781352329Z" level=error msg="copy shim log" error="read /proc/self/fd/148: file already closed"
Apr 28 11:11:56 ip-10-148-18-78 containerd: time="2023-04-28T11:11:56.802284464Z" level=error msg="copy shim log" error="read /proc/self/fd/116: file already closed"
Apr 28 11:11:56 ip-10-148-18-78 containerd: time="2023-04-28T11:11:56.837362104Z" level=error msg="copy shim log" error="read /proc/self/fd/131: file already closed"
Apr 28 11:11:56 ip-10-148-18-78 kubelet: I0428 11:11:56.847584    3995 generic.go:296] "Generic (PLEG): container finished" podID=284bce63-1b1a-4bda-b538-1996dce42e3c containerID="a6052f1b7df0ce819ade738c4fe7007e897ceb3c4414d6d162a7be9f65ad95fa" exitCode=2
Apr 28 11:11:56 ip-10-148-18-78 kubelet: I0428 11:11:56.847607    3995 generic.go:296] "Generic (PLEG): container finished" podID=284bce63-1b1a-4bda-b538-1996dce42e3c containerID="51895e69f6b3bc16cf73f94c67083e8e8b8f273f856d3122a49b71436124bf8f" exitCode=0
Apr 28 11:11:56 ip-10-148-18-78 kubelet: I0428 11:11:56.847620    3995 generic.go:296] "Generic (PLEG): container finished" podID=284bce63-1b1a-4bda-b538-1996dce42e3c containerID="f467b2d712a83b32044dffa37b1c6fe3cba42b063f49a7ac194693f6b8338b49" exitCode=2
Apr 28 11:11:56 ip-10-148-18-78 kubelet: I0428 11:11:56.847652    3995 kubelet.go:2142] "SyncLoop (PLEG): event for pod" pod="platform/efs-csi-node-cmtlw" event=&{ID:284bce63-1b1a-4bda-b538-1996dce42e3c Type:ContainerDied Data:a6052f1b7df0ce819ade738c4fe7007e897ceb3c4414d6d162a7be9f65ad95fa}
Apr 28 11:11:56 ip-10-148-18-78 kubelet: I0428 11:11:56.847674    3995 kubelet.go:2142] "SyncLoop (PLEG): event for pod" pod="platform/efs-csi-node-cmtlw" event=&{ID:284bce63-1b1a-4bda-b538-1996dce42e3c Type:ContainerDied Data:51895e69f6b3bc16cf73f94c67083e8e8b8f273f856d3122a49b71436124bf8f}
Apr 28 11:11:56 ip-10-148-18-78 kubelet: I0428 11:11:56.847684    3995 kubelet.go:2142] "SyncLoop (PLEG): event for pod" pod="platform/efs-csi-node-cmtlw" event=&{ID:284bce63-1b1a-4bda-b538-1996dce42e3c Type:ContainerDied Data:f467b2d712a83b32044dffa37b1c6fe3cba42b063f49a7ac194693f6b8338b49}
Apr 28 11:11:56 ip-10-148-18-78 containerd: time="2023-04-28T11:11:56.865787112Z" level=error msg="copy shim log" error="read /proc/self/fd/54: file already closed"
Apr 28 11:11:56 ip-10-148-18-78 kubelet: I0428 11:11:56.959098    3995 csi_plugin.go:178] kubernetes.io/csi: registrationHandler.DeRegisterPlugin request for plugin efs.csi.aws.com
Apr 28 11:11:56 ip-10-148-18-78 kubelet: I0428 11:11:56.959119    3995 reconciler.go:143] "OperationExecutor.UnregisterPlugin started" plugin={SocketPath:/var/lib/kubelet/plugins_registry/efs.csi.aws.com-reg.sock Timestamp:2023-04-28 10:09:44.709700236 +0000 UTC m=+5544.681246461 Handler:0x77933e0 Name:efs.csi.aws.com}
Apr 28 11:11:56 ip-10-148-18-78 kubelet: I0428 11:11:56.980209    3995 kubelet.go:2120] "SyncLoop DELETE" source="api" pods=[platform/efs-csi-node-cmtlw]
Apr 28 11:11:57 ip-10-148-18-78 containerd: 2023-04-28 11:11:56.984 [INFO][5944] k8s.go 564: Cleaning up netns ContainerID="c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1"
Apr 28 11:11:57 ip-10-148-18-78 containerd: 2023-04-28 11:11:56.985 [INFO][5944] dataplane_linux.go 495: Calico CNI deleting device in netns /var/run/netns/cni-4a371a77-20c9-bcde-1be7-ed49cdf90c29 ContainerID="c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1"
Apr 28 11:11:57 ip-10-148-18-78 containerd: 2023-04-28 11:11:57.019 [INFO][5944] dataplane_linux.go 512: Calico CNI deleted device in netns /var/run/netns/cni-4a371a77-20c9-bcde-1be7-ed49cdf90c29 ContainerID="c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1"
Apr 28 11:11:57 ip-10-148-18-78 containerd: 2023-04-28 11:11:57.019 [INFO][5944] k8s.go 571: Releasing IP address(es) ContainerID="c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1"
Apr 28 11:11:57 ip-10-148-18-78 containerd: 2023-04-28 11:11:57.019 [INFO][5944] utils.go 196: Calico CNI releasing IP address ContainerID="c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1"
Apr 28 11:11:57 ip-10-148-18-78 containerd: 2023-04-28 11:11:57.044 [INFO][5970] ipam_plugin.go 414: Releasing address using handleID ContainerID="c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1" HandleID="k8s-pod-network.c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1" Workload="ip--10--148--18--78.eu--west--1.compute.internal-k8s-efs--csi--node--cmtlw-eth0"
Apr 28 11:11:57 ip-10-148-18-78 containerd: time="2023-04-28T11:11:57Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:355"
Apr 28 11:11:57 ip-10-148-18-78 containerd: time="2023-04-28T11:11:57Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:370"
Apr 28 11:11:57 ip-10-148-18-78 containerd: 2023-04-28 11:11:57.122 [INFO][5970] ipam_plugin.go 433: Released address using handleID ContainerID="c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1" HandleID="k8s-pod-network.c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1" Workload="ip--10--148--18--78.eu--west--1.compute.internal-k8s-efs--csi--node--cmtlw-eth0"
Apr 28 11:11:57 ip-10-148-18-78 containerd: 2023-04-28 11:11:57.123 [INFO][5970] ipam_plugin.go 442: Releasing address using workloadID ContainerID="c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1" HandleID="k8s-pod-network.c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1" Workload="ip--10--148--18--78.eu--west--1.compute.internal-k8s-efs--csi--node--cmtlw-eth0"
Apr 28 11:11:57 ip-10-148-18-78 containerd: time="2023-04-28T11:11:57Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:376"
Apr 28 11:11:57 ip-10-148-18-78 containerd: 2023-04-28 11:11:57.129 [INFO][5944] k8s.go 577: Teardown processing complete. ContainerID="c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1"
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.220537    3995 reconciler.go:194] "operationExecutor.UnmountVolume started for volume \"efs-utils-config\" (UniqueName: \"kubernetes.io/host-path/284bce63-1b1a-4bda-b538-1996dce42e3c-efs-utils-config\") pod \"284bce63-1b1a-4bda-b538-1996dce42e3c\" (UID: \"284bce63-1b1a-4bda-b538-1996dce42e3c\") "
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.220579    3995 reconciler.go:194] "operationExecutor.UnmountVolume started for volume \"efs-state-dir\" (UniqueName: \"kubernetes.io/host-path/284bce63-1b1a-4bda-b538-1996dce42e3c-efs-state-dir\") pod \"284bce63-1b1a-4bda-b538-1996dce42e3c\" (UID: \"284bce63-1b1a-4bda-b538-1996dce42e3c\") "
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.220601    3995 reconciler.go:194] "operationExecutor.UnmountVolume started for volume \"plugin-dir\" (UniqueName: \"kubernetes.io/host-path/284bce63-1b1a-4bda-b538-1996dce42e3c-plugin-dir\") pod \"284bce63-1b1a-4bda-b538-1996dce42e3c\" (UID: \"284bce63-1b1a-4bda-b538-1996dce42e3c\") "
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.220632    3995 reconciler.go:194] "operationExecutor.UnmountVolume started for volume \"efs-utils-config-legacy\" (UniqueName: \"kubernetes.io/host-path/284bce63-1b1a-4bda-b538-1996dce42e3c-efs-utils-config-legacy\") pod \"284bce63-1b1a-4bda-b538-1996dce42e3c\" (UID: \"284bce63-1b1a-4bda-b538-1996dce42e3c\") "
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.220668    3995 reconciler.go:194] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5tkd\" (UniqueName: \"kubernetes.io/projected/284bce63-1b1a-4bda-b538-1996dce42e3c-kube-api-access-t5tkd\") pod \"284bce63-1b1a-4bda-b538-1996dce42e3c\" (UID: \"284bce63-1b1a-4bda-b538-1996dce42e3c\") "
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.220671    3995 operation_generator.go:868] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/284bce63-1b1a-4bda-b538-1996dce42e3c-efs-utils-config" (OuterVolumeSpecName: "efs-utils-config") pod "284bce63-1b1a-4bda-b538-1996dce42e3c" (UID: "284bce63-1b1a-4bda-b538-1996dce42e3c"). InnerVolumeSpecName "efs-utils-config". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.220669    3995 operation_generator.go:868] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/284bce63-1b1a-4bda-b538-1996dce42e3c-efs-state-dir" (OuterVolumeSpecName: "efs-state-dir") pod "284bce63-1b1a-4bda-b538-1996dce42e3c" (UID: "284bce63-1b1a-4bda-b538-1996dce42e3c"). InnerVolumeSpecName "efs-state-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.220668    3995 operation_generator.go:868] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/284bce63-1b1a-4bda-b538-1996dce42e3c-plugin-dir" (OuterVolumeSpecName: "plugin-dir") pod "284bce63-1b1a-4bda-b538-1996dce42e3c" (UID: "284bce63-1b1a-4bda-b538-1996dce42e3c"). InnerVolumeSpecName "plugin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.220705    3995 operation_generator.go:868] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/284bce63-1b1a-4bda-b538-1996dce42e3c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "284bce63-1b1a-4bda-b538-1996dce42e3c" (UID: "284bce63-1b1a-4bda-b538-1996dce42e3c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.220688    3995 reconciler.go:194] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/284bce63-1b1a-4bda-b538-1996dce42e3c-kubelet-dir\") pod \"284bce63-1b1a-4bda-b538-1996dce42e3c\" (UID: \"284bce63-1b1a-4bda-b538-1996dce42e3c\") "
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.220731    3995 operation_generator.go:868] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/284bce63-1b1a-4bda-b538-1996dce42e3c-efs-utils-config-legacy" (OuterVolumeSpecName: "efs-utils-config-legacy") pod "284bce63-1b1a-4bda-b538-1996dce42e3c" (UID: "284bce63-1b1a-4bda-b538-1996dce42e3c"). InnerVolumeSpecName "efs-utils-config-legacy". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.220764    3995 operation_generator.go:868] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/284bce63-1b1a-4bda-b538-1996dce42e3c-registration-dir" (OuterVolumeSpecName: "registration-dir") pod "284bce63-1b1a-4bda-b538-1996dce42e3c" (UID: "284bce63-1b1a-4bda-b538-1996dce42e3c"). InnerVolumeSpecName "registration-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.220772    3995 reconciler.go:194] "operationExecutor.UnmountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/284bce63-1b1a-4bda-b538-1996dce42e3c-registration-dir\") pod \"284bce63-1b1a-4bda-b538-1996dce42e3c\" (UID: \"284bce63-1b1a-4bda-b538-1996dce42e3c\") "
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.220845    3995 reconciler.go:312] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/284bce63-1b1a-4bda-b538-1996dce42e3c-kubelet-dir\") on node \"ip-10-148-18-78.eu-west-1.compute.internal\" DevicePath \"\""
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.220861    3995 reconciler.go:312] "Volume detached for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/284bce63-1b1a-4bda-b538-1996dce42e3c-registration-dir\") on node \"ip-10-148-18-78.eu-west-1.compute.internal\" DevicePath \"\""
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.220873    3995 reconciler.go:312] "Volume detached for volume \"efs-utils-config\" (UniqueName: \"kubernetes.io/host-path/284bce63-1b1a-4bda-b538-1996dce42e3c-efs-utils-config\") on node \"ip-10-148-18-78.eu-west-1.compute.internal\" DevicePath \"\""
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.220886    3995 reconciler.go:312] "Volume detached for volume \"efs-state-dir\" (UniqueName: \"kubernetes.io/host-path/284bce63-1b1a-4bda-b538-1996dce42e3c-efs-state-dir\") on node \"ip-10-148-18-78.eu-west-1.compute.internal\" DevicePath \"\""
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.220904    3995 reconciler.go:312] "Volume detached for volume \"plugin-dir\" (UniqueName: \"kubernetes.io/host-path/284bce63-1b1a-4bda-b538-1996dce42e3c-plugin-dir\") on node \"ip-10-148-18-78.eu-west-1.compute.internal\" DevicePath \"\""
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.220919    3995 reconciler.go:312] "Volume detached for volume \"efs-utils-config-legacy\" (UniqueName: \"kubernetes.io/host-path/284bce63-1b1a-4bda-b538-1996dce42e3c-efs-utils-config-legacy\") on node \"ip-10-148-18-78.eu-west-1.compute.internal\" DevicePath \"\""
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.222025    3995 operation_generator.go:868] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/284bce63-1b1a-4bda-b538-1996dce42e3c-kube-api-access-t5tkd" (OuterVolumeSpecName: "kube-api-access-t5tkd") pod "284bce63-1b1a-4bda-b538-1996dce42e3c" (UID: "284bce63-1b1a-4bda-b538-1996dce42e3c"). InnerVolumeSpecName "kube-api-access-t5tkd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.321505    3995 reconciler.go:312] "Volume detached for volume \"kube-api-access-t5tkd\" (UniqueName: \"kubernetes.io/projected/284bce63-1b1a-4bda-b538-1996dce42e3c-kube-api-access-t5tkd\") on node \"ip-10-148-18-78.eu-west-1.compute.internal\" DevicePath \"\""
Apr 28 11:11:57 ip-10-148-18-78 systemd: Removed slice libcontainer container kubepods-burstable-pod284bce63_1b1a_4bda_b538_1996dce42e3c.slice.
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.850810    3995 kubelet.go:2142] "SyncLoop (PLEG): event for pod" pod="platform/efs-csi-node-cmtlw" event=&{ID:284bce63-1b1a-4bda-b538-1996dce42e3c Type:ContainerDied Data:c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1}
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.850856    3995 scope.go:110] "RemoveContainer" containerID="f467b2d712a83b32044dffa37b1c6fe3cba42b063f49a7ac194693f6b8338b49"
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.856017    3995 scope.go:110] "RemoveContainer" containerID="51895e69f6b3bc16cf73f94c67083e8e8b8f273f856d3122a49b71436124bf8f"
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.861721    3995 scope.go:110] "RemoveContainer" containerID="a6052f1b7df0ce819ade738c4fe7007e897ceb3c4414d6d162a7be9f65ad95fa"
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.913150    3995 kubelet.go:2120] "SyncLoop DELETE" source="api" pods=[platform/efs-csi-node-cmtlw]
Apr 28 11:11:57 ip-10-148-18-78 kubelet: I0428 11:11:57.918147    3995 kubelet.go:2114] "SyncLoop REMOVE" source="api" pods=[platform/efs-csi-node-cmtlw]
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.017130    3995 kubelet.go:2104] "SyncLoop ADD" source="api" pods=[platform/efs-csi-node-zd4ws]
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.017182    3995 topology_manager.go:200] "Topology Admit Handler"
Apr 28 11:11:58 ip-10-148-18-78 kubelet: E0428 11:11:58.017274    3995 cpu_manager.go:390] "RemoveStaleState: removing container" podUID="284bce63-1b1a-4bda-b538-1996dce42e3c" containerName="csi-driver-registrar"
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.017290    3995 state_mem.go:107] "Deleted CPUSet assignment" podUID="284bce63-1b1a-4bda-b538-1996dce42e3c" containerName="csi-driver-registrar"
Apr 28 11:11:58 ip-10-148-18-78 kubelet: E0428 11:11:58.017314    3995 cpu_manager.go:390] "RemoveStaleState: removing container" podUID="284bce63-1b1a-4bda-b538-1996dce42e3c" containerName="liveness-probe"
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.017322    3995 state_mem.go:107] "Deleted CPUSet assignment" podUID="284bce63-1b1a-4bda-b538-1996dce42e3c" containerName="liveness-probe"
Apr 28 11:11:58 ip-10-148-18-78 kubelet: E0428 11:11:58.017331    3995 cpu_manager.go:390] "RemoveStaleState: removing container" podUID="284bce63-1b1a-4bda-b538-1996dce42e3c" containerName="efs-plugin"
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.017338    3995 state_mem.go:107] "Deleted CPUSet assignment" podUID="284bce63-1b1a-4bda-b538-1996dce42e3c" containerName="efs-plugin"
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.017376    3995 memory_manager.go:345] "RemoveStaleState removing state" podUID="284bce63-1b1a-4bda-b538-1996dce42e3c" containerName="liveness-probe"
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.017384    3995 memory_manager.go:345] "RemoveStaleState removing state" podUID="284bce63-1b1a-4bda-b538-1996dce42e3c" containerName="efs-plugin"
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.017391    3995 memory_manager.go:345] "RemoveStaleState removing state" podUID="284bce63-1b1a-4bda-b538-1996dce42e3c" containerName="csi-driver-registrar"
Apr 28 11:11:58 ip-10-148-18-78 systemd: Created slice libcontainer container kubepods-burstable-poddaa12c33_9575_44e2_b4ba_28479fa3f23f.slice.
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.127124    3995 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-dir\" (UniqueName: \"kubernetes.io/host-path/daa12c33-9575-44e2-b4ba-28479fa3f23f-plugin-dir\") pod \"efs-csi-node-zd4ws\" (UID: \"daa12c33-9575-44e2-b4ba-28479fa3f23f\") "
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.127173    3995 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"efs-utils-config\" (UniqueName: \"kubernetes.io/host-path/daa12c33-9575-44e2-b4ba-28479fa3f23f-efs-utils-config\") pod \"efs-csi-node-zd4ws\" (UID: \"daa12c33-9575-44e2-b4ba-28479fa3f23f\") "
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.127195    3995 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/daa12c33-9575-44e2-b4ba-28479fa3f23f-registration-dir\") pod \"efs-csi-node-zd4ws\" (UID: \"daa12c33-9575-44e2-b4ba-28479fa3f23f\") "
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.127304    3995 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/daa12c33-9575-44e2-b4ba-28479fa3f23f-kubelet-dir\") pod \"efs-csi-node-zd4ws\" (UID: \"daa12c33-9575-44e2-b4ba-28479fa3f23f\") "
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.127370    3995 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfx77\" (UniqueName: \"kubernetes.io/projected/daa12c33-9575-44e2-b4ba-28479fa3f23f-kube-api-access-tfx77\") pod \"efs-csi-node-zd4ws\" (UID: \"daa12c33-9575-44e2-b4ba-28479fa3f23f\") "
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.127409    3995 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"efs-utils-config-legacy\" (UniqueName: \"kubernetes.io/host-path/daa12c33-9575-44e2-b4ba-28479fa3f23f-efs-utils-config-legacy\") pod \"efs-csi-node-zd4ws\" (UID: \"daa12c33-9575-44e2-b4ba-28479fa3f23f\") "
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.127441    3995 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"efs-state-dir\" (UniqueName: \"kubernetes.io/host-path/daa12c33-9575-44e2-b4ba-28479fa3f23f-efs-state-dir\") pod \"efs-csi-node-zd4ws\" (UID: \"daa12c33-9575-44e2-b4ba-28479fa3f23f\") "
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.228203    3995 reconciler.go:266] "operationExecutor.MountVolume started for volume \"kube-api-access-tfx77\" (UniqueName: \"kubernetes.io/projected/daa12c33-9575-44e2-b4ba-28479fa3f23f-kube-api-access-tfx77\") pod \"efs-csi-node-zd4ws\" (UID: \"daa12c33-9575-44e2-b4ba-28479fa3f23f\") "
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.228257    3995 reconciler.go:266] "operationExecutor.MountVolume started for volume \"efs-utils-config-legacy\" (UniqueName: \"kubernetes.io/host-path/daa12c33-9575-44e2-b4ba-28479fa3f23f-efs-utils-config-legacy\") pod \"efs-csi-node-zd4ws\" (UID: \"daa12c33-9575-44e2-b4ba-28479fa3f23f\") "
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.228284    3995 reconciler.go:266] "operationExecutor.MountVolume started for volume \"efs-state-dir\" (UniqueName: \"kubernetes.io/host-path/daa12c33-9575-44e2-b4ba-28479fa3f23f-efs-state-dir\") pod \"efs-csi-node-zd4ws\" (UID: \"daa12c33-9575-44e2-b4ba-28479fa3f23f\") "
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.228357    3995 reconciler.go:266] "operationExecutor.MountVolume started for volume \"plugin-dir\" (UniqueName: \"kubernetes.io/host-path/daa12c33-9575-44e2-b4ba-28479fa3f23f-plugin-dir\") pod \"efs-csi-node-zd4ws\" (UID: \"daa12c33-9575-44e2-b4ba-28479fa3f23f\") "
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.228359    3995 operation_generator.go:714] MountVolume.SetUp succeeded for volume "efs-utils-config-legacy" (UniqueName: "kubernetes.io/host-path/daa12c33-9575-44e2-b4ba-28479fa3f23f-efs-utils-config-legacy") pod "efs-csi-node-zd4ws" (UID: "daa12c33-9575-44e2-b4ba-28479fa3f23f")
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.228385    3995 reconciler.go:266] "operationExecutor.MountVolume started for volume \"efs-utils-config\" (UniqueName: \"kubernetes.io/host-path/daa12c33-9575-44e2-b4ba-28479fa3f23f-efs-utils-config\") pod \"efs-csi-node-zd4ws\" (UID: \"daa12c33-9575-44e2-b4ba-28479fa3f23f\") "
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.228385    3995 operation_generator.go:714] MountVolume.SetUp succeeded for volume "efs-state-dir" (UniqueName: "kubernetes.io/host-path/daa12c33-9575-44e2-b4ba-28479fa3f23f-efs-state-dir") pod "efs-csi-node-zd4ws" (UID: "daa12c33-9575-44e2-b4ba-28479fa3f23f")
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.228419    3995 reconciler.go:266] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/daa12c33-9575-44e2-b4ba-28479fa3f23f-kubelet-dir\") pod \"efs-csi-node-zd4ws\" (UID: \"daa12c33-9575-44e2-b4ba-28479fa3f23f\") "
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.228423    3995 operation_generator.go:714] MountVolume.SetUp succeeded for volume "plugin-dir" (UniqueName: "kubernetes.io/host-path/daa12c33-9575-44e2-b4ba-28479fa3f23f-plugin-dir") pod "efs-csi-node-zd4ws" (UID: "daa12c33-9575-44e2-b4ba-28479fa3f23f")
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.228446    3995 operation_generator.go:714] MountVolume.SetUp succeeded for volume "efs-utils-config" (UniqueName: "kubernetes.io/host-path/daa12c33-9575-44e2-b4ba-28479fa3f23f-efs-utils-config") pod "efs-csi-node-zd4ws" (UID: "daa12c33-9575-44e2-b4ba-28479fa3f23f")
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.228449    3995 reconciler.go:266] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/daa12c33-9575-44e2-b4ba-28479fa3f23f-registration-dir\") pod \"efs-csi-node-zd4ws\" (UID: \"daa12c33-9575-44e2-b4ba-28479fa3f23f\") "
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.228455    3995 operation_generator.go:714] MountVolume.SetUp succeeded for volume "kubelet-dir" (UniqueName: "kubernetes.io/host-path/daa12c33-9575-44e2-b4ba-28479fa3f23f-kubelet-dir") pod "efs-csi-node-zd4ws" (UID: "daa12c33-9575-44e2-b4ba-28479fa3f23f")
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.228479    3995 operation_generator.go:714] MountVolume.SetUp succeeded for volume "registration-dir" (UniqueName: "kubernetes.io/host-path/daa12c33-9575-44e2-b4ba-28479fa3f23f-registration-dir") pod "efs-csi-node-zd4ws" (UID: "daa12c33-9575-44e2-b4ba-28479fa3f23f")
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.240962    3995 operation_generator.go:714] MountVolume.SetUp succeeded for volume "kube-api-access-tfx77" (UniqueName: "kubernetes.io/projected/daa12c33-9575-44e2-b4ba-28479fa3f23f-kube-api-access-tfx77") pod "efs-csi-node-zd4ws" (UID: "daa12c33-9575-44e2-b4ba-28479fa3f23f")
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.341420    3995 kuberuntime_manager.go:484] "No sandbox for pod can be found. Need to start a new one" pod="platform/efs-csi-node-zd4ws"
Apr 28 11:11:58 ip-10-148-18-78 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Apr 28 11:11:58 ip-10-148-18-78 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali90003f9bd47: link becomes ready
Apr 28 11:11:58 ip-10-148-18-78 containerd: 2023-04-28 11:11:58.393 [INFO][5997] plugin.go 261: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--10--148--18--78.eu--west--1.compute.internal-k8s-efs--csi--node--zd4ws-eth0 efs-csi-node- platform  daa12c33-9575-44e2-b4ba-28479fa3f23f 99867883 0 2023-04-28 11:11:57 +0000 UTC <nil> <nil> map[app:efs-csi-node app.kubernetes.io/instance:kustomize app.kubernetes.io/name:aws-efs-csi-driver controller-revision-hash:6fcbf866f8 owner:sre-infrastructure service.name:aws-efs-csi-driver pod-template-generation:42 projectcalico.org/namespace:platform projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:efs-csi-node-sa] map[] [] []  []} {k8s  ip-10-148-18-78.eu-west-1.compute.internal  efs-csi-node-zd4ws eth0 efs-csi-node-sa [] []   [kns.platform ksa.platform.efs-csi-node-sa] cali90003f9bd47  [{healthz TCP 9809}]}} ContainerID="354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d" Namespace="platform" Pod="efs-csi-node-zd4ws" WorkloadEndpoint="ip--10--148--18--78.eu--west--1.compute.internal-k8s-efs--csi--node--zd4ws-"
Apr 28 11:11:58 ip-10-148-18-78 containerd: 2023-04-28 11:11:58.393 [INFO][5997] k8s.go 72: Extracted identifiers for CmdAddK8s ContainerID="354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d" Namespace="platform" Pod="efs-csi-node-zd4ws" WorkloadEndpoint="ip--10--148--18--78.eu--west--1.compute.internal-k8s-efs--csi--node--zd4ws-eth0"
Apr 28 11:11:58 ip-10-148-18-78 containerd: 2023-04-28 11:11:58.429 [INFO][6009] ipam_plugin.go 226: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d" HandleID="k8s-pod-network.354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d" Workload="ip--10--148--18--78.eu--west--1.compute.internal-k8s-efs--csi--node--zd4ws-eth0"
Apr 28 11:11:58 ip-10-148-18-78 containerd: 2023-04-28 11:11:58.447 [INFO][6009] ipam_plugin.go 265: Auto assigning IP ContainerID="354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d" HandleID="k8s-pod-network.354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d" Workload="ip--10--148--18--78.eu--west--1.compute.internal-k8s-efs--csi--node--zd4ws-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00044da90), Attrs:map[string]string{"namespace":"platform", "node":"ip-10-148-18-78.eu-west-1.compute.internal", "pod":"efs-csi-node-zd4ws", "timestamp":"2023-04-28 11:11:58.42946188 +0000 UTC"}, Hostname:"ip-10-148-18-78.eu-west-1.compute.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil)}
Apr 28 11:11:58 ip-10-148-18-78 containerd: time="2023-04-28T11:11:58Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:355"
Apr 28 11:11:58 ip-10-148-18-78 containerd: time="2023-04-28T11:11:58Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:370"
Apr 28 11:11:58 ip-10-148-18-78 containerd: 2023-04-28 11:11:58.447 [INFO][6009] ipam.go 97: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-10-148-18-78.eu-west-1.compute.internal'
Apr 28 11:11:58 ip-10-148-18-78 containerd: 2023-04-28 11:11:58.447 [INFO][6009] ipam.go 596: Looking up existing affinities for host handle="k8s-pod-network.354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d" host="ip-10-148-18-78.eu-west-1.compute.internal"
Apr 28 11:11:58 ip-10-148-18-78 containerd: 2023-04-28 11:11:58.461 [INFO][6009] ipam.go 358: Looking up existing affinities for host host="ip-10-148-18-78.eu-west-1.compute.internal"
Apr 28 11:11:58 ip-10-148-18-78 containerd: 2023-04-28 11:11:58.484 [INFO][6009] ipam.go 441: Trying affinity for 100.117.213.192/26 host="ip-10-148-18-78.eu-west-1.compute.internal"
Apr 28 11:11:58 ip-10-148-18-78 containerd: 2023-04-28 11:11:58.488 [INFO][6009] ipam.go 145: Attempting to load block cidr=100.117.213.192/26 host="ip-10-148-18-78.eu-west-1.compute.internal"
Apr 28 11:11:58 ip-10-148-18-78 containerd: 2023-04-28 11:11:58.494 [INFO][6009] ipam.go 222: Affinity is confirmed and block has been loaded cidr=100.117.213.192/26 host="ip-10-148-18-78.eu-west-1.compute.internal"
Apr 28 11:11:58 ip-10-148-18-78 containerd: 2023-04-28 11:11:58.494 [INFO][6009] ipam.go 1087: Attempting to assign 1 addresses from block block=100.117.213.192/26 handle="k8s-pod-network.354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d" host="ip-10-148-18-78.eu-west-1.compute.internal"
Apr 28 11:11:58 ip-10-148-18-78 containerd: 2023-04-28 11:11:58.498 [INFO][6009] ipam.go 1564: Creating new handle: k8s-pod-network.354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d
Apr 28 11:11:58 ip-10-148-18-78 containerd: 2023-04-28 11:11:58.506 [INFO][6009] ipam.go 1110: Writing block in order to claim IPs block=100.117.213.192/26 handle="k8s-pod-network.354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d" host="ip-10-148-18-78.eu-west-1.compute.internal"
Apr 28 11:11:58 ip-10-148-18-78 containerd: 2023-04-28 11:11:58.517 [INFO][6009] ipam.go 1123: Successfully claimed IPs: [100.117.213.205/26] block=100.117.213.192/26 handle="k8s-pod-network.354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d" host="ip-10-148-18-78.eu-west-1.compute.internal"
Apr 28 11:11:58 ip-10-148-18-78 containerd: 2023-04-28 11:11:58.517 [INFO][6009] ipam.go 777: Auto-assigned 1 out of 1 IPv4s: [100.117.213.205/26] handle="k8s-pod-network.354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d" host="ip-10-148-18-78.eu-west-1.compute.internal"
Apr 28 11:11:58 ip-10-148-18-78 containerd: time="2023-04-28T11:11:58Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:376"
Apr 28 11:11:58 ip-10-148-18-78 containerd: 2023-04-28 11:11:58.517 [INFO][6009] ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[100.117.213.205/26] IPv6=[] ContainerID="354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d" HandleID="k8s-pod-network.354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d" Workload="ip--10--148--18--78.eu--west--1.compute.internal-k8s-efs--csi--node--zd4ws-eth0"
Apr 28 11:11:58 ip-10-148-18-78 containerd: 2023-04-28 11:11:58.519 [INFO][5997] k8s.go 375: Populated endpoint ContainerID="354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d" Namespace="platform" Pod="efs-csi-node-zd4ws" WorkloadEndpoint="ip--10--148--18--78.eu--west--1.compute.internal-k8s-efs--csi--node--zd4ws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--10--148--18--78.eu--west--1.compute.internal-k8s-efs--csi--node--zd4ws-eth0", GenerateName:"efs-csi-node-", Namespace:"platform", SelfLink:"", UID:"daa12c33-9575-44e2-b4ba-28479fa3f23f", ResourceVersion:"99867883", Generation:0, CreationTimestamp:time.Date(2023, time.April, 28, 11, 11, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"efs-csi-node", "app.kubernetes.io/instance":"kustomize", "app.kubernetes.io/name":"aws-efs-csi-driver", "controller-revision-hash":"6fcbf866f8", "owner":"sre-infrastructure", "service.name":"aws-efs-csi-driver", "pod-template-generation":"42", "projectcalico.org/namespace":"platform", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"efs-csi-node-sa"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-10-148-18-78.eu-west-1.compute.internal", ContainerID:"", Pod:"efs-csi-node-zd4ws", Endpoint:"eth0", ServiceAccountName:"efs-csi-node-sa", IPNetworks:[]string{"100.117.213.205/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.platform", "ksa.platform.efs-csi-node-sa"}, InterfaceName:"cali90003f9bd47", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"healthz", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x2651}}}}
Apr 28 11:11:58 ip-10-148-18-78 containerd: 2023-04-28 11:11:58.519 [INFO][5997] k8s.go 376: Calico CNI using IPs: [100.117.213.205/32] ContainerID="354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d" Namespace="platform" Pod="efs-csi-node-zd4ws" WorkloadEndpoint="ip--10--148--18--78.eu--west--1.compute.internal-k8s-efs--csi--node--zd4ws-eth0"
Apr 28 11:11:58 ip-10-148-18-78 containerd: 2023-04-28 11:11:58.519 [INFO][5997] dataplane_linux.go 66: Setting the host side veth name to cali90003f9bd47 ContainerID="354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d" Namespace="platform" Pod="efs-csi-node-zd4ws" WorkloadEndpoint="ip--10--148--18--78.eu--west--1.compute.internal-k8s-efs--csi--node--zd4ws-eth0"
Apr 28 11:11:58 ip-10-148-18-78 containerd: 2023-04-28 11:11:58.530 [INFO][5997] dataplane_linux.go 442: Disabling IPv4 forwarding ContainerID="354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d" Namespace="platform" Pod="efs-csi-node-zd4ws" WorkloadEndpoint="ip--10--148--18--78.eu--west--1.compute.internal-k8s-efs--csi--node--zd4ws-eth0"
Apr 28 11:11:58 ip-10-148-18-78 containerd: 2023-04-28 11:11:58.563 [INFO][5997] k8s.go 403: Added Mac, interface name, and active container ID to endpoint ContainerID="354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d" Namespace="platform" Pod="efs-csi-node-zd4ws" WorkloadEndpoint="ip--10--148--18--78.eu--west--1.compute.internal-k8s-efs--csi--node--zd4ws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--10--148--18--78.eu--west--1.compute.internal-k8s-efs--csi--node--zd4ws-eth0", GenerateName:"efs-csi-node-", Namespace:"platform", SelfLink:"", UID:"daa12c33-9575-44e2-b4ba-28479fa3f23f", ResourceVersion:"99867883", Generation:0, CreationTimestamp:time.Date(2023, time.April, 28, 11, 11, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"efs-csi-node", "app.kubernetes.io/instance":"kustomize", "app.kubernetes.io/name":"aws-efs-csi-driver", "controller-revision-hash":"6fcbf866f8", "owner":"sre-infrastructure", "service.name":"aws-efs-csi-driver", "pod-template-generation":"42", "projectcalico.org/namespace":"platform", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"efs-csi-node-sa"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-10-148-18-78.eu-west-1.compute.internal", ContainerID:"354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d", Pod:"efs-csi-node-zd4ws", Endpoint:"eth0", ServiceAccountName:"efs-csi-node-sa", IPNetworks:[]string{"100.117.213.205/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.platform", "ksa.platform.efs-csi-node-sa"}, InterfaceName:"cali90003f9bd47", MAC:"36:b8:88:21:82:e2", 
Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"healthz", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x2651}}}}
Apr 28 11:11:58 ip-10-148-18-78 containerd: 2023-04-28 11:11:58.578 [INFO][5997] k8s.go 477: Wrote updated endpoint to datastore ContainerID="354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d" Namespace="platform" Pod="efs-csi-node-zd4ws" WorkloadEndpoint="ip--10--148--18--78.eu--west--1.compute.internal-k8s-efs--csi--node--zd4ws-eth0"
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.582891    3995 kubelet.go:2111] "SyncLoop UPDATE" source="api" pods=[platform/efs-csi-node-zd4ws]
Apr 28 11:11:58 ip-10-148-18-78 containerd: time="2023-04-28T11:11:58.599682124Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d pid=6051
Apr 28 11:11:58 ip-10-148-18-78 systemd: Started libcontainer container 354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d.
Apr 28 11:11:58 ip-10-148-18-78 systemd: Couldn't stat device /dev/char/10:200
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.696848    3995 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 28 11:11:58 ip-10-148-18-78 systemd: Started libcontainer container 8ba54c690e213d4f86ee2b2d1d65aa0bf23aa7fbfc32032887f3418ebc03e07b.
Apr 28 11:11:58 ip-10-148-18-78 kubelet: I0428 11:11:58.853858    3995 kubelet.go:2142] "SyncLoop (PLEG): event for pod" pod="platform/efs-csi-node-zd4ws" event=&{ID:daa12c33-9575-44e2-b4ba-28479fa3f23f Type:ContainerStarted Data:354c36e1b1ab31d204f0811dd262b759fb3277c2ddc8a14c97d57f5fb9624b7d}
Apr 28 11:11:58 ip-10-148-18-78 systemd: Started libcontainer container d116454b06933736e9fe8719f1a984821a101ac0740a5dc25a85ea4ad5f1c7a3.
Apr 28 11:11:58 ip-10-148-18-78 systemd: Couldn't stat device /dev/char/10:200
Apr 28 11:11:59 ip-10-148-18-78 kubelet: I0428 11:11:59.007097    3995 plugin_watcher.go:203] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/efs.csi.aws.com-reg.sock"
Apr 28 11:11:59 ip-10-148-18-78 systemd: Started libcontainer container 30783a8318f7abfdf89fc6056207b9c32ab92dd92677193dc22504a6aa48e485.
Apr 28 11:11:59 ip-10-148-18-78 systemd: Couldn't stat device /dev/char/10:200
Apr 28 11:11:59 ip-10-148-18-78 kubelet: I0428 11:11:59.351150    3995 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=284bce63-1b1a-4bda-b538-1996dce42e3c path="/var/lib/kubelet/pods/284bce63-1b1a-4bda-b538-1996dce42e3c/volumes"
Apr 28 11:11:59 ip-10-148-18-78 kubelet: I0428 11:11:59.857876    3995 kubelet.go:2142] "SyncLoop (PLEG): event for pod" pod="platform/efs-csi-node-zd4ws" event=&{ID:daa12c33-9575-44e2-b4ba-28479fa3f23f Type:ContainerStarted Data:8ba54c690e213d4f86ee2b2d1d65aa0bf23aa7fbfc32032887f3418ebc03e07b}
Apr 28 11:11:59 ip-10-148-18-78 kubelet: I0428 11:11:59.857915    3995 kubelet.go:2142] "SyncLoop (PLEG): event for pod" pod="platform/efs-csi-node-zd4ws" event=&{ID:daa12c33-9575-44e2-b4ba-28479fa3f23f Type:ContainerStarted Data:30783a8318f7abfdf89fc6056207b9c32ab92dd92677193dc22504a6aa48e485}
Apr 28 11:11:59 ip-10-148-18-78 kubelet: I0428 11:11:59.857926    3995 kubelet.go:2142] "SyncLoop (PLEG): event for pod" pod="platform/efs-csi-node-zd4ws" event=&{ID:daa12c33-9575-44e2-b4ba-28479fa3f23f Type:ContainerStarted Data:d116454b06933736e9fe8719f1a984821a101ac0740a5dc25a85ea4ad5f1c7a3}
Apr 28 11:11:59 ip-10-148-18-78 kubelet: I0428 11:11:59.961186    3995 reconciler.go:160] "OperationExecutor.RegisterPlugin started" plugin={SocketPath:/var/lib/kubelet/plugins_registry/efs.csi.aws.com-reg.sock Timestamp:2023-04-28 11:11:59.00711691 +0000 UTC m=+9278.978663125 Handler:<nil> Name:}
Apr 28 11:11:59 ip-10-148-18-78 kubelet: I0428 11:11:59.962526    3995 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: efs.csi.aws.com endpoint: /var/lib/kubelet/plugins/efs.csi.aws.com/csi.sock versions: 1.0.0
Apr 28 11:11:59 ip-10-148-18-78 kubelet: I0428 11:11:59.962560    3995 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: efs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/efs.csi.aws.com/csi.sock
Apr 28 11:12:00 ip-10-148-18-78 kubelet: E0428 11:12:00.006624    3995 nodeinfomanager.go:566] Invalid attach limit value 0 cannot be added to CSINode object for "efs.csi.aws.com"
Apr 28 11:12:07 ip-10-148-18-78 kubelet: I0428 11:12:07.850581    3995 logs.go:319] "Finished parsing log file" path="/var/log/pods/default_test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4_798b93e9-9aad-467e-a287-b984fe158e49/test-nginx-smutje-1fa1a4c3/0.log"
Apr 28 11:12:07 ip-10-148-18-78 kubelet: I0428 11:12:07.874314    3995 logs.go:319] "Finished parsing log file" path="/var/log/pods/default_test-nginx-smutje-1fa1a4c3-6d6fc845dd-b9cb4_798b93e9-9aad-467e-a287-b984fe158e49/default-init-container/0.log"
Apr 28 11:12:25 ip-10-148-18-78 kubelet: I0428 11:12:25.363290    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:12:26 ip-10-148-18-78 containerd: 2023-04-28 11:12:25.955 [WARNING][6480] k8s.go 522: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1" WorkloadEndpoint="ip--10--148--18--78.eu--west--1.compute.internal-k8s-efs--csi--node--cmtlw-eth0"
Apr 28 11:12:26 ip-10-148-18-78 containerd: 2023-04-28 11:12:25.955 [INFO][6480] k8s.go 564: Cleaning up netns ContainerID="c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1"
Apr 28 11:12:26 ip-10-148-18-78 containerd: 2023-04-28 11:12:25.955 [INFO][6480] k8s.go 571: Releasing IP address(es) ContainerID="c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1"
Apr 28 11:12:26 ip-10-148-18-78 containerd: 2023-04-28 11:12:25.955 [INFO][6480] utils.go 196: Calico CNI releasing IP address ContainerID="c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1"
Apr 28 11:12:26 ip-10-148-18-78 containerd: 2023-04-28 11:12:25.979 [INFO][6492] ipam_plugin.go 414: Releasing address using handleID ContainerID="c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1" HandleID="k8s-pod-network.c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1" Workload="ip--10--148--18--78.eu--west--1.compute.internal-k8s-efs--csi--node--cmtlw-eth0"
Apr 28 11:12:26 ip-10-148-18-78 containerd: time="2023-04-28T11:12:25Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:355"
Apr 28 11:12:26 ip-10-148-18-78 containerd: time="2023-04-28T11:12:25Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:370"
Apr 28 11:12:26 ip-10-148-18-78 containerd: 2023-04-28 11:12:25.994 [WARNING][6492] ipam_plugin.go 431: Asked to release address but it doesn't exist. Ignoring ContainerID="c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1" HandleID="k8s-pod-network.c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1" Workload="ip--10--148--18--78.eu--west--1.compute.internal-k8s-efs--csi--node--cmtlw-eth0"
Apr 28 11:12:26 ip-10-148-18-78 containerd: 2023-04-28 11:12:25.994 [INFO][6492] ipam_plugin.go 442: Releasing address using workloadID ContainerID="c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1" HandleID="k8s-pod-network.c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1" Workload="ip--10--148--18--78.eu--west--1.compute.internal-k8s-efs--csi--node--cmtlw-eth0"
Apr 28 11:12:26 ip-10-148-18-78 containerd: time="2023-04-28T11:12:25Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:376"
Apr 28 11:12:26 ip-10-148-18-78 containerd: 2023-04-28 11:12:26.000 [INFO][6480] k8s.go 577: Teardown processing complete. ContainerID="c259066a8c0905e1440f58bd342fa80f820cf79ad10a63d7db39bc3ec9406cf1"
Apr 28 11:13:25 ip-10-148-18-78 kubelet: I0428 11:13:25.364271    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:13:43 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 123770ms.
Apr 28 11:14:25 ip-10-148-18-78 kubelet: I0428 11:14:25.365336    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:14:53 ip-10-148-18-78 amazon-ssm-agent: 2023-04-28 11:14:53 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Apr 28 11:14:53 ip-10-148-18-78 amazon-ssm-agent: 2023-04-28 11:14:53 INFO [CredentialRefresher] Next credential rotation will be in 31.14999692125 minutes
Apr 28 11:14:59 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, timed out
Apr 28 11:15:25 ip-10-148-18-78 kubelet: I0428 11:15:25.365451    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:15:47 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 116500ms.
Apr 28 11:16:25 ip-10-148-18-78 kubelet: I0428 11:16:25.366262    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:17:25 ip-10-148-18-78 kubelet: I0428 11:17:25.367050    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:17:44 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 129810ms.
Apr 28 11:17:59 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, timed out
Apr 28 11:18:02 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, still trying
Apr 28 11:18:04 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, timed out
Apr 28 11:18:25 ip-10-148-18-78 kubelet: I0428 11:18:25.368219    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:19:25 ip-10-148-18-78 kubelet: I0428 11:19:25.369323    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:19:54 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 126850ms.
Apr 28 11:20:01 ip-10-148-18-78 systemd: Created slice User Slice of root.
Apr 28 11:20:01 ip-10-148-18-78 systemd: Started Session 20 of user root.
Apr 28 11:20:01 ip-10-148-18-78 systemd: Removed slice User Slice of root.
Apr 28 11:20:25 ip-10-148-18-78 kubelet: I0428 11:20:25.369597    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:21:00 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, timed out
Apr 28 11:21:10 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, timed out
Apr 28 11:21:25 ip-10-148-18-78 kubelet: I0428 11:21:25.370422    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:22:01 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 121560ms.
Apr 28 11:22:25 ip-10-148-18-78 kubelet: I0428 11:22:25.371596    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:23:25 ip-10-148-18-78 kubelet: I0428 11:23:25.372271    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:24:00 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, timed out
Apr 28 11:24:02 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 119340ms.
Apr 28 11:24:05 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, timed out
Apr 28 11:24:15 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, timed out
Apr 28 11:24:25 ip-10-148-18-78 kubelet: I0428 11:24:25.373322    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:25:01 ip-10-148-18-78 systemd: Created slice User Slice of adeel.chaudhry.
Apr 28 11:25:01 ip-10-148-18-78 systemd-logind: New session 21 of user adeel.chaudhry.
Apr 28 11:25:01 ip-10-148-18-78 systemd: Started Session 21 of user adeel.chaudhry.
Apr 28 11:25:01 ip-10-148-18-78 oddjobd: Error org.freedesktop.DBus.Error.SELinuxSecurityContextUnknown: Could not determine security context for ':1.352'.
Apr 28 11:25:25 ip-10-148-18-78 kubelet: I0428 11:25:25.374338    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:26:02 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 119040ms.
Apr 28 11:26:25 ip-10-148-18-78 kubelet: I0428 11:26:25.374841    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:27:00 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, timed out
Apr 28 11:27:05 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, timed out
Apr 28 11:27:10 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, timed out
Apr 28 11:27:20 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, timed out
Apr 28 11:27:25 ip-10-148-18-78 kubelet: I0428 11:27:25.375342    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:28:01 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 109380ms.
Apr 28 11:28:25 ip-10-148-18-78 kubelet: I0428 11:28:25.376286    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:29:25 ip-10-148-18-78 kubelet: I0428 11:29:25.377449    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:29:50 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 125080ms.
Apr 28 11:30:00 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, timed out
Apr 28 11:30:01 ip-10-148-18-78 systemd: Created slice User Slice of root.
Apr 28 11:30:01 ip-10-148-18-78 systemd: Started Session 22 of user root.
Apr 28 11:30:01 ip-10-148-18-78 systemd: Removed slice User Slice of root.
Apr 28 11:30:05 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, timed out
Apr 28 11:30:10 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, timed out
Apr 28 11:30:16 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, timed out
Apr 28 11:30:25 ip-10-148-18-78 kubelet: I0428 11:30:25.378951    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:30:26 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, timed out
Apr 28 11:31:09 ip-10-148-18-78 kubelet: I0428 11:31:09.825055    3995 logs.go:319] "Finished parsing log file" path="/var/log/pods/platform_efs-csi-node-zd4ws_daa12c33-9575-44e2-b4ba-28479fa3f23f/efs-plugin/0.log"
Apr 28 11:31:25 ip-10-148-18-78 kubelet: I0428 11:31:25.380200    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:31:55 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 131000ms.
Apr 28 11:32:25 ip-10-148-18-78 kubelet: I0428 11:32:25.380960    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:33:00 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, timed out
Apr 28 11:33:11 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, timed out
Apr 28 11:33:16 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, timed out
Apr 28 11:33:21 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, timed out
Apr 28 11:33:25 ip-10-148-18-78 kubelet: I0428 11:33:25.381655    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:33:31 ip-10-148-18-78 kernel: nfs: server 127.0.0.1 not responding, timed out
Apr 28 11:34:07 ip-10-148-18-78 dhclient[2556]: XMT: Solicit on eth0, interval 128870ms.
Apr 28 11:34:25 ip-10-148-18-78 kubelet: I0428 11:34:25.382147    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
Apr 28 11:35:25 ip-10-148-18-78 kubelet: I0428 11:35:25.382850    3995 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-proxy-ip-10-148-18-78.eu-west-1.compute.internal" status=Running
adeelahmadch commented 1 year ago

@Ashley-wenyizha were you able to reproduce this issue? We are seeing this in all of our environments, and for now we are advising our developers not to use EFS for production workloads.

adammw commented 1 year ago

@adeelahmadch these logs:

2023-04-28 11:14:45 UTC - INFO - Starting TLS tunnel: "/usr/bin/stunnel5 /var/run/efs/stunnel-config.fs-0e53****.var.lib.kubelet.pods.798b93e9-9aad-467e-a287-b984fe158e49.volumes.kubernetes.io~csi.pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4.mount.20173"
2023-04-28 11:14:45 UTC - INFO - Started TLS tunnel, pid: 163
2023-04-28 11:14:45 UTC - WARNING - Child TLS tunnel process 22 has exited, returncode=-9
2023-04-28 11:19:46 UTC - INFO - Sending signal SIGKILL(9) to stunnel. PID: 163, group ID: 163
2023-04-28 11:19:46 UTC - WARNING - Connection timeout for /var/lib/kubelet/pods/798b93e9-9aad-467e-a287-b984fe158e49/volumes/kubernetes.io~csi/pvc-63ff5d53-2da5-4f63-b0e2-92996e0387a4/mount after 30 sec, SIGKILL has been sent to the potential unhealthy stunnel 163, restarting a new stunnel process.

indicate that the stunnel process is not starting the tunnel correctly, so we really need the stunnel debug logs to know for sure what's going on. You'll need to exec into the pod to modify the debug setting, as described in https://github.com/kubernetes-sigs/aws-efs-csi-driver/tree/master/troubleshooting#enable-debug-logging-for-efs-utils

adammw commented 1 year ago

We are also seeing the same behaviour, where restarts of the daemonset break the mount, but with different log output, so it might be a different issue.

My hunch is that the use of SO_REUSEADDR on the stunnel connection means the restarted process isn't able to open the socket, so all connection attempts are getting dropped. Strangely, though, we don't see the mount-watchdog respond to that at all.
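To make the hypothesis above concrete: on Linux, `SO_REUSEADDR` only lets a new socket bind over a port stuck in `TIME_WAIT`; it does not let a restarted process take a port away from a still-live listener. A minimal Python sketch (standing in for stunnel, which this is not) shows the failure mode where a lingering old process would block the new one from binding:

```python
import socket
import errno

# First listener simulates the old stunnel process still holding its local port.
old = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
old.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
old.bind(("127.0.0.1", 0))        # ephemeral port, like stunnel's local tunnel port
old.listen(1)
port = old.getsockname()[1]

# Second listener simulates the restarted stunnel trying to take the port over.
new = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
new.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
try:
    # Fails with EADDRINUSE: SO_REUSEADDR does not allow stealing a port
    # from a socket that is still actively listening.
    new.bind(("127.0.0.1", port))
    rebound = True
except OSError as e:
    rebound = (e.errno != errno.EADDRINUSE)

print("rebound:", rebound)
old.close()
new.close()
```

If the old stunnel truly dies, the port is released (or at worst sits briefly in `TIME_WAIT`, which `SO_REUSEADDR` does handle), so this would only explain the symptom if a stale stunnel or its child survived the driver restart.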

brianroco commented 1 year ago

We also encountered this issue. Could we at least make the pod log /var/log/amazon/efs/mount-watchdog.log available on the host? We did our setup via the helm chart.
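One way to get the log onto the host, independent of whether the chart exposes an option for it, is to patch the node DaemonSet with a `hostPath` volume mounted over the directory efs-utils writes into, so the file lands on the node's filesystem and survives pod restarts. A hypothetical sketch (container and path names are illustrative, not chart values):

```yaml
# Illustrative strategic-merge patch for the efs-csi-node DaemonSet.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: efs-csi-node
spec:
  template:
    spec:
      containers:
        - name: efs-plugin
          volumeMounts:
            - name: efs-utils-logs
              mountPath: /var/log/amazon/efs   # where efs-utils writes mount-watchdog.log
      volumes:
        - name: efs-utils-logs
          hostPath:
            path: /var/log/amazon/efs          # same path exposed on the host
            type: DirectoryOrCreate
```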

yyyoungha commented 1 year ago

I had the same issue on driver version v1.6.0 with an EKS 1.27 cluster. I was able to resolve it by updating to the self-managed driver v1.7.0 or installing the EKS managed add-on v1.5.8. However, I still couldn't find the root cause.

k8s-triage-robot commented 9 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 8 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 7 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 7 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/aws-efs-csi-driver/issues/973#issuecomment-2026398185):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.