projectcalico / calico

Cloud native networking and network security
https://docs.tigera.io/calico/latest/about/
Apache License 2.0

calico on k3s: the calico-node is running but not ready with no obvious errors #4308

Closed: hickersonj closed this issue 3 years ago

hickersonj commented 3 years ago

Expected Behavior

This is an initial install following the k3s quickstart: https://docs.projectcalico.org/getting-started/kubernetes/k3s/quickstart. The calico-node pod should transition from READY 0/1 to 1/1.
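
For context, here is a minimal sketch of the quickstart steps being followed, paraphrased from the linked guide (the flags and manifest URLs are as I recall them for v3.17 and may have changed; the cluster CIDR was customized to 20.28.0.0/16, matching the pod environment below):

# Install k3s without flannel so Calico provides pod networking
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--flannel-backend=none --cluster-cidr=20.28.0.0/16 --disable-network-policy" sh -
# Install the Tigera operator, then the default Installation custom resource
kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml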

Current Behavior

calico-node initializes but never becomes ready, with no obvious error messages.

calico-node description:

Name:                 calico-node-vnxhs
Namespace:            calico-system
Priority:             1000000000
Priority Class Name:  calico-priority
Node:                 nc1/172.17.0.32
Start Time:           Tue, 12 Jan 2021 09:12:34 +0000
Labels:               controller-revision-hash=58d5bb989f
                      k8s-app=calico-node
                      pod-template-generation=1
Annotations:          hash.operator.tigera.io/cni-config: 8cdc037baa5a288802a5754d2ab6c74d9b8b3ada
                      hash.operator.tigera.io/node-cert: e29af7cd10f55bae4338b7ea9f6ae8811e0fde3a
                      hash.operator.tigera.io/typha-ca: b30f9f864fbcefb76603d7651b55c7d7cd8bfd35
Status:               Running
IP:                   172.17.0.32
IPs:
  IP:           172.17.0.32
Controlled By:  DaemonSet/calico-node
Init Containers:
  flexvol-driver:
    Container ID:   containerd://ff0f929789be52479cd3b9fbbe9799477c3a018f5753edb15365cef0f2cebfe0
    Image:          docker.io/calico/pod2daemon-flexvol:v3.17.1
    Image ID:       docker.io/calico/pod2daemon-flexvol@sha256:48f277d41c35dae051d7dd6f0ec8f64ac7ee6650e27102a41b0203a0c2ce6c6b
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 12 Jan 2021 09:12:38 +0000
      Finished:     Tue, 12 Jan 2021 09:12:38 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /host/driver from flexvol-driver-host (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-7s9rf (ro)
  install-cni:
    Container ID:  containerd://aaacbbbde7383bb0f3f996902899508f6ccba199aa5b5e49655068a949a9d725
    Image:         docker.io/calico/cni:v3.17.1
    Image ID:      docker.io/calico/cni@sha256:3dc2506632843491864ce73a6e73d5bba7d0dc25ec0df00c1baa91d17549b068
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/cni/bin/install
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 12 Jan 2021 09:12:43 +0000
      Finished:     Tue, 12 Jan 2021 09:12:45 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      CNI_CONF_NAME:            10-calico.conflist
      SLEEP:                    false
      CNI_NET_DIR:              /etc/cni/net.d
      CNI_NETWORK_CONFIG:       <set to the key 'config' of config map 'cni-config'>  Optional: false
      KUBERNETES_SERVICE_HOST:  10.43.0.1
      KUBERNETES_SERVICE_PORT:  443
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-7s9rf (ro)
Containers:
  calico-node:
    Container ID:   containerd://99ba9b3756de5e0f022ce6f0a7e3d98ce7b46a71bd12eb2be1dc7ddd31a28ef6
    Image:          docker.io/calico/node:v3.17.1
    Image ID:       docker.io/calico/node@sha256:25e0b0495c0df3a7a06b6f9e92203c53e5b56c143ac1c885885ee84bf86285ff
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 12 Jan 2021 09:12:54 +0000
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://localhost:9099/liveness delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      exec [/bin/calico-node -bird-ready -felix-ready] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      DATASTORE_TYPE:                     kubernetes
      WAIT_FOR_DATASTORE:                 true
      CLUSTER_TYPE:                       k8s,operator,bgp
      CALICO_DISABLE_FILE_LOGGING:        true
      FELIX_DEFAULTENDPOINTTOHOSTACTION:  ACCEPT
      FELIX_HEALTHENABLED:                true
      NODENAME:                            (v1:spec.nodeName)
      NAMESPACE:                          calico-system (v1:metadata.namespace)
      FELIX_TYPHAK8SNAMESPACE:            calico-system
      FELIX_TYPHAK8SSERVICENAME:          calico-typha
      FELIX_TYPHACAFILE:                  /typha-ca/caBundle
      FELIX_TYPHACERTFILE:                /felix-certs/cert.crt
      FELIX_TYPHAKEYFILE:                 /felix-certs/key.key
      FELIX_TYPHACN:                      <set to the key 'common-name' in secret 'typha-certs'>  Optional: true
      FELIX_TYPHAURISAN:                  <set to the key 'uri-san' in secret 'typha-certs'>      Optional: true
      CALICO_IPV4POOL_CIDR:               20.28.0.0/16
      CALICO_IPV4POOL_VXLAN:              CrossSubnet
      CALICO_IPV4POOL_BLOCK_SIZE:         26
      CALICO_IPV4POOL_NODE_SELECTOR:      all()
      CALICO_NETWORKING_BACKEND:          bird
      IP:                                 autodetect
      IP_AUTODETECTION_METHOD:            first-found
      IP6:                                none
      FELIX_IPV6SUPPORT:                  false
      FELIX_IPTABLESBACKEND:              auto
      KUBERNETES_SERVICE_HOST:            10.43.0.1
      KUBERNETES_SERVICE_PORT:            443
    Mounts:
      /felix-certs from felix-certs (ro)
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /typha-ca from typha-ca (ro)
      /var/lib/calico from var-lib-calico (rw)
      /var/log/calico/cni from cni-log-dir (ro)
      /var/run/calico from var-run-calico (rw)
      /var/run/nodeagent from policysync (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-7s9rf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:
  var-run-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/calico
    HostPathType:
  var-lib-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/calico
    HostPathType:
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  policysync:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/nodeagent
    HostPathType:  DirectoryOrCreate
  typha-ca:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      typha-ca
    Optional:  false
  felix-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  node-certs
    Optional:    false
  cni-bin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:
  cni-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:
  cni-log-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log/calico/cni
    HostPathType:
  flexvol-driver-host:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
    HostPathType:  DirectoryOrCreate
  calico-node-token-7s9rf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  calico-node-token-7s9rf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     :NoSchedule
                 :NoExecute
                 CriticalAddonsOnly
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/network-unavailable:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  <unknown>             default-scheduler  Successfully assigned calico-system/calico-node-vnxhs to nc1
  Normal   Pulling    14m                   kubelet, nc1       Pulling image "docker.io/calico/pod2daemon-flexvol:v3.17.1"
  Normal   Pulled     14m                   kubelet, nc1       Successfully pulled image "docker.io/calico/pod2daemon-flexvol:v3.17.1"
  Normal   Created    14m                   kubelet, nc1       Created container flexvol-driver
  Normal   Started    14m                   kubelet, nc1       Started container flexvol-driver
  Normal   Pulling    14m                   kubelet, nc1       Pulling image "docker.io/calico/cni:v3.17.1"
  Normal   Pulled     14m                   kubelet, nc1       Successfully pulled image "docker.io/calico/cni:v3.17.1"
  Normal   Created    14m                   kubelet, nc1       Created container install-cni
  Normal   Started    14m                   kubelet, nc1       Started container install-cni
  Normal   Pulling    14m                   kubelet, nc1       Pulling image "docker.io/calico/node:v3.17.1"
  Normal   Pulled     14m                   kubelet, nc1       Successfully pulled image "docker.io/calico/node:v3.17.1"
  Normal   Started    14m                   kubelet, nc1       Started container calico-node
  Normal   Created    14m                   kubelet, nc1       Created container calico-node
  Warning  Unhealthy  13m                   kubelet, nc1       Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "229add0d8ae2ddb83a80a94bf3cfb320f37d7dee97848c38f7aad5d64383de61": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"/bin/calico-node\": stat /bin/calico-node: no such file or directory": unknown
  Warning  Unhealthy  13m                   kubelet, nc1       Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "a397bfeb73a4638a055a6c6c8f12438466e51f0ceb0984cbe1d818fefe626301": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"/bin/calico-node\": stat /bin/calico-node: no such file or directory": unknown
  Warning  Unhealthy  13m                   kubelet, nc1       Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "4fcd3f771c0f2979c870716476aec6d5416b3afe57529792c390c792bd4ea57f": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"/bin/calico-node\": stat /bin/calico-node: no such file or directory": unknown
  Warning  Unhealthy  13m                   kubelet, nc1       Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "2c6a19447f1148331471444a8ebe8a61dc0686434dceca190e5b6bb32e5be13a": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"/bin/calico-node\": stat /bin/calico-node: no such file or directory": unknown
  Warning  Unhealthy  13m                   kubelet, nc1       Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "dff5e5407c6e3ad217dcbacb2befe155458e58a3162a83476eb1012e5ee0377d": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"/bin/calico-node\": stat /bin/calico-node: no such file or directory": unknown
  Warning  Unhealthy  13m                   kubelet, nc1       Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "52c26e79c2777f97e4c4b610976f4626a0776793244fe1de009bc9a00bee5982": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"/bin/calico-node\": stat /bin/calico-node: no such file or directory": unknown
  Warning  Unhealthy  12m                   kubelet, nc1       Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "862a7c43f6b27b24d4e2197bf058e40050afc87ff3e96798a9d7d844d19aca67": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"/bin/calico-node\": stat /bin/calico-node: no such file or directory": unknown
  Warning  Unhealthy  12m                   kubelet, nc1       Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "884b3e77753b3e0b899f64dcdaadd2d7323ea1348490b334f05841ddb66823df": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"/bin/calico-node\": stat /bin/calico-node: no such file or directory": unknown
  Warning  Unhealthy  12m                   kubelet, nc1       Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "7e2ab935618cebc97106f83fde8fc1119925df6f08118e42f40cbf07d35b0c66": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"/bin/calico-node\": stat /bin/calico-node: no such file or directory": unknown
  Warning  Unhealthy  4m21s (x49 over 12m)  kubelet, nc1       (combined from similar events): Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "1d02350cbf55a42b169ab5f25085c8f8e04379ec9eb8237f4d5581a6998a559a": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"/bin/calico-node\": stat /bin/calico-node: no such file or directory": unknown
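
The readiness probe events above show the kubelet failing even to start the probe command inside the container (stat /bin/calico-node fails). A quick manual check of the same command is possible with generic kubectl invocations (pod name taken from the describe output above):

kubectl -n calico-system exec calico-node-vnxhs -- ls -l /bin/calico-node
kubectl -n calico-system exec calico-node-vnxhs -- /bin/calico-node -bird-ready -felix-ready

If kubectl exec fails the same way, that would point at the exec path in the container runtime bundled with k3s (containerd/runc) rather than at the calico/node image itself.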

calico-node log:

2021-01-12 09:12:54.956 [INFO][9] startup/startup.go 379: Early log level set to info
2021-01-12 09:12:54.956 [INFO][9] startup/startup.go 395: Using NODENAME environment for node name
2021-01-12 09:12:54.956 [INFO][9] startup/startup.go 407: Determined node name: nc1
2021-01-12 09:12:54.958 [INFO][9] startup/startup.go 439: Checking datastore connection
2021-01-12 09:12:54.977 [INFO][9] startup/startup.go 463: Datastore connection verified
2021-01-12 09:12:54.977 [INFO][9] startup/startup.go 112: Datastore is ready
2021-01-12 09:12:54.983 [INFO][9] startup/customresource.go 101: Error getting resource Key=GlobalFelixConfig(name=CalicoVersion) Name="calicoversion" Resource="GlobalFelixConfigs" error=the server could not find the requested resource (get GlobalFelixConfigs.crd.projectcalico.org calicoversion)
2021-01-12 09:12:54.992 [INFO][9] startup/startup.go 505: Initialize BGP data
2021-01-12 09:12:55.003 [INFO][9] startup/startup.go 759: Using autodetected IPv4 address on interface eth1.403: 172.27.40.11/24
2021-01-12 09:12:55.004 [INFO][9] startup/startup.go 576: Node IPv4 changed, will check for conflicts
2021-01-12 09:12:55.008 [INFO][9] startup/startup.go 836: No AS number configured on node resource, using global value
2021-01-12 09:12:55.008 [INFO][9] startup/startup.go 184: Setting NetworkUnavailable to False
2021-01-12 09:12:55.029 [INFO][9] startup/startup.go 697: CALICO_IPV4POOL_NAT_OUTGOING is true (defaulted) through environment variable
2021-01-12 09:12:55.029 [INFO][9] startup/startup.go 1037: Ensure default IPv4 pool is created. IPIP mode: Never, VXLAN mode: CrossSubnet
2021-01-12 09:12:55.051 [INFO][9] startup/startup.go 1047: Created default IPv4 pool (20.28.0.0/16) with NAT outgoing true. IPIP mode: Never, VXLAN mode: CrossSubnet
2021-01-12 09:12:55.051 [INFO][9] startup/startup.go 691: FELIX_IPV6SUPPORT is false through environment variable
2021-01-12 09:12:55.067 [INFO][9] startup/startup.go 217: Using node name: nc1
2021-01-12 09:12:55.152 [INFO][21] tunnel-ip-allocator/ipam.go 1325: Releasing all IPs with handle 'wireguard-tunnel-addr-nc1'
2021-01-12 09:12:55.165 [INFO][21] tunnel-ip-allocator/ipam.go 1325: Releasing all IPs with handle 'ipip-tunnel-addr-nc1'
2021-01-12 09:12:55.179 [INFO][21] tunnel-ip-allocator/allocateip.go 267: Assign a new tunnel address type="vxlanTunnelAddress"
2021-01-12 09:12:55.179 [INFO][21] tunnel-ip-allocator/allocateip.go 343: Release any old tunnel addresses IP="" type="vxlanTunnelAddress"
2021-01-12 09:12:55.179 [INFO][21] tunnel-ip-allocator/ipam.go 1325: Releasing all IPs with handle 'vxlan-tunnel-addr-nc1'
2021-01-12 09:12:55.182 [INFO][21] tunnel-ip-allocator/allocateip.go 354: Assign new tunnel address IP="" type="vxlanTunnelAddress"
2021-01-12 09:12:55.182 [INFO][21] tunnel-ip-allocator/ipam.go 92: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'nc1'
2021-01-12 09:12:55.182 [INFO][21] tunnel-ip-allocator/ipam.go 548: Looking up existing affinities for host handle="vxlan-tunnel-addr-nc1" host="nc1"
2021-01-12 09:12:55.187 [INFO][21] tunnel-ip-allocator/ipam.go 346: Looking up existing affinities for host host="nc1"
2021-01-12 09:12:55.193 [INFO][21] tunnel-ip-allocator/ipam.go 460: Ran out of existing affine blocks for host host="nc1"
2021-01-12 09:12:55.195 [INFO][21] tunnel-ip-allocator/ipam.go 475: No more affine blocks, but need to claim more block -- allocate another block host="nc1"
2021-01-12 09:12:55.195 [INFO][21] tunnel-ip-allocator/ipam.go 478: Looking for an unclaimed block host="nc1"
2021-01-12 09:12:55.199 [INFO][21] tunnel-ip-allocator/ipam_block_reader_writer.go 124: Found free block: 20.28.116.64/26
2021-01-12 09:12:55.199 [INFO][21] tunnel-ip-allocator/ipam.go 490: Found unclaimed block host="nc1" subnet=20.28.116.64/26
2021-01-12 09:12:55.199 [INFO][21] tunnel-ip-allocator/ipam_block_reader_writer.go 137: Trying to create affinity in pending state host="nc1" subnet=20.28.116.64/26
2021-01-12 09:12:55.205 [INFO][21] tunnel-ip-allocator/ipam_block_reader_writer.go 167: Successfully created pending affinity for block host="nc1" subnet=20.28.116.64/26
2021-01-12 09:12:55.205 [INFO][21] tunnel-ip-allocator/ipam.go 140: Attempting to load block cidr=20.28.116.64/26 host="nc1"
2021-01-12 09:12:55.208 [INFO][21] tunnel-ip-allocator/ipam.go 145: The referenced block doesn't exist, trying to create it cidr=20.28.116.64/26 host="nc1"
2021-01-12 09:12:55.213 [INFO][21] tunnel-ip-allocator/ipam.go 152: Wrote affinity as pending cidr=20.28.116.64/26 host="nc1"
2021-01-12 09:12:55.215 [INFO][21] tunnel-ip-allocator/ipam.go 161: Attempting to claim the block cidr=20.28.116.64/26 host="nc1"
2021-01-12 09:12:55.215 [INFO][21] tunnel-ip-allocator/ipam_block_reader_writer.go 189: Attempting to create a new block host="nc1" subnet=20.28.116.64/26
2021-01-12 09:12:55.221 [INFO][21] tunnel-ip-allocator/ipam_block_reader_writer.go 230: Successfully created block
2021-01-12 09:12:55.221 [INFO][21] tunnel-ip-allocator/ipam_block_reader_writer.go 241: Confirming affinity host="nc1" subnet=20.28.116.64/26
2021-01-12 09:12:55.225 [INFO][21] tunnel-ip-allocator/ipam_block_reader_writer.go 256: Successfully confirmed affinity host="nc1" subnet=20.28.116.64/26
2021-01-12 09:12:55.225 [INFO][21] tunnel-ip-allocator/ipam.go 524: Block '20.28.116.64/26' has 64 free ips which is more than 1 ips required. host="nc1" subnet=20.28.116.64/26
2021-01-12 09:12:55.225 [INFO][21] tunnel-ip-allocator/ipam.go 947: Attempting to assign 1 addresses from block block=20.28.116.64/26 handle="vxlan-tunnel-addr-nc1" host="nc1"
2021-01-12 09:12:55.228 [INFO][21] tunnel-ip-allocator/ipam.go 1424: Creating new handle: vxlan-tunnel-addr-nc1
2021-01-12 09:12:55.232 [INFO][21] tunnel-ip-allocator/ipam.go 970: Writing block in order to claim IPs block=20.28.116.64/26 handle="vxlan-tunnel-addr-nc1" host="nc1"
2021-01-12 09:12:55.239 [INFO][21] tunnel-ip-allocator/ipam.go 983: Successfully claimed IPs: [20.28.116.64/26] block=20.28.116.64/26 handle="vxlan-tunnel-addr-nc1" host="nc1"
2021-01-12 09:12:55.239 [INFO][21] tunnel-ip-allocator/ipam.go 706: Auto-assigned 1 out of 1 IPv4s: [20.28.116.64/26] handle="vxlan-tunnel-addr-nc1" host="nc1"
2021-01-12 09:12:55.252 [INFO][21] tunnel-ip-allocator/allocateip.go 441: Assigned tunnel address to node IP="20.28.116.64" type="vxlanTunnelAddress"
Calico node started successfully
bird: Unable to open configuration file /etc/calico/confd/config/bird6.cfg: No such file or directory
bird: Unable to open configuration file /etc/calico/confd/config/bird.cfg: No such file or directory
2021-01-12 09:12:56.345 [INFO][56] tunnel-ip-allocator/config.go 60: Found FELIX_TYPHAK8SSERVICENAME=calico-typha
2021-01-12 09:12:56.345 [INFO][56] tunnel-ip-allocator/config.go 60: Found FELIX_TYPHAK8SNAMESPACE=calico-system
2021-01-12 09:12:56.345 [INFO][56] tunnel-ip-allocator/config.go 60: Found FELIX_TYPHAKEYFILE=/felix-certs/key.key
2021-01-12 09:12:56.345 [INFO][56] tunnel-ip-allocator/config.go 60: Found FELIX_TYPHACERTFILE=/felix-certs/cert.crt
2021-01-12 09:12:56.345 [INFO][56] tunnel-ip-allocator/config.go 60: Found FELIX_TYPHACAFILE=/typha-ca/caBundle
2021-01-12 09:12:56.345 [INFO][56] tunnel-ip-allocator/config.go 60: Found FELIX_TYPHACN=typha-server
2021-01-12 09:12:56.354 [INFO][55] confd/config.go 60: Found FELIX_TYPHAK8SSERVICENAME=calico-typha
2021-01-12 09:12:56.354 [INFO][55] confd/config.go 60: Found FELIX_TYPHAK8SNAMESPACE=calico-system
2021-01-12 09:12:56.354 [INFO][55] confd/config.go 60: Found FELIX_TYPHAKEYFILE=/felix-certs/key.key
2021-01-12 09:12:56.354 [INFO][55] confd/config.go 60: Found FELIX_TYPHACERTFILE=/felix-certs/cert.crt
2021-01-12 09:12:56.354 [INFO][55] confd/config.go 60: Found FELIX_TYPHACAFILE=/typha-ca/caBundle
2021-01-12 09:12:56.355 [INFO][55] confd/config.go 60: Found FELIX_TYPHACN=typha-server
2021-01-12 09:12:56.355 [INFO][55] confd/config.go 81: Skipping confd config file.
2021-01-12 09:12:56.355 [INFO][55] confd/run.go 17: Starting calico-confd
2021-01-12 09:12:56.371 [INFO][57] monitor-addresses/startup.go 395: Using NODENAME environment for node name
2021-01-12 09:12:56.371 [INFO][57] monitor-addresses/startup.go 407: Determined node name: nc1
2021-01-12 09:12:56.378 [INFO][56] tunnel-ip-allocator/discovery.go 162: Found ready Typha addresses. addrs=[]string{"172.17.0.32:5473"}
2021-01-12 09:12:56.378 [INFO][56] tunnel-ip-allocator/discovery.go 165: Chose Typha to connect to. choice="172.17.0.32:5473"
2021-01-12 09:12:56.378 [INFO][56] tunnel-ip-allocator/startsyncerclient.go 56: Connecting to Typha. addr="172.17.0.32:5473"
2021-01-12 09:12:56.378 [INFO][56] tunnel-ip-allocator/sync_client.go 71:  requiringTLS=true
2021-01-12 09:12:56.378 [INFO][56] tunnel-ip-allocator/sync_client.go 200: Starting Typha client
2021-01-12 09:12:56.378 [INFO][56] tunnel-ip-allocator/sync_client.go 71:  requiringTLS=true
2021-01-12 09:12:56.379 [INFO][56] tunnel-ip-allocator/tlsutils.go 39: Make certificate verifier requiredCN="typha-server" requiredURISAN="" roots=&x509.CertPool{bySubjectKeyId:map[string][]int{"\xf7\x8bVN\x8f\xae\x8dQ\xe9ˤ\xa8y\x17\xf6\xe4\x1cA34":[]int{0}}, byName:map[string][]int{"0,1*0(\x06\x03U\x04\x03\f!tigera-operator-signer@1610442752":[]int{0}}, certs:[]*x509.Certificate{(*x509.Certificate)(0xc0005fb180)}}
2021-01-12 09:12:56.379 [INFO][56] tunnel-ip-allocator/sync_client.go 251: Connecting to Typha. address="172.17.0.32:5473" connID=0x0 type="tunnel-ip-allocation"
2021-01-12 09:12:56.385 [INFO][56] tunnel-ip-allocator/tlsutils.go 46: Verify certificate chain signing address="172.17.0.32:5473" connID=0x0 type="tunnel-ip-allocation"
W0112 09:12:56.386054      55 client_config.go:543] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
2021-01-12 09:12:56.387 [INFO][55] confd/client.go 1168: Updated with new cluster IP CIDRs: []
2021-01-12 09:12:56.387 [INFO][55] confd/client.go 1159: Updated with new external IP CIDRs: []
2021-01-12 09:12:56.393 [INFO][56] tunnel-ip-allocator/sync_client.go 266: Connected to Typha. address="172.17.0.32:5473" connID=0x0 type="tunnel-ip-allocation"
2021-01-12 09:12:56.393 [INFO][56] tunnel-ip-allocator/sync_client.go 300: Started Typha client main loop address="172.17.0.32:5473" connID=0x0 type="tunnel-ip-allocation"
2021-01-12 09:12:56.394 [INFO][56] tunnel-ip-allocator/sync_client.go 357: Server hello message received address="172.17.0.32:5473" connID=0x0 serverVersion="v3.17.1" type="tunnel-ip-allocation"
2021-01-12 09:12:56.395 [INFO][56] tunnel-ip-allocator/sync_client.go 328: Status update from Typha. address="172.17.0.32:5473" connID=0x0 newStatus=in-sync type="tunnel-ip-allocation"
2021-01-12 09:12:56.396 [INFO][55] confd/discovery.go 162: Found ready Typha addresses. addrs=[]string{"172.17.0.32:5473"}
2021-01-12 09:12:56.396 [INFO][55] confd/discovery.go 165: Chose Typha to connect to. choice="172.17.0.32:5473"
2021-01-12 09:12:56.396 [INFO][55] confd/startsyncerclient.go 56: Connecting to Typha. addr="172.17.0.32:5473"
2021-01-12 09:12:56.396 [INFO][55] confd/sync_client.go 71:  requiringTLS=true
2021-01-12 09:12:56.397 [INFO][55] confd/sync_client.go 200: Starting Typha client
2021-01-12 09:12:56.397 [INFO][55] confd/sync_client.go 71:  requiringTLS=true
2021-01-12 09:12:56.397 [INFO][55] confd/tlsutils.go 39: Make certificate verifier requiredCN="typha-server" requiredURISAN="" roots=&x509.CertPool{bySubjectKeyId:map[string][]int{"\xf7\x8bVN\x8f\xae\x8dQ\xe9ˤ\xa8y\x17\xf6\xe4\x1cA34":[]int{0}}, byName:map[string][]int{"0,1*0(\x06\x03U\x04\x03\f!tigera-operator-signer@1610442752":[]int{0}}, certs:[]*x509.Certificate{(*x509.Certificate)(0xc000670580)}}
2021-01-12 09:12:56.398 [INFO][55] confd/sync_client.go 251: Connecting to Typha. address="172.17.0.32:5473" connID=0x0 type="bgp"
2021-01-12 09:12:56.403 [INFO][55] confd/tlsutils.go 46: Verify certificate chain signing address="172.17.0.32:5473" connID=0x0 type="bgp"
2021-01-12 09:12:56.408 [INFO][56] tunnel-ip-allocator/ipam.go 1325: Releasing all IPs with handle 'wireguard-tunnel-addr-nc1'
2021-01-12 09:12:56.412 [INFO][55] confd/sync_client.go 266: Connected to Typha. address="172.17.0.32:5473" connID=0x0 type="bgp"
2021-01-12 09:12:56.412 [INFO][55] confd/client.go 351: Source SourceRouteGenerator readiness changed, ready=true
2021-01-12 09:12:56.412 [INFO][55] confd/sync_client.go 300: Started Typha client main loop address="172.17.0.32:5473" connID=0x0 type="bgp"
2021-01-12 09:12:56.414 [INFO][55] confd/sync_client.go 357: Server hello message received address="172.17.0.32:5473" connID=0x0 serverVersion="v3.17.1" type="bgp"
2021-01-12 09:12:56.415 [INFO][55] confd/sync_client.go 328: Status update from Typha. address="172.17.0.32:5473" connID=0x0 newStatus=in-sync type="bgp"
2021-01-12 09:12:56.416 [INFO][55] confd/client.go 877: Recompute BGP peerings: HostBGPConfig(node=nc1; name=ip_addr_v4) updated; HostBGPConfig(node=nc1; name=ip_addr_v6) updated; HostBGPConfig(node=nc1; name=network_v4) updated; HostBGPConfig(node=nc1; name=rr_cluster_id) updated; nc1 updated
2021-01-12 09:12:56.416 [INFO][55] confd/client.go 351: Source SourceSyncer readiness changed, ready=true
2021-01-12 09:12:56.416 [INFO][55] confd/client.go 371: Data is now syncd, can start rendering templates
2021-01-12 09:12:56.429 [INFO][55] confd/resource.go 277: Target config /etc/calico/confd/config/bird_aggr.cfg has been updated
2021-01-12 09:12:56.429 [INFO][56] tunnel-ip-allocator/ipam.go 1325: Releasing all IPs with handle 'ipip-tunnel-addr-nc1'
2021-01-12 09:12:56.430 [INFO][55] confd/resource.go 277: Target config /etc/calico/confd/config/bird_ipam.cfg has been updated
2021-01-12 09:12:56.431 [INFO][55] confd/resource.go 277: Target config /etc/calico/confd/config/bird6_aggr.cfg has been updated
2021-01-12 09:12:56.433 [INFO][55] confd/resource.go 277: Target config /etc/calico/confd/config/bird6_ipam.cfg has been updated
2021-01-12 09:12:56.436 [INFO][55] confd/resource.go 277: Target config /etc/calico/confd/config/bird.cfg has been updated
2021-01-12 09:12:56.437 [INFO][55] confd/resource.go 277: Target config /etc/calico/confd/config/bird6.cfg has been updated
2021-01-12 09:12:56.450 [INFO][60] felix/daemon.go 357: Successfully loaded configuration. GOMAXPROCS=8 builddate="345226a3c1809f6981fd279d81c475d53f894caf" config=&config.Config{UseInternalDataplaneDriver:true, DataplaneDriver:"calico-iptables-plugin", WireguardEnabled:false, WireguardListeningPort:51820, WireguardRoutingRulePriority:99, WireguardInterfaceName:"wireguard.cali", WireguardMTU:0, BPFEnabled:false, BPFDisableUnprivileged:true, BPFLogLevel:"off", BPFDataIfacePattern:(*regexp.Regexp)(0xc0005c1900), BPFConnectTimeLoadBalancingEnabled:true, BPFExternalServiceMode:"tunnel", BPFKubeProxyIptablesCleanupEnabled:true, BPFKubeProxyMinSyncPeriod:1000000000, BPFKubeProxyEndpointSlicesEnabled:false, DebugBPFCgroupV2:"", DebugBPFMapRepinEnabled:true, DatastoreType:"kubernetes", FelixHostname:"nc1", EtcdAddr:"127.0.0.1:2379", EtcdScheme:"http", EtcdKeyFile:"", EtcdCertFile:"", EtcdCaFile:"", EtcdEndpoints:[]string(nil), TyphaAddr:"", TyphaK8sServiceName:"calico-typha", TyphaK8sNamespace:"calico-system", TyphaReadTimeout:30000000000, TyphaWriteTimeout:10000000000, TyphaKeyFile:"/felix-certs/key.key", TyphaCertFile:"/felix-certs/cert.crt", TyphaCAFile:"/typha-ca/caBundle", TyphaCN:"typha-server", TyphaURISAN:"", Ipv6Support:false, IptablesBackend:"auto", RouteRefreshInterval:90000000000, InterfaceRefreshInterval:90000000000, DeviceRouteSourceAddress:net.IP(nil), DeviceRouteProtocol:3, RemoveExternalRoutes:true, IptablesRefreshInterval:90000000000, IptablesPostWriteCheckIntervalSecs:1000000000, IptablesLockFilePath:"/run/xtables.lock", IptablesLockTimeoutSecs:0, IptablesLockProbeIntervalMillis:50000000, FeatureDetectOverride:map[string]string(nil), IpsetsRefreshInterval:10000000000, MaxIpsetSize:1048576, XDPRefreshInterval:90000000000, PolicySyncPathPrefix:"", NetlinkTimeoutSecs:10000000000, MetadataAddr:"", MetadataPort:8775, OpenstackRegion:"", InterfacePrefix:"cali", InterfaceExclude:[]*regexp.Regexp{(*regexp.Regexp)(0xc0005c1b80)}, ChainInsertMode:"insert", DefaultEndpointToHostAction:"ACCEPT", IptablesFilterAllowAction:"ACCEPT", IptablesMangleAllowAction:"ACCEPT", LogPrefix:"calico-packet", LogFilePath:"", LogSeverityFile:"", LogSeverityScreen:"INFO", LogSeveritySys:"", VXLANEnabled:true, VXLANPort:4789, VXLANVNI:4096, VXLANMTU:0, IPv4VXLANTunnelAddr:net.IP{0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xff, 0xff, 0x14, 0x1c, 0x74, 0x40}, VXLANTunnelMACAddr:"", IpInIpEnabled:false, IpInIpMtu:0, IpInIpTunnelAddr:net.IP(nil), AllowVXLANPacketsFromWorkloads:false, AllowIPIPPacketsFromWorkloads:false, AWSSrcDstCheck:"DoNothing", ServiceLoopPrevention:"Drop", ReportingIntervalSecs:0, ReportingTTLSecs:90000000000, EndpointReportingEnabled:false, EndpointReportingDelaySecs:1000000000, IptablesMarkMask:0xffff0000, DisableConntrackInvalidCheck:false, HealthEnabled:true, HealthPort:9099, HealthHost:"localhost", PrometheusMetricsEnabled:false, PrometheusMetricsHost:"", PrometheusMetricsPort:9091, PrometheusGoMetricsEnabled:true, PrometheusProcessMetricsEnabled:true, FailsafeInboundHostPorts:[]config.ProtoPort{config.ProtoPort{Protocol:"tcp", Port:0x16}, config.ProtoPort{Protocol:"udp", Port:0x44}, config.ProtoPort{Protocol:"tcp", Port:0xb3}, config.ProtoPort{Protocol:"tcp", Port:0x94b}, config.ProtoPort{Protocol:"tcp", Port:0x94c}, config.ProtoPort{Protocol:"tcp", Port:0x1561}, config.ProtoPort{Protocol:"tcp", Port:0x192b}, config.ProtoPort{Protocol:"tcp", Port:0x1a0a}, config.ProtoPort{Protocol:"tcp", Port:0x1a0b}}, FailsafeOutboundHostPorts:[]config.ProtoPort{config.ProtoPort{Protocol:"udp", 
Port:0x35}, config.ProtoPort{Protocol:"udp", Port:0x43}, config.ProtoPort{Protocol:"tcp", Port:0xb3}, config.ProtoPort{Protocol:"tcp", Port:0x94b}, config.ProtoPort{Protocol:"tcp", Port:0x94c}, config.ProtoPort{Protocol:"tcp", Port:0x1561}, config.ProtoPort{Protocol:"tcp", Port:0x192b}, config.ProtoPort{Protocol:"tcp", Port:0x1a0a}, config.ProtoPort{Protocol:"tcp", Port:0x1a0b}}, KubeNodePortRanges:[]numorstring.Port{numorstring.Port{MinPort:0x7530, MaxPort:0x7fff, PortName:""}}, NATPortRange:numorstring.Port{MinPort:0x0, MaxPort:0x0, PortName:""}, NATOutgoingAddress:net.IP(nil), UsageReportingEnabled:true, UsageReportingInitialDelaySecs:300000000000, UsageReportingIntervalSecs:86400000000000, ClusterGUID:"6c70b42e3f974469ac2efe1bfa298127", ClusterType:"typha,kdd,k8s,operator,bgp", CalicoVersion:"v3.17.1", ExternalNodesCIDRList:[]string(nil), DebugMemoryProfilePath:"", DebugCPUProfilePath:"/tmp/felix-cpu-<timestamp>.pprof", DebugDisableLogDropping:false, DebugSimulateCalcGraphHangAfter:0, DebugSimulateDataplaneHangAfter:0, DebugPanicAfter:0, DebugSimulateDataRace:false, RouteSource:"CalicoIPAM", RouteTableRange:idalloc.IndexRange{Min:1, Max:250}, IptablesNATOutgoingInterfaceFilter:"", SidecarAccelerationEnabled:false, XDPEnabled:true, GenericXDPEnabled:false, Variant:"Calico", MTUIfacePattern:(*regexp.Regexp)(0xc0005c1f40), internalOverrides:map[string]string{}, sourceToRawConfig:map[config.Source]map[string]string{0x1:map[string]string{"CalicoVersion":"v3.17.1", "ClusterGUID":"6c70b42e3f974469ac2efe1bfa298127", "ClusterType":"typha,kdd,k8s,operator,bgp", "LogSeverityScreen":"Info", "ReportingIntervalSecs":"0", "VXLANEnabled":"true"}, 0x2:map[string]string{"IPv4VXLANTunnelAddr":"20.28.116.64"}, 0x3:map[string]string{"LogFilePath":"None", "LogSeverityFile":"None", "LogSeveritySys":"None", "MetadataAddr":"None"}, 0x4:map[string]string{"datastoretype":"kubernetes", "defaultendpointtohostaction":"ACCEPT", "felixhostname":"nc1", "healthenabled":"true", "iptablesbackend":"auto", "ipv6support":"false", "typhacafile":"/typha-ca/caBundle", "typhacertfile":"/felix-certs/cert.crt", "typhacn":"typha-server", "typhak8snamespace":"calico-system", "typhak8sservicename":"calico-typha", "typhakeyfile":"/felix-certs/key.key"}}, rawValues:map[string]string{"CalicoVersion":"v3.17.1", "ClusterGUID":"6c70b42e3f974469ac2efe1bfa298127", "ClusterType":"typha,kdd,k8s,operator,bgp", "DatastoreType":"kubernetes", "DefaultEndpointToHostAction":"ACCEPT", "FelixHostname":"nc1", "HealthEnabled":"true", "IPv4VXLANTunnelAddr":"20.28.116.64", "IptablesBackend":"auto", "Ipv6Support":"false", "LogFilePath":"None", "LogSeverityFile":"None", "LogSeverityScreen":"Info", "LogSeveritySys":"None", "MetadataAddr":"None", "ReportingIntervalSecs":"0", "TyphaCAFile":"/typha-ca/caBundle", "TyphaCN":"typha-server", "TyphaCertFile":"/felix-certs/cert.crt", "TyphaK8sNamespace":"calico-system", "TyphaK8sServiceName":"calico-typha", "TyphaKeyFile":"/felix-certs/key.key", "VXLANEnabled":"true"}, Err:error(nil), loadClientConfigFromEnvironment:(func() (*apiconfig.CalicoAPIConfig, error))(0x12f9640), useNodeResourceUpdates:false} gitcommit="2020-12-10T23:50:58+0000" version="v3.17.1"
2021-01-12 09:12:56.451 [INFO][60] felix/driver.go 59: Using internal (linux) dataplane driver.
2021-01-12 09:12:56.451 [INFO][60] felix/driver.go 147: Calculated iptables mark bits acceptMark=0x10000 endpointMark=0xfff00000 endpointMarkNonCali=0x0 passMark=0x20000 scratch0Mark=0x40000 scratch1Mark=0x80000
2021-01-12 09:12:56.452 [INFO][56] tunnel-ip-allocator/allocateip.go 291: Current address is still valid, do nothing currentAddr="20.28.116.64" type="vxlanTunnelAddress"
2021-01-12 09:12:56.451 [INFO][60] felix/int_dataplane.go 282: Creating internal dataplane driver. config=intdataplane.Config{Hostname:"nc1", IPv6Enabled:false, RuleRendererOverride:rules.RuleRenderer(nil), IPIPMTU:0, VXLANMTU:0, MaxIPSetSize:1048576, IptablesBackend:"auto", IPSetsRefreshInterval:10000000000, RouteRefreshInterval:90000000000, DeviceRouteSourceAddress:net.IP(nil), DeviceRouteProtocol:3, RemoveExternalRoutes:true, IptablesRefreshInterval:90000000000, IptablesPostWriteCheckInterval:1000000000, IptablesInsertMode:"insert", IptablesLockFilePath:"/run/xtables.lock", IptablesLockTimeout:0, IptablesLockProbeInterval:50000000, XDPRefreshInterval:90000000000, Wireguard:wireguard.Config{Enabled:false, ListeningPort:51820, FirewallMark:0, RoutingRulePriority:99, RoutingTableIndex:1, InterfaceName:"wireguard.cali", MTU:0}, NetlinkTimeout:10000000000, RulesConfig:rules.Config{IPSetConfigV4:(*ipsets.IPVersionConfig)(0xc0006d4690), IPSetConfigV6:(*ipsets.IPVersionConfig)(0xc0006d4780), WorkloadIfacePrefixes:[]string{"cali"}, IptablesMarkAccept:0x10000, IptablesMarkPass:0x20000, IptablesMarkScratch0:0x40000, IptablesMarkScratch1:0x80000, IptablesMarkEndpoint:0xfff00000, IptablesMarkNonCaliEndpoint:0x0, KubeNodePortRanges:[]numorstring.Port{numorstring.Port{MinPort:0x7530, MaxPort:0x7fff, PortName:""}}, KubeIPVSSupportEnabled:false, OpenStackMetadataIP:net.IP(nil), OpenStackMetadataPort:0x2247, OpenStackSpecialCasesEnabled:false, VXLANEnabled:true, VXLANPort:4789, VXLANVNI:4096, IPIPEnabled:false, IPIPTunnelAddress:net.IP(nil), VXLANTunnelAddress:net.IP{0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xff, 0xff, 0x14, 0x1c, 0x74, 0x40}, AllowVXLANPacketsFromWorkloads:false, AllowIPIPPacketsFromWorkloads:false, WireguardEnabled:false, WireguardInterfaceName:"wireguard.cali", IptablesLogPrefix:"calico-packet", EndpointToHostAction:"ACCEPT", IptablesFilterAllowAction:"ACCEPT", IptablesMangleAllowAction:"ACCEPT", FailsafeInboundHostPorts:[]config.ProtoPort{config.ProtoPort{Protocol:"tcp", Port:0x16}, config.ProtoPort{Protocol:"udp", Port:0x44}, config.ProtoPort{Protocol:"tcp", Port:0xb3}, config.ProtoPort{Protocol:"tcp", Port:0x94b}, config.ProtoPort{Protocol:"tcp", Port:0x94c}, config.ProtoPort{Protocol:"tcp", Port:0x1561}, config.ProtoPort{Protocol:"tcp", Port:0x192b}, config.ProtoPort{Protocol:"tcp", Port:0x1a0a}, config.ProtoPort{Protocol:"tcp", Port:0x1a0b}}, FailsafeOutboundHostPorts:[]config.ProtoPort{config.ProtoPort{Protocol:"udp", Port:0x35}, config.ProtoPort{Protocol:"udp", Port:0x43}, config.ProtoPort{Protocol:"tcp", Port:0xb3}, config.ProtoPort{Protocol:"tcp", Port:0x94b}, config.ProtoPort{Protocol:"tcp", Port:0x94c}, config.ProtoPort{Protocol:"tcp", Port:0x1561}, config.ProtoPort{Protocol:"tcp", Port:0x192b}, config.ProtoPort{Protocol:"tcp", Port:0x1a0a}, config.ProtoPort{Protocol:"tcp", Port:0x1a0b}}, DisableConntrackInvalid:false, NATPortRange:numorstring.Port{MinPort:0x0, MaxPort:0x0, PortName:""}, IptablesNATOutgoingInterfaceFilter:"", NATOutgoingAddress:net.IP(nil), BPFEnabled:false, ServiceLoopPrevention:"Drop"}, IfaceMonitorConfig:ifacemonitor.Config{InterfaceExcludes:[]*regexp.Regexp{(*regexp.Regexp)(0xc0005c1b80)}, ResyncInterval:90000000000}, StatusReportingInterval:0, ConfigChangedRestartCallback:(func())(0x1e17000), PostInSyncCallback:(func())(0x1dfe400), HealthAggregator:(*health.HealthAggregator)(0xc00020cb70), RouteTableManager:(*idalloc.IndexAllocator)(0xc000010fb8), DebugSimulateDataplaneHangAfter:0, ExternalNodesCidrs:[]string(nil), BPFEnabled:false, 
BPFDisableUnprivileged:true, BPFKubeProxyIptablesCleanupEnabled:true, BPFLogLevel:"off", BPFDataIfacePattern:(*regexp.Regexp)(0xc0005c1900), XDPEnabled:true, XDPAllowGeneric:false, BPFConntrackTimeouts:conntrack.Timeouts{CreationGracePeriod:10000000000, TCPPreEstablished:20000000000, TCPEstablished:3600000000000, TCPFinsSeen:30000000000, TCPResetSeen:40000000000, UDPLastSeen:60000000000, ICMPLastSeen:5000000000}, BPFCgroupV2:"", BPFConnTimeLBEnabled:true, BPFMapRepin:true, BPFNodePortDSREnabled:false, KubeProxyMinSyncPeriod:1000000000, KubeProxyEndpointSlicesEnabled:false, SidecarAccelerationEnabled:false, LookPathOverride:(func(string) (string, error))(nil), KubeClientSet:(*kubernetes.Clientset)(0xc000159080), FeatureDetectOverrides:map[string]string(nil), hostMTU:0, MTUIfacePattern:(*regexp.Regexp)(0xc0005c1f40)}
2021-01-12 09:12:56.452 [INFO][60] felix/rule_defs.go 338: Creating rule renderer. config=rules.Config{IPSetConfigV4:(*ipsets.IPVersionConfig)(0xc0006d4690), IPSetConfigV6:(*ipsets.IPVersionConfig)(0xc0006d4780), WorkloadIfacePrefixes:[]string{"cali"}, IptablesMarkAccept:0x10000, IptablesMarkPass:0x20000, IptablesMarkScratch0:0x40000, IptablesMarkScratch1:0x80000, IptablesMarkEndpoint:0xfff00000, IptablesMarkNonCaliEndpoint:0x0, KubeNodePortRanges:[]numorstring.Port{numorstring.Port{MinPort:0x7530, MaxPort:0x7fff, PortName:""}}, KubeIPVSSupportEnabled:false, OpenStackMetadataIP:net.IP(nil), OpenStackMetadataPort:0x2247, OpenStackSpecialCasesEnabled:false, VXLANEnabled:true, VXLANPort:4789, VXLANVNI:4096, IPIPEnabled:false, IPIPTunnelAddress:net.IP(nil), VXLANTunnelAddress:net.IP{0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xff, 0xff, 0x14, 0x1c, 0x74, 0x40}, AllowVXLANPacketsFromWorkloads:false, AllowIPIPPacketsFromWorkloads:false, WireguardEnabled:false, WireguardInterfaceName:"wireguard.cali", IptablesLogPrefix:"calico-packet", EndpointToHostAction:"ACCEPT", IptablesFilterAllowAction:"ACCEPT", IptablesMangleAllowAction:"ACCEPT", FailsafeInboundHostPorts:[]config.ProtoPort{config.ProtoPort{Protocol:"tcp", Port:0x16}, config.ProtoPort{Protocol:"udp", Port:0x44}, config.ProtoPort{Protocol:"tcp", Port:0xb3}, config.ProtoPort{Protocol:"tcp", Port:0x94b}, config.ProtoPort{Protocol:"tcp", Port:0x94c}, config.ProtoPort{Protocol:"tcp", Port:0x1561}, config.ProtoPort{Protocol:"tcp", Port:0x192b}, config.ProtoPort{Protocol:"tcp", Port:0x1a0a}, config.ProtoPort{Protocol:"tcp", Port:0x1a0b}}, FailsafeOutboundHostPorts:[]config.ProtoPort{config.ProtoPort{Protocol:"udp", Port:0x35}, config.ProtoPort{Protocol:"udp", Port:0x43}, config.ProtoPort{Protocol:"tcp", Port:0xb3}, config.ProtoPort{Protocol:"tcp", Port:0x94b}, config.ProtoPort{Protocol:"tcp", Port:0x94c}, config.ProtoPort{Protocol:"tcp", Port:0x1561}, config.ProtoPort{Protocol:"tcp", Port:0x192b}, config.ProtoPort{Protocol:"tcp", Port:0x1a0a}, config.ProtoPort{Protocol:"tcp", Port:0x1a0b}}, DisableConntrackInvalid:false, NATPortRange:numorstring.Port{MinPort:0x0, MaxPort:0x0, PortName:""}, IptablesNATOutgoingInterfaceFilter:"", NATOutgoingAddress:net.IP(nil), BPFEnabled:false, ServiceLoopPrevention:"Drop"}
2021-01-12 09:12:56.452 [INFO][60] felix/rule_defs.go 348: Workload to host packets will be accepted.
2021-01-12 09:12:56.452 [INFO][60] felix/rule_defs.go 362: filter table allowed packets will be accepted immediately.
2021-01-12 09:12:56.452 [INFO][60] felix/rule_defs.go 370: mangle table allowed packets will be accepted immediately.
2021-01-12 09:12:56.452 [INFO][60] felix/rule_defs.go 378: Packets to unknown service IPs will be dropped
2021-01-12 09:12:56.454 [INFO][60] felix/int_dataplane.go 868: Determined pod MTU mtu=1450
2021-01-12 09:12:56.454 [INFO][60] felix/iface_monitor.go 72: configured to periodically rescan interfaces. interval=1m30s
2021-01-12 09:12:56.454 [INFO][60] felix/feature_detect.go 259: Looked up iptables command backendMode="legacy" candidates=[]string{"ip6tables-legacy-save", "ip6tables-save"} command="ip6tables-legacy-save" ipVersion=0x6 saveOrRestore="save"
2021-01-12 09:12:56.454 [INFO][60] felix/feature_detect.go 259: Looked up iptables command backendMode="legacy" candidates=[]string{"iptables-legacy-save", "iptables-save"} command="iptables-legacy-save" ipVersion=0x4 saveOrRestore="save"
2021-01-12 09:12:56.462 [INFO][60] felix/feature_detect.go 137: Updating detected iptables features features=iptables.Features{SNATFullyRandom:true, MASQFullyRandom:true, RestoreSupportsLock:true} iptablesVersion=1.8.2 kernelVersion=4.19.78
2021-01-12 09:12:56.463 [INFO][60] felix/table.go 329: Calculated old-insert detection regex. pattern="(?:-j|--jump) cali-|(?:-j|--jump) califw-|(?:-j|--jump) calitw-|(?:-j|--jump) califh-|(?:-j|--jump) calith-|(?:-j|--jump) calipi-|(?:-j|--jump) calipo-|(?:-j|--jump) felix-"
2021-01-12 09:12:56.463 [INFO][60] felix/feature_detect.go 259: Looked up iptables command backendMode="legacy" candidates=[]string{"iptables-legacy-restore", "iptables-restore"} command="iptables-legacy-restore" ipVersion=0x4 saveOrRestore="restore"
2021-01-12 09:12:56.463 [INFO][60] felix/feature_detect.go 259: Looked up iptables command backendMode="legacy" candidates=[]string{"iptables-legacy-save", "iptables-save"} command="iptables-legacy-save" ipVersion=0x4 saveOrRestore="save"
2021-01-12 09:12:56.463 [INFO][60] felix/table.go 329: Calculated old-insert detection regex. pattern="(?:-j|--jump) cali-|(?:-j|--jump) califw-|(?:-j|--jump) calitw-|(?:-j|--jump) califh-|(?:-j|--jump) calith-|(?:-j|--jump) calipi-|(?:-j|--jump) calipo-|(?:-j|--jump) felix-|-A POSTROUTING .* felix-masq-ipam-pools .*|-A POSTROUTING -o tunl0 -m addrtype ! --src-type LOCAL --limit-iface-out -m addrtype --src-type LOCAL -j MASQUERADE"
2021-01-12 09:12:56.463 [INFO][60] felix/feature_detect.go 259: Looked up iptables command backendMode="legacy" candidates=[]string{"iptables-legacy-restore", "iptables-restore"} command="iptables-legacy-restore" ipVersion=0x4 saveOrRestore="restore"
2021-01-12 09:12:56.463 [INFO][60] felix/feature_detect.go 259: Looked up iptables command backendMode="legacy" candidates=[]string{"iptables-legacy-save", "iptables-save"} command="iptables-legacy-save" ipVersion=0x4 saveOrRestore="save"
2021-01-12 09:12:56.463 [INFO][60] felix/table.go 329: Calculated old-insert detection regex. pattern="(?:-j|--jump) cali-|(?:-j|--jump) califw-|(?:-j|--jump) calitw-|(?:-j|--jump) califh-|(?:-j|--jump) calith-|(?:-j|--jump) calipi-|(?:-j|--jump) calipo-|(?:-j|--jump) felix-"
2021-01-12 09:12:56.463 [INFO][60] felix/feature_detect.go 259: Looked up iptables command backendMode="legacy" candidates=[]string{"iptables-legacy-restore", "iptables-restore"} command="iptables-legacy-restore" ipVersion=0x4 saveOrRestore="restore"
2021-01-12 09:12:56.464 [INFO][60] felix/feature_detect.go 259: Looked up iptables command backendMode="legacy" candidates=[]string{"iptables-legacy-save", "iptables-save"} command="iptables-legacy-save" ipVersion=0x4 saveOrRestore="save"
2021-01-12 09:12:56.464 [INFO][60] felix/table.go 329: Calculated old-insert detection regex. pattern="(?:-j|--jump) cali-|(?:-j|--jump) califw-|(?:-j|--jump) calitw-|(?:-j|--jump) califh-|(?:-j|--jump) calith-|(?:-j|--jump) calipi-|(?:-j|--jump) calipo-|(?:-j|--jump) felix-"
2021-01-12 09:12:56.464 [INFO][60] felix/feature_detect.go 259: Looked up iptables command backendMode="legacy" candidates=[]string{"iptables-legacy-restore", "iptables-restore"} command="iptables-legacy-restore" ipVersion=0x4 saveOrRestore="restore"
2021-01-12 09:12:56.464 [INFO][60] felix/feature_detect.go 259: Looked up iptables command backendMode="legacy" candidates=[]string{"iptables-legacy-save", "iptables-save"} command="iptables-legacy-save" ipVersion=0x4 saveOrRestore="save"
2021-01-12 09:12:56.464 [INFO][60] felix/route_table.go 241: Calculated interface name regexp regex="^vxlan.calico$"
2021-01-12 09:12:56.464 [INFO][60] felix/vxlan_mgr.go 340: VXLAN tunnel device thread started. mtu=1450
2021-01-12 09:12:56.464 [WARNING][60] felix/int_dataplane.go 448: Can't enable XDP acceleration. error=/sys/fs/bpf is not mounted
2021-01-12 09:12:56.465 [INFO][60] felix/connecttime.go 46: Running bpftool to look up programs attached to cgroup args=[]string{"bpftool", "-j", "-p", "cgroup", "show", "/run/calico/cgroup"}
2021-01-12 09:12:56.467 [INFO][60] felix/connecttime.go 49: Failed to list BPF programs.  Assuming not supported/nothing to clean up. error=exit status 255 output="[]\n"
2021-01-12 09:12:56.467 [INFO][60] felix/int_dataplane.go 517: Failed to remove BPF connect-time load balancer, ignoring. error=exit status 255
2021-01-12 09:12:56.468 [INFO][60] felix/cleanup.go 37: Failed to list BPF maps, assuming there's nothing to clean up. error=exit status 255
2021-01-12 09:12:56.468 [INFO][60] felix/route_table.go 241: Calculated interface name regexp regex="^cali.*"
2021-01-12 09:12:56.468 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="all-ipam-pools" setType="hash:net"
2021-01-12 09:12:56.468 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="masq-ipam-pools" setType="hash:net"
2021-01-12 09:12:56.468 [INFO][60] felix/route_table.go 241: Calculated interface name regexp regex="^wireguard.cali$"
2021-01-12 09:12:56.468 [INFO][60] felix/int_dataplane.go 772: Registering to report health.
2021-01-12 09:12:56.470 [INFO][60] felix/int_dataplane.go 1500: attempted to modprobe nf_conntrack_proto_sctp error=exit status 1 output=""
2021-01-12 09:12:56.470 [INFO][60] felix/int_dataplane.go 1502: Making sure IPv4 forwarding is enabled.
2021-01-12 09:12:56.472 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-failsafe-in" ipVersion=0x4 table="raw"
2021-01-12 09:12:56.472 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-failsafe-out" ipVersion=0x4 table="raw"
2021-01-12 09:12:56.472 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-PREROUTING" ipVersion=0x4 table="raw"
2021-01-12 09:12:56.472 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-from-host-endpoint"
2021-01-12 09:12:56.473 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-OUTPUT" ipVersion=0x4 table="raw"
2021-01-12 09:12:56.473 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-to-host-endpoint"
2021-01-12 09:12:56.473 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-PREROUTING"
2021-01-12 09:12:56.473 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-OUTPUT"
2021-01-12 09:12:56.473 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-FORWARD" ipVersion=0x4 table="filter"
2021-01-12 09:12:56.473 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-from-hep-forward"
2021-01-12 09:12:56.473 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-from-wl-dispatch"
2021-01-12 09:12:56.473 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-to-wl-dispatch"
2021-01-12 09:12:56.474 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-to-hep-forward"
2021-01-12 09:12:56.474 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-cidr-block"
2021-01-12 09:12:56.474 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-INPUT" ipVersion=0x4 table="filter"
2021-01-12 09:12:56.474 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-wl-to-host"
2021-01-12 09:12:56.474 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-from-host-endpoint"
2021-01-12 09:12:56.474 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-wl-to-host" ipVersion=0x4 table="filter"
2021-01-12 09:12:56.474 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-failsafe-in" ipVersion=0x4 table="filter"
2021-01-12 09:12:56.474 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-OUTPUT" ipVersion=0x4 table="filter"
2021-01-12 09:12:56.474 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-to-host-endpoint"
2021-01-12 09:12:56.474 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-failsafe-out" ipVersion=0x4 table="filter"
2021-01-12 09:12:56.474 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-FORWARD"
2021-01-12 09:12:56.474 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-INPUT"
2021-01-12 09:12:56.474 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-OUTPUT"
2021-01-12 09:12:56.474 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-PREROUTING" ipVersion=0x4 table="nat"
2021-01-12 09:12:56.474 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-fip-dnat"
2021-01-12 09:12:56.474 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-POSTROUTING" ipVersion=0x4 table="nat"
2021-01-12 09:12:56.474 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-fip-snat"
2021-01-12 09:12:56.474 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-nat-outgoing"
2021-01-12 09:12:56.474 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-OUTPUT" ipVersion=0x4 table="nat"
2021-01-12 09:12:56.474 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-PREROUTING"
2021-01-12 09:12:56.474 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-POSTROUTING"
2021-01-12 09:12:56.474 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-OUTPUT"
2021-01-12 09:12:56.475 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-failsafe-in" ipVersion=0x4 table="mangle"
2021-01-12 09:12:56.475 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-PREROUTING" ipVersion=0x4 table="mangle"
2021-01-12 09:12:56.475 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-from-host-endpoint"
2021-01-12 09:12:56.475 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-PREROUTING"
2021-01-12 09:12:56.475 [INFO][60] felix/int_dataplane.go 1033: IPIP disabled. Not starting tunnel update thread.
2021-01-12 09:12:56.475 [INFO][60] felix/int_dataplane.go 1279: Started internal iptables dataplane driver loop
2021-01-12 09:12:56.475 [INFO][60] felix/daemon.go 385: Connect to the dataplane driver.
2021-01-12 09:12:56.475 [INFO][60] felix/iface_monitor.go 97: Interface monitoring thread started.
2021-01-12 09:12:56.475 [INFO][60] felix/int_dataplane.go 1289: Will refresh IP sets on timer interval=1m30s
2021-01-12 09:12:56.475 [INFO][60] felix/int_dataplane.go 1715: Started internal status report thread
2021-01-12 09:12:56.475 [INFO][60] felix/iface_monitor.go 107: Subscribed to netlink updates.
2021-01-12 09:12:56.475 [INFO][60] felix/int_dataplane.go 1299: Will refresh routes on timer interval=1m30s
2021-01-12 09:12:56.475 [INFO][60] felix/int_dataplane.go 1717: Process status reports disabled
2021-01-12 09:12:56.475 [INFO][60] felix/daemon.go 441: Connecting to Typha. addr="172.17.0.32:5473"
2021-01-12 09:12:56.475 [INFO][60] felix/sync_client.go 71:  requiringTLS=true
2021-01-12 09:12:56.475 [INFO][60] felix/daemon.go 466: Created Syncer syncer=<nil>
2021-01-12 09:12:56.475 [INFO][60] felix/daemon.go 473: Starting the Typha connection
2021-01-12 09:12:56.475 [INFO][60] felix/sync_client.go 200: Starting Typha client
2021-01-12 09:12:56.475 [INFO][60] felix/sync_client.go 71:  requiringTLS=true
2021-01-12 09:12:56.476 [INFO][60] felix/tlsutils.go 39: Make certificate verifier requiredCN="typha-server" requiredURISAN="" roots=&x509.CertPool{bySubjectKeyId:map[string][]int{"\xf7\x8bVN\x8f\xae\x8dQ\xe9ˤ\xa8y\x17\xf6\xe4\x1cA34":[]int{0}}, byName:map[string][]int{"0,1*0(\x06\x03U\x04\x03\f!tigera-operator-signer@1610442752":[]int{0}}, certs:[]*x509.Certificate{(*x509.Certificate)(0xc0007d0680)}}
2021-01-12 09:12:56.476 [INFO][60] felix/sync_client.go 251: Connecting to Typha. address="172.17.0.32:5473" connID=0x0 type=""
2021-01-12 09:12:56.478 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=1 ifaceName="lo" state="up"
2021-01-12 09:12:56.478 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"lo", State:"up", Index:1}
2021-01-12 09:12:56.480 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"127.0.0.0":set.empty{}, "127.0.0.1":set.empty{}} ifaceName="lo"
2021-01-12 09:12:56.481 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=2 ifaceName="eth0" state="up"
2021-01-12 09:12:56.481 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"lo", Addrs:set.mapSet{"127.0.0.0":set.empty{}, "127.0.0.1":set.empty{}}}
2021-01-12 09:12:56.481 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"lo", Addrs:set.mapSet{"127.0.0.0":set.empty{}, "127.0.0.1":set.empty{}}}
2021-01-12 09:12:56.481 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.481 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"eth0", State:"up", Index:2}
2021-01-12 09:12:56.482 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="eth0"
2021-01-12 09:12:56.482 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=3 ifaceName="eth1" state="up"
2021-01-12 09:12:56.482 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth0", Addrs:set.mapSet{}}
2021-01-12 09:12:56.482 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth0", Addrs:set.mapSet{}}
2021-01-12 09:12:56.482 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.482 [INFO][60] felix/tlsutils.go 46: Verify certificate chain signing address="172.17.0.32:5473" connID=0x0 type=""
2021-01-12 09:12:56.482 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"eth1", State:"up", Index:3}
2021-01-12 09:12:56.483 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="eth1"
2021-01-12 09:12:56.483 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=4 ifaceName="bond0" state="up"
2021-01-12 09:12:56.484 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1", Addrs:set.mapSet{}}
2021-01-12 09:12:56.484 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1", Addrs:set.mapSet{}}
2021-01-12 09:12:56.484 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.485 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"bond0", State:"up", Index:4}
2021-01-12 09:12:56.485 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"172.17.0.16":set.empty{}, "172.17.0.20":set.empty{}, "172.17.0.32":set.empty{}} ifaceName="bond0"
2021-01-12 09:12:56.485 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=5 ifaceName="brGXData" state="up"
2021-01-12 09:12:56.486 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"bond0", Addrs:set.mapSet{"172.17.0.16":set.empty{}, "172.17.0.20":set.empty{}, "172.17.0.32":set.empty{}}}
2021-01-12 09:12:56.486 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"bond0", Addrs:set.mapSet{"172.17.0.16":set.empty{}, "172.17.0.20":set.empty{}, "172.17.0.32":set.empty{}}}
2021-01-12 09:12:56.486 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.486 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"brGXData", State:"up", Index:5}
2021-01-12 09:12:56.487 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="brGXData"
2021-01-12 09:12:56.487 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=6 ifaceName="eth1.31" state="up"
2021-01-12 09:12:56.488 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"brGXData", Addrs:set.mapSet{}}
2021-01-12 09:12:56.488 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"brGXData", Addrs:set.mapSet{}}
2021-01-12 09:12:56.488 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.488 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"eth1.31", State:"up", Index:6}
2021-01-12 09:12:56.488 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="eth1.31"
2021-01-12 09:12:56.488 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=7 ifaceName="brGXMgmt" state="up"
2021-01-12 09:12:56.488 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1.31", Addrs:set.mapSet{}}
2021-01-12 09:12:56.488 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1.31", Addrs:set.mapSet{}}
2021-01-12 09:12:56.489 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.489 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"brGXMgmt", State:"up", Index:7}
2021-01-12 09:12:56.489 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"172.29.55.10":set.empty{}} ifaceName="brGXMgmt"
2021-01-12 09:12:56.490 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=8 ifaceName="eth1.400" state="up"
2021-01-12 09:12:56.490 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"brGXMgmt", Addrs:set.mapSet{"172.29.55.10":set.empty{}}}
2021-01-12 09:12:56.490 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"brGXMgmt", Addrs:set.mapSet{"172.29.55.10":set.empty{}}}
2021-01-12 09:12:56.490 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.490 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"eth1.400", State:"up", Index:8}
2021-01-12 09:12:56.491 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="eth1.400"
2021-01-12 09:12:56.491 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=9 ifaceName="br3_400" state="up"
2021-01-12 09:12:56.492 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1.400", Addrs:set.mapSet{}}
2021-01-12 09:12:56.492 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1.400", Addrs:set.mapSet{}}
2021-01-12 09:12:56.493 [INFO][60] felix/sync_client.go 266: Connected to Typha. address="172.17.0.32:5473" connID=0x0 type=""
2021-01-12 09:12:56.493 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.493 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"br3_400", State:"up", Index:9}
2021-01-12 09:12:56.493 [INFO][60] felix/sync_client.go 300: Started Typha client main loop address="172.17.0.32:5473" connID=0x0 type=""
2021-01-12 09:12:56.493 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"172.27.0.11":set.empty{}} ifaceName="br3_400"
2021-01-12 09:12:56.493 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=10 ifaceName="eth1.4086" state="up"
2021-01-12 09:12:56.493 [INFO][60] felix/sync_client.go 357: Server hello message received address="172.17.0.32:5473" connID=0x0 serverVersion="v3.17.1" type=""
2021-01-12 09:12:56.493 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"br3_400", Addrs:set.mapSet{"172.27.0.11":set.empty{}}}
2021-01-12 09:12:56.494 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"br3_400", Addrs:set.mapSet{"172.27.0.11":set.empty{}}}
2021-01-12 09:12:56.494 [INFO][60] felix/calc_graph.go 114: Creating calculation graph, filtered to hostname nc1
2021-01-12 09:12:56.494 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.494 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.WorkloadEndpointKey: (dispatcher.UpdateHandler)(0x1761f20)
2021-01-12 09:12:56.494 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.HostEndpointKey: (dispatcher.UpdateHandler)(0x1761f20)
2021-01-12 09:12:56.494 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"eth1.4086", State:"up", Index:10}
2021-01-12 09:12:56.494 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.WorkloadEndpointKey: (dispatcher.UpdateHandler)(0x1762020)
2021-01-12 09:12:56.494 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.HostEndpointKey: (dispatcher.UpdateHandler)(0x1762020)
2021-01-12 09:12:56.495 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.WorkloadEndpointKey: (dispatcher.UpdateHandler)(0x1761dc0)
2021-01-12 09:12:56.495 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.HostEndpointKey: (dispatcher.UpdateHandler)(0x1761dc0)
2021-01-12 09:12:56.495 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.PolicyKey: (dispatcher.UpdateHandler)(0x1761dc0)
2021-01-12 09:12:56.495 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.ProfileRulesKey: (dispatcher.UpdateHandler)(0x1761dc0)
2021-01-12 09:12:56.495 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.ProfileLabelsKey: (dispatcher.UpdateHandler)(0x1761dc0)
2021-01-12 09:12:56.495 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="eth1.4086"
2021-01-12 09:12:56.495 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.ProfileTagsKey: (dispatcher.UpdateHandler)(0x1761dc0)
2021-01-12 09:12:56.495 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=11 ifaceName="br3_4086" state="up"
2021-01-12 09:12:56.495 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.ProfileTagsKey: (dispatcher.UpdateHandler)(0x16a1fe0)
2021-01-12 09:12:56.495 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.ProfileLabelsKey: (dispatcher.UpdateHandler)(0x16a1fe0)
2021-01-12 09:12:56.496 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.WorkloadEndpointKey: (dispatcher.UpdateHandler)(0x16a1fe0)
2021-01-12 09:12:56.496 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.HostEndpointKey: (dispatcher.UpdateHandler)(0x16a1fe0)
2021-01-12 09:12:56.496 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.NetworkSetKey: (dispatcher.UpdateHandler)(0x16a1fe0)
2021-01-12 09:12:56.496 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.PolicyKey: (dispatcher.UpdateHandler)(0x17625e0)
2021-01-12 09:12:56.496 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.WorkloadEndpointKey: (dispatcher.UpdateHandler)(0x17625e0)
2021-01-12 09:12:56.496 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.HostEndpointKey: (dispatcher.UpdateHandler)(0x17625e0)
2021-01-12 09:12:56.496 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.HostIPKey: (dispatcher.UpdateHandler)(0x17621c0)
2021-01-12 09:12:56.496 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.IPPoolKey: (dispatcher.UpdateHandler)(0x17621c0)
2021-01-12 09:12:56.496 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.WireguardKey: (dispatcher.UpdateHandler)(0x17621c0)
2021-01-12 09:12:56.497 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.ResourceKey: (dispatcher.UpdateHandler)(0x17621c0)
2021-01-12 09:12:56.497 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"172.19.241.17":set.empty{}} ifaceName="br3_4086"
2021-01-12 09:12:56.497 [INFO][60] felix/l3_route_resolver.go 142: Creating L3 route resolver
2021-01-12 09:12:56.497 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=12 ifaceName="eth1.4087" state="up"
2021-01-12 09:12:56.497 [INFO][60] felix/l3_route_resolver.go 161: Registering L3 route resolver (node resources on)
2021-01-12 09:12:56.497 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1.4086", Addrs:set.mapSet{}}
2021-01-12 09:12:56.497 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1.4086", Addrs:set.mapSet{}}
2021-01-12 09:12:56.497 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.ResourceKey: (dispatcher.UpdateHandler)(0x17622c0)
2021-01-12 09:12:56.497 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.IPPoolKey: (dispatcher.UpdateHandler)(0x1762400)
2021-01-12 09:12:56.497 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.497 [INFO][60] felix/l3_route_resolver.go 172: Registering for L3 route updates routeSource="CalicoIPAM"
2021-01-12 09:12:56.498 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.BlockKey: (dispatcher.UpdateHandler)(0x1762540)
2021-01-12 09:12:56.498 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.WorkloadEndpointKey: (dispatcher.UpdateHandler)(0x17624a0)
2021-01-12 09:12:56.497 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"br3_4086", Addrs:set.mapSet{"172.19.241.17":set.empty{}}}
2021-01-12 09:12:56.498 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.ResourceKey: (dispatcher.UpdateHandler)(0x1762960)
2021-01-12 09:12:56.498 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"br3_4086", Addrs:set.mapSet{"172.19.241.17":set.empty{}}}
2021-01-12 09:12:56.498 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.498 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.HostConfigKey: (dispatcher.UpdateHandler)(0x1762aa0)
2021-01-12 09:12:56.498 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"br3_4086", State:"up", Index:11}
2021-01-12 09:12:56.498 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.GlobalConfigKey: (dispatcher.UpdateHandler)(0x17620c0)
2021-01-12 09:12:56.498 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"eth1.4087", State:"up", Index:12}
2021-01-12 09:12:56.498 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.HostConfigKey: (dispatcher.UpdateHandler)(0x17620c0)
2021-01-12 09:12:56.499 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.ReadyFlagKey: (dispatcher.UpdateHandler)(0x17620c0)
2021-01-12 09:12:56.499 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="eth1.4087"
2021-01-12 09:12:56.499 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.ProfileLabelsKey: (dispatcher.UpdateHandler)(0x1762760)
2021-01-12 09:12:56.499 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=13 ifaceName="br3_4087" state="up"
2021-01-12 09:12:56.499 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.HostIPKey: (dispatcher.UpdateHandler)(0x1762800)
2021-01-12 09:12:56.499 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.WorkloadEndpointKey: (dispatcher.UpdateHandler)(0x1762800)
2021-01-12 09:12:56.499 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.HostEndpointKey: (dispatcher.UpdateHandler)(0x1762800)
2021-01-12 09:12:56.499 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1.4087", Addrs:set.mapSet{}}
2021-01-12 09:12:56.500 [INFO][60] felix/dispatcher.go 68: Registering listener for type model.HostConfigKey: (dispatcher.UpdateHandler)(0x1762800)
2021-01-12 09:12:56.500 [INFO][60] felix/async_calc_graph.go 255: Starting AsyncCalcGraph
2021-01-12 09:12:56.500 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1.4087", Addrs:set.mapSet{}}
2021-01-12 09:12:56.500 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.500 [INFO][60] felix/daemon.go 577: Started the processing graph
2021-01-12 09:12:56.500 [INFO][60] felix/async_calc_graph.go 137: AsyncCalcGraph running
2021-01-12 09:12:56.500 [INFO][60] felix/daemon.go 955: Reading from dataplane driver pipe...
2021-01-12 09:12:56.500 [INFO][60] felix/daemon.go 674: No driver process to monitor
2021-01-12 09:12:56.500 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"172.19.241.1":set.empty{}} ifaceName="br3_4087"
2021-01-12 09:12:56.500 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ConfigUpdate update from calculation graph msg=config:<key:"CalicoVersion" value:"v3.17.1" > config:<key:"ClusterGUID" value:"6c70b42e3f974469ac2efe1bfa298127" > config:<key:"ClusterType" value:"typha,kdd,k8s,operator,bgp" > config:<key:"DatastoreType" value:"kubernetes" > config:<key:"DefaultEndpointToHostAction" value:"ACCEPT" > config:<key:"FelixHostname" value:"nc1" > config:<key:"HealthEnabled" value:"true" > config:<key:"IPv4VXLANTunnelAddr" value:"20.28.116.64" > config:<key:"IptablesBackend" value:"auto" > config:<key:"Ipv6Support" value:"false" > config:<key:"LogFilePath" value:"None" > config:<key:"LogSeverityFile" value:"None" > config:<key:"LogSeverityScreen" value:"Info" > config:<key:"LogSeveritySys" value:"None" > config:<key:"MetadataAddr" value:"None" > config:<key:"ReportingIntervalSecs" value:"0" > config:<key:"TyphaCAFile" value:"/typha-ca/caBundle" > config:<key:"TyphaCN" value:"typha-server" > config:<key:"TyphaCertFile" value:"/felix-certs/cert.crt" > config:<key:"TyphaK8sNamespace" value:"calico-system" > config:<key:"TyphaK8sServiceName" value:"calico-typha" > config:<key:"TyphaKeyFile" value:"/felix-certs/key.key" > config:<key:"VXLANEnabled" value:"true" >
2021-01-12 09:12:56.501 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=14 ifaceName="eth1.4088" state="up"
2021-01-12 09:12:56.501 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"br3_4087", State:"up", Index:13}
2021-01-12 09:12:56.501 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"eth1.4088", State:"up", Index:14}
2021-01-12 09:12:56.501 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"br3_4087", Addrs:set.mapSet{"172.19.241.1":set.empty{}}}
2021-01-12 09:12:56.501 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"br3_4087", Addrs:set.mapSet{"172.19.241.1":set.empty{}}}
2021-01-12 09:12:56.502 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.502 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="eth1.4088"
2021-01-12 09:12:56.502 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=15 ifaceName="br3_4088" state="up"
2021-01-12 09:12:56.503 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1.4088", Addrs:set.mapSet{}}
2021-01-12 09:12:56.504 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1.4088", Addrs:set.mapSet{}}
2021-01-12 09:12:56.504 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.504 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"br3_4088", State:"up", Index:15}
2021-01-12 09:12:56.505 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"172.19.254.1":set.empty{}} ifaceName="br3_4088"
2021-01-12 09:12:56.505 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=16 ifaceName="eth1.4089" state="up"
2021-01-12 09:12:56.505 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"br3_4088", Addrs:set.mapSet{"172.19.254.1":set.empty{}}}
2021-01-12 09:12:56.505 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"br3_4088", Addrs:set.mapSet{"172.19.254.1":set.empty{}}}
2021-01-12 09:12:56.505 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.505 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"eth1.4089", State:"up", Index:16}
2021-01-12 09:12:56.506 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="eth1.4089"
2021-01-12 09:12:56.506 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=17 ifaceName="br3_4089" state="up"
2021-01-12 09:12:56.506 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1.4089", Addrs:set.mapSet{}}
2021-01-12 09:12:56.506 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1.4089", Addrs:set.mapSet{}}
2021-01-12 09:12:56.506 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.506 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"br3_4089", State:"up", Index:17}
2021-01-12 09:12:56.508 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"172.19.253.1":set.empty{}} ifaceName="br3_4089"
2021-01-12 09:12:56.508 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=18 ifaceName="eth1.4090" state="up"
2021-01-12 09:12:56.509 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"br3_4089", Addrs:set.mapSet{"172.19.253.1":set.empty{}}}
2021-01-12 09:12:56.509 [INFO][60] felix/config_batcher.go 74: Global config update: {{GlobalFelixConfig(name=LogSeverityScreen) Info 1140 <nil> 0s} 1}
2021-01-12 09:12:56.509 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"br3_4089", Addrs:set.mapSet{"172.19.253.1":set.empty{}}}
2021-01-12 09:12:56.509 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="eth1.4090"
2021-01-12 09:12:56.510 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.510 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=19 ifaceName="br3_4090" state="up"
2021-01-12 09:12:56.510 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1.4090", Addrs:set.mapSet{}}
2021-01-12 09:12:56.510 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1.4090", Addrs:set.mapSet{}}
2021-01-12 09:12:56.510 [INFO][60] felix/config_batcher.go 74: Global config update: {{GlobalFelixConfig(name=ClusterGUID) 6c70b42e3f974469ac2efe1bfa298127 1099 <nil> 0s} 1}
2021-01-12 09:12:56.510 [INFO][60] felix/config_batcher.go 74: Global config update: {{GlobalFelixConfig(name=ClusterType) typha,kdd,k8s,operator,bgp 1139 <nil> 0s} 1}
2021-01-12 09:12:56.510 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.510 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"eth1.4090", State:"up", Index:18}
2021-01-12 09:12:56.511 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"br3_4090", State:"up", Index:19}
2021-01-12 09:12:56.511 [INFO][60] felix/config_batcher.go 74: Global config update: {{GlobalFelixConfig(name=CalicoVersion) v3.17.1 1139 <nil> 0s} 1}
2021-01-12 09:12:56.511 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"172.19.240.1":set.empty{}} ifaceName="br3_4090"
2021-01-12 09:12:56.512 [INFO][60] felix/sync_client.go 328: Status update from Typha. address="172.17.0.32:5473" connID=0x0 newStatus=in-sync type=""
2021-01-12 09:12:56.512 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=20 ifaceName="eth1.4091" state="up"
2021-01-12 09:12:56.512 [INFO][60] felix/config_batcher.go 74: Global config update: {{GlobalFelixConfig(name=VXLANEnabled) true 1137 <nil> 0s} 1}
2021-01-12 09:12:56.512 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"br3_4090", Addrs:set.mapSet{"172.19.240.1":set.empty{}}}
2021-01-12 09:12:56.512 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"br3_4090", Addrs:set.mapSet{"172.19.240.1":set.empty{}}}
2021-01-12 09:12:56.513 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.513 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"eth1.4091", State:"up", Index:20}
2021-01-12 09:12:56.513 [INFO][60] felix/vxlan_resolver.go 247: Missing vxlan tunnel address for node, cannot send VTEP yet node="nc1"
2021-01-12 09:12:56.513 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="eth1.4091"
2021-01-12 09:12:56.513 [INFO][60] felix/l3_route_resolver.go 532: Pool is active newType=VXLAN oldType=NONE
2021-01-12 09:12:56.513 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=21 ifaceName="br3_4091" state="up"
2021-01-12 09:12:56.513 [INFO][60] felix/config_batcher.go 74: Global config update: {{GlobalFelixConfig(name=ReportingIntervalSecs) 0 1140 <nil> 0s} 1}
2021-01-12 09:12:56.513 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1.4091", Addrs:set.mapSet{}}
2021-01-12 09:12:56.513 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1.4091", Addrs:set.mapSet{}}
2021-01-12 09:12:56.515 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.515 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"172.19.252.1":set.empty{}} ifaceName="br3_4091"
2021-01-12 09:12:56.515 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=22 ifaceName="eth1.4092" state="up"
2021-01-12 09:12:56.515 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"br3_4091", State:"up", Index:21}
2021-01-12 09:12:56.515 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"eth1.4092", State:"up", Index:22}
2021-01-12 09:12:56.515 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"br3_4091", Addrs:set.mapSet{"172.19.252.1":set.empty{}}}
2021-01-12 09:12:56.515 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"br3_4091", Addrs:set.mapSet{"172.19.252.1":set.empty{}}}
2021-01-12 09:12:56.513 [INFO][60] felix/config_batcher.go 61: Host config update for this host: {{HostConfig(node=nc1,name=IPv4VXLANTunnelAddr) 20.28.116.64 1147 <nil> 0s} 1}
2021-01-12 09:12:56.516 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.516 [INFO][60] felix/calc_graph.go 413: Local endpoint updated id=WorkloadEndpoint(node=nc1, orchestrator=k8s, workload=kube-system/metrics-server-7566d596c8-nsxk6, name=eth0)
2021-01-12 09:12:56.516 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="eth1.4092"
2021-01-12 09:12:56.517 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1.4092", Addrs:set.mapSet{}}
2021-01-12 09:12:56.517 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=23 ifaceName="br3_4092" state="up"
2021-01-12 09:12:56.517 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1.4092", Addrs:set.mapSet{}}
2021-01-12 09:12:56.517 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.518 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"br3_4092", State:"up", Index:23}
2021-01-12 09:12:56.518 [INFO][60] felix/config_batcher.go 102: Datamodel in sync, flushing config update
2021-01-12 09:12:56.518 [INFO][60] felix/config_batcher.go 112: Sending config update global: map[CalicoVersion:v3.17.1 ClusterGUID:6c70b42e3f974469ac2efe1bfa298127 ClusterType:typha,kdd,k8s,operator,bgp LogSeverityScreen:Info ReportingIntervalSecs:0 VXLANEnabled:true], host: map[IPv4VXLANTunnelAddr:20.28.116.64].
2021-01-12 09:12:56.518 [INFO][60] felix/usagerep.go 91: Waiting before first check-in delay=5m0.023s
2021-01-12 09:12:56.518 [INFO][60] felix/async_calc_graph.go 166: First time we've been in sync
2021-01-12 09:12:56.518 [INFO][60] felix/health.go 133: Health of component changed lastReport=health.HealthReport{Live:true, Ready:false} name="async_calc_graph" newReport=&health.HealthReport{Live:true, Ready:true}
2021-01-12 09:12:56.518 [INFO][60] felix/event_sequencer.go 234: Possible config update. global=map[string]string{"CalicoVersion":"v3.17.1", "ClusterGUID":"6c70b42e3f974469ac2efe1bfa298127", "ClusterType":"typha,kdd,k8s,operator,bgp", "LogSeverityScreen":"Info", "ReportingIntervalSecs":"0", "VXLANEnabled":"true"} host=map[string]string{"IPv4VXLANTunnelAddr":"20.28.116.64"}
2021-01-12 09:12:56.519 [INFO][60] felix/config_params.go 341: Merging in config from datastore (global): map[CalicoVersion:v3.17.1 ClusterGUID:6c70b42e3f974469ac2efe1bfa298127 ClusterType:typha,kdd,k8s,operator,bgp LogSeverityScreen:Info ReportingIntervalSecs:0 VXLANEnabled:true]
2021-01-12 09:12:56.519 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"172.19.248.1":set.empty{}} ifaceName="br3_4092"
2021-01-12 09:12:56.519 [INFO][60] felix/config_params.go 430: Parsing value for TyphaCertFile: /felix-certs/cert.crt (from environment variable)
2021-01-12 09:12:56.519 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=24 ifaceName="eth1.4093" state="up"
2021-01-12 09:12:56.519 [INFO][60] felix/param_types.go 279: Looking for required file path="/felix-certs/cert.crt"
2021-01-12 09:12:56.519 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"br3_4092", Addrs:set.mapSet{"172.19.248.1":set.empty{}}}
2021-01-12 09:12:56.519 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"br3_4092", Addrs:set.mapSet{"172.19.248.1":set.empty{}}}
2021-01-12 09:12:56.519 [INFO][60] felix/config_params.go 466: Parsed value for TyphaCertFile: /felix-certs/cert.crt (from environment variable)
2021-01-12 09:12:56.519 [INFO][60] felix/config_params.go 430: Parsing value for DatastoreType: kubernetes (from environment variable)
2021-01-12 09:12:56.519 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.519 [INFO][60] felix/config_params.go 466: Parsed value for DatastoreType: kubernetes (from environment variable)
2021-01-12 09:12:56.520 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"eth1.4093", State:"up", Index:24}
2021-01-12 09:12:56.520 [INFO][60] felix/config_params.go 430: Parsing value for IptablesBackend: auto (from environment variable)
2021-01-12 09:12:56.520 [INFO][60] felix/config_params.go 466: Parsed value for IptablesBackend: auto (from environment variable)
2021-01-12 09:12:56.520 [INFO][60] felix/config_params.go 430: Parsing value for TyphaKeyFile: /felix-certs/key.key (from environment variable)
2021-01-12 09:12:56.520 [INFO][60] felix/param_types.go 279: Looking for required file path="/felix-certs/key.key"
2021-01-12 09:12:56.520 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="eth1.4093"
2021-01-12 09:12:56.520 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=25 ifaceName="br3_4093" state="up"
2021-01-12 09:12:56.520 [INFO][60] felix/config_params.go 466: Parsed value for TyphaKeyFile: /felix-certs/key.key (from environment variable)
2021-01-12 09:12:56.521 [INFO][60] felix/config_params.go 430: Parsing value for DefaultEndpointToHostAction: ACCEPT (from environment variable)
2021-01-12 09:12:56.521 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1.4093", Addrs:set.mapSet{}}
2021-01-12 09:12:56.521 [INFO][60] felix/config_params.go 466: Parsed value for DefaultEndpointToHostAction: ACCEPT (from environment variable)
2021-01-12 09:12:56.521 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1.4093", Addrs:set.mapSet{}}
2021-01-12 09:12:56.521 [INFO][60] felix/config_params.go 430: Parsing value for HealthEnabled: true (from environment variable)
2021-01-12 09:12:56.521 [INFO][60] felix/config_params.go 466: Parsed value for HealthEnabled: true (from environment variable)
2021-01-12 09:12:56.521 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.521 [INFO][60] felix/config_params.go 430: Parsing value for TyphaK8sServiceName: calico-typha (from environment variable)
2021-01-12 09:12:56.521 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"br3_4093", State:"up", Index:25}
2021-01-12 09:12:56.521 [INFO][60] felix/config_params.go 466: Parsed value for TyphaK8sServiceName: calico-typha (from environment variable)
2021-01-12 09:12:56.521 [INFO][60] felix/config_params.go 430: Parsing value for TyphaCN: typha-server (from environment variable)
2021-01-12 09:12:56.521 [INFO][60] felix/config_params.go 466: Parsed value for TyphaCN: typha-server (from environment variable)
2021-01-12 09:12:56.521 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"172.19.255.1":set.empty{}} ifaceName="br3_4093"
2021-01-12 09:12:56.521 [INFO][60] felix/config_params.go 430: Parsing value for Ipv6Support: false (from environment variable)
2021-01-12 09:12:56.521 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=26 ifaceName="eth1.4094" state="up"
2021-01-12 09:12:56.521 [INFO][60] felix/config_params.go 466: Parsed value for Ipv6Support: false (from environment variable)
2021-01-12 09:12:56.521 [INFO][60] felix/config_params.go 430: Parsing value for FelixHostname: nc1 (from environment variable)
2021-01-12 09:12:56.521 [INFO][60] felix/config_params.go 466: Parsed value for FelixHostname: nc1 (from environment variable)
2021-01-12 09:12:56.521 [INFO][60] felix/config_params.go 430: Parsing value for TyphaCAFile: /typha-ca/caBundle (from environment variable)
2021-01-12 09:12:56.521 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"br3_4093", Addrs:set.mapSet{"172.19.255.1":set.empty{}}}
2021-01-12 09:12:56.522 [INFO][60] felix/param_types.go 279: Looking for required file path="/typha-ca/caBundle"
2021-01-12 09:12:56.522 [INFO][60] felix/config_params.go 466: Parsed value for TyphaCAFile: /typha-ca/caBundle (from environment variable)
2021-01-12 09:12:56.522 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"br3_4093", Addrs:set.mapSet{"172.19.255.1":set.empty{}}}
2021-01-12 09:12:56.522 [INFO][60] felix/config_params.go 430: Parsing value for TyphaK8sNamespace: calico-system (from environment variable)
2021-01-12 09:12:56.522 [INFO][60] felix/config_params.go 466: Parsed value for TyphaK8sNamespace: calico-system (from environment variable)
2021-01-12 09:12:56.522 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.522 [INFO][60] felix/config_params.go 430: Parsing value for LogSeveritySys: None (from config file)
2021-01-12 09:12:56.522 [INFO][60] felix/config_params.go 447: Value set to 'none', replacing with zero-value: "".
2021-01-12 09:12:56.522 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"eth1.4094", State:"up", Index:26}
2021-01-12 09:12:56.523 [INFO][60] felix/config_params.go 466: Parsed value for LogSeveritySys:  (from config file)
2021-01-12 09:12:56.523 [INFO][60] felix/config_params.go 430: Parsing value for MetadataAddr: None (from config file)
2021-01-12 09:12:56.523 [INFO][60] felix/config_params.go 447: Value set to 'none', replacing with zero-value: "".
2021-01-12 09:12:56.523 [INFO][60] felix/config_params.go 466: Parsed value for MetadataAddr:  (from config file)
2021-01-12 09:12:56.523 [INFO][60] felix/config_params.go 430: Parsing value for LogFilePath: None (from config file)
2021-01-12 09:12:56.523 [INFO][60] felix/config_params.go 447: Value set to 'none', replacing with zero-value: "".
2021-01-12 09:12:56.523 [INFO][60] felix/config_params.go 466: Parsed value for LogFilePath:  (from config file)
2021-01-12 09:12:56.523 [INFO][60] felix/config_params.go 430: Parsing value for LogSeverityFile: None (from config file)
2021-01-12 09:12:56.523 [INFO][60] felix/config_params.go 447: Value set to 'none', replacing with zero-value: "".
2021-01-12 09:12:56.523 [INFO][60] felix/config_params.go 466: Parsed value for LogSeverityFile:  (from config file)
2021-01-12 09:12:56.524 [INFO][60] felix/config_params.go 430: Parsing value for IPv4VXLANTunnelAddr: 20.28.116.64 (from datastore (per-host))
2021-01-12 09:12:56.524 [INFO][60] felix/config_params.go 466: Parsed value for IPv4VXLANTunnelAddr: 20.28.116.64 (from datastore (per-host))
2021-01-12 09:12:56.524 [INFO][60] felix/config_params.go 430: Parsing value for CalicoVersion: v3.17.1 (from datastore (global))
2021-01-12 09:12:56.524 [INFO][60] felix/config_params.go 466: Parsed value for CalicoVersion: v3.17.1 (from datastore (global))
2021-01-12 09:12:56.524 [INFO][60] felix/config_params.go 430: Parsing value for VXLANEnabled: true (from datastore (global))
2021-01-12 09:12:56.524 [INFO][60] felix/config_params.go 466: Parsed value for VXLANEnabled: true (from datastore (global))
2021-01-12 09:12:56.524 [INFO][60] felix/config_params.go 430: Parsing value for ReportingIntervalSecs: 0 (from datastore (global))
2021-01-12 09:12:56.524 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"172.29.15.155":set.empty{}, "172.29.15.158":set.empty{}} ifaceName="eth1.4094"
2021-01-12 09:12:56.524 [INFO][60] felix/config_params.go 466: Parsed value for ReportingIntervalSecs: 0s (from datastore (global))
2021-01-12 09:12:56.525 [INFO][60] felix/config_params.go 430: Parsing value for LogSeverityScreen: Info (from datastore (global))
2021-01-12 09:12:56.525 [INFO][60] felix/config_params.go 466: Parsed value for LogSeverityScreen: INFO (from datastore (global))
2021-01-12 09:12:56.525 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1.4094", Addrs:set.mapSet{"172.29.15.155":set.empty{}, "172.29.15.158":set.empty{}}}
2021-01-12 09:12:56.525 [INFO][60] felix/config_params.go 430: Parsing value for ClusterGUID: 6c70b42e3f974469ac2efe1bfa298127 (from datastore (global))
2021-01-12 09:12:56.525 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=27 ifaceName="brMgmt" state="up"
2021-01-12 09:12:56.525 [INFO][60] felix/config_params.go 466: Parsed value for ClusterGUID: 6c70b42e3f974469ac2efe1bfa298127 (from datastore (global))
2021-01-12 09:12:56.525 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1.4094", Addrs:set.mapSet{"172.29.15.155":set.empty{}, "172.29.15.158":set.empty{}}}
2021-01-12 09:12:56.525 [INFO][60] felix/config_params.go 430: Parsing value for ClusterType: typha,kdd,k8s,operator,bgp (from datastore (global))
2021-01-12 09:12:56.525 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.525 [INFO][60] felix/config_params.go 466: Parsed value for ClusterType: typha,kdd,k8s,operator,bgp (from datastore (global))
2021-01-12 09:12:56.525 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"brMgmt", State:"up", Index:27}
2021-01-12 09:12:56.525 [INFO][60] felix/config_params.go 341: Merging in config from datastore (per-host): map[IPv4VXLANTunnelAddr:20.28.116.64]
2021-01-12 09:12:56.526 [INFO][60] felix/config_params.go 430: Parsing value for TyphaCN: typha-server (from environment variable)
2021-01-12 09:12:56.526 [INFO][60] felix/config_params.go 466: Parsed value for TyphaCN: typha-server (from environment variable)
2021-01-12 09:12:56.526 [INFO][60] felix/config_params.go 430: Parsing value for Ipv6Support: false (from environment variable)
2021-01-12 09:12:56.526 [INFO][60] felix/config_params.go 466: Parsed value for Ipv6Support: false (from environment variable)
2021-01-12 09:12:56.526 [INFO][60] felix/config_params.go 430: Parsing value for FelixHostname: nc1 (from environment variable)
2021-01-12 09:12:56.526 [INFO][60] felix/config_params.go 466: Parsed value for FelixHostname: nc1 (from environment variable)
2021-01-12 09:12:56.526 [INFO][60] felix/config_params.go 430: Parsing value for TyphaCAFile: /typha-ca/caBundle (from environment variable)
2021-01-12 09:12:56.526 [INFO][60] felix/param_types.go 279: Looking for required file path="/typha-ca/caBundle"
2021-01-12 09:12:56.526 [INFO][60] felix/config_params.go 466: Parsed value for TyphaCAFile: /typha-ca/caBundle (from environment variable)
2021-01-12 09:12:56.527 [INFO][60] felix/config_params.go 430: Parsing value for TyphaK8sNamespace: calico-system (from environment variable)
2021-01-12 09:12:56.527 [INFO][60] felix/config_params.go 466: Parsed value for TyphaK8sNamespace: calico-system (from environment variable)
2021-01-12 09:12:56.527 [INFO][60] felix/config_params.go 430: Parsing value for HealthEnabled: true (from environment variable)
2021-01-12 09:12:56.527 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"172.20.128.2":set.empty{}} ifaceName="brMgmt"
2021-01-12 09:12:56.527 [INFO][60] felix/config_params.go 466: Parsed value for HealthEnabled: true (from environment variable)
2021-01-12 09:12:56.527 [INFO][60] felix/config_params.go 430: Parsing value for TyphaK8sServiceName: calico-typha (from environment variable)
2021-01-12 09:12:56.527 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=28 ifaceName="eth1.401" state="up"
2021-01-12 09:12:56.527 [INFO][60] felix/config_params.go 466: Parsed value for TyphaK8sServiceName: calico-typha (from environment variable)
2021-01-12 09:12:56.527 [INFO][60] felix/config_params.go 430: Parsing value for DatastoreType: kubernetes (from environment variable)
2021-01-12 09:12:56.527 [INFO][60] felix/config_params.go 466: Parsed value for DatastoreType: kubernetes (from environment variable)
2021-01-12 09:12:56.527 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"brMgmt", Addrs:set.mapSet{"172.20.128.2":set.empty{}}}
2021-01-12 09:12:56.527 [INFO][60] felix/config_params.go 430: Parsing value for IptablesBackend: auto (from environment variable)
2021-01-12 09:12:56.527 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"brMgmt", Addrs:set.mapSet{"172.20.128.2":set.empty{}}}
2021-01-12 09:12:56.527 [INFO][60] felix/config_params.go 466: Parsed value for IptablesBackend: auto (from environment variable)
2021-01-12 09:12:56.527 [INFO][60] felix/config_params.go 430: Parsing value for TyphaKeyFile: /felix-certs/key.key (from environment variable)
2021-01-12 09:12:56.527 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.527 [INFO][60] felix/param_types.go 279: Looking for required file path="/felix-certs/key.key"
2021-01-12 09:12:56.527 [INFO][60] felix/config_params.go 466: Parsed value for TyphaKeyFile: /felix-certs/key.key (from environment variable)
2021-01-12 09:12:56.527 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"eth1.401", State:"up", Index:28}
2021-01-12 09:12:56.527 [INFO][60] felix/config_params.go 430: Parsing value for DefaultEndpointToHostAction: ACCEPT (from environment variable)
2021-01-12 09:12:56.527 [INFO][60] felix/config_params.go 466: Parsed value for DefaultEndpointToHostAction: ACCEPT (from environment variable)
2021-01-12 09:12:56.527 [INFO][60] felix/config_params.go 430: Parsing value for TyphaCertFile: /felix-certs/cert.crt (from environment variable)
2021-01-12 09:12:56.527 [INFO][60] felix/param_types.go 279: Looking for required file path="/felix-certs/cert.crt"
2021-01-12 09:12:56.527 [INFO][60] felix/config_params.go 466: Parsed value for TyphaCertFile: /felix-certs/cert.crt (from environment variable)
2021-01-12 09:12:56.528 [INFO][60] felix/config_params.go 430: Parsing value for LogSeveritySys: None (from config file)
2021-01-12 09:12:56.528 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"172.22.100.2":set.empty{}} ifaceName="eth1.401"
2021-01-12 09:12:56.528 [INFO][60] felix/config_params.go 447: Value set to 'none', replacing with zero-value: "".
2021-01-12 09:12:56.528 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=29 ifaceName="eth1.70" state="up"
2021-01-12 09:12:56.528 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1.401", Addrs:set.mapSet{"172.22.100.2":set.empty{}}}
2021-01-12 09:12:56.528 [INFO][60] felix/config_params.go 466: Parsed value for LogSeveritySys:  (from config file)
2021-01-12 09:12:56.528 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1.401", Addrs:set.mapSet{"172.22.100.2":set.empty{}}}
2021-01-12 09:12:56.528 [INFO][60] felix/config_params.go 430: Parsing value for MetadataAddr: None (from config file)
2021-01-12 09:12:56.528 [INFO][60] felix/config_params.go 447: Value set to 'none', replacing with zero-value: "".
2021-01-12 09:12:56.528 [INFO][60] felix/config_params.go 466: Parsed value for MetadataAddr:  (from config file)
2021-01-12 09:12:56.528 [INFO][60] felix/config_params.go 430: Parsing value for LogFilePath: None (from config file)
2021-01-12 09:12:56.528 [INFO][60] felix/config_params.go 447: Value set to 'none', replacing with zero-value: "".
2021-01-12 09:12:56.528 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.529 [INFO][60] felix/config_params.go 466: Parsed value for LogFilePath:  (from config file)
2021-01-12 09:12:56.529 [INFO][60] felix/config_params.go 430: Parsing value for LogSeverityFile: None (from config file)
2021-01-12 09:12:56.529 [INFO][60] felix/config_params.go 447: Value set to 'none', replacing with zero-value: "".
2021-01-12 09:12:56.529 [INFO][60] felix/config_params.go 466: Parsed value for LogSeverityFile:  (from config file)
2021-01-12 09:12:56.529 [INFO][60] felix/config_params.go 430: Parsing value for IPv4VXLANTunnelAddr: 20.28.116.64 (from datastore (per-host))
2021-01-12 09:12:56.529 [INFO][60] felix/config_params.go 466: Parsed value for IPv4VXLANTunnelAddr: 20.28.116.64 (from datastore (per-host))
2021-01-12 09:12:56.529 [INFO][60] felix/config_params.go 430: Parsing value for CalicoVersion: v3.17.1 (from datastore (global))
2021-01-12 09:12:56.529 [INFO][60] felix/config_params.go 466: Parsed value for CalicoVersion: v3.17.1 (from datastore (global))
2021-01-12 09:12:56.529 [INFO][60] felix/config_params.go 430: Parsing value for VXLANEnabled: true (from datastore (global))
2021-01-12 09:12:56.530 [INFO][60] felix/config_params.go 466: Parsed value for VXLANEnabled: true (from datastore (global))
2021-01-12 09:12:56.529 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"eth1.70", State:"up", Index:29}
2021-01-12 09:12:56.530 [INFO][60] felix/config_params.go 430: Parsing value for ReportingIntervalSecs: 0 (from datastore (global))
2021-01-12 09:12:56.530 [INFO][60] felix/config_params.go 466: Parsed value for ReportingIntervalSecs: 0s (from datastore (global))
2021-01-12 09:12:56.530 [INFO][60] felix/config_params.go 430: Parsing value for LogSeverityScreen: Info (from datastore (global))
2021-01-12 09:12:56.530 [INFO][60] felix/config_params.go 466: Parsed value for LogSeverityScreen: INFO (from datastore (global))
2021-01-12 09:12:56.530 [INFO][60] felix/config_params.go 430: Parsing value for ClusterGUID: 6c70b42e3f974469ac2efe1bfa298127 (from datastore (global))
2021-01-12 09:12:56.530 [INFO][60] felix/config_params.go 466: Parsed value for ClusterGUID: 6c70b42e3f974469ac2efe1bfa298127 (from datastore (global))
2021-01-12 09:12:56.531 [INFO][60] felix/config_params.go 430: Parsing value for ClusterType: typha,kdd,k8s,operator,bgp (from datastore (global))
2021-01-12 09:12:56.531 [INFO][60] felix/config_params.go 466: Parsed value for ClusterType: typha,kdd,k8s,operator,bgp (from datastore (global))
2021-01-12 09:12:56.531 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"172.16.32.16":set.empty{}} ifaceName="eth1.70"
2021-01-12 09:12:56.531 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=30 ifaceName="eth1.15" state="up"
2021-01-12 09:12:56.531 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1.70", Addrs:set.mapSet{"172.16.32.16":set.empty{}}}
2021-01-12 09:12:56.531 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1.70", Addrs:set.mapSet{"172.16.32.16":set.empty{}}}
2021-01-12 09:12:56.532 [INFO][60] felix/async_calc_graph.go 220: First flush after becoming in sync, sending InSync message.
2021-01-12 09:12:56.532 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.532 [INFO][60] felix/daemon.go 1122: Datastore now in sync.
2021-01-12 09:12:56.532 [INFO][60] felix/daemon.go 1124: Datastore in sync for first time, sending message to status reporter.
2021-01-12 09:12:56.532 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ActiveProfileUpdate update from calculation graph msg=id:<name:"kns.kube-system" > profile:<inbound_rules:<action:"allow" rule_id:"YlN-KJA93B7Fe2ih" > outbound_rules:<action:"allow" rule_id:"GUM3x2IgYhIM5cBY" > >
2021-01-12 09:12:56.533 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-pri-kns.kube-system" ipVersion=0x4 table="filter"
2021-01-12 09:12:56.533 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-pro-kns.kube-system" ipVersion=0x4 table="filter"
2021-01-12 09:12:56.533 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"172.17.254.1":set.empty{}} ifaceName="eth1.15"
2021-01-12 09:12:56.533 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ActiveProfileUpdate update from calculation graph msg=id:<name:"ksa.kube-system.metrics-server" > profile:<>
2021-01-12 09:12:56.533 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=31 ifaceName="eth1.38" state="up"
2021-01-12 09:12:56.533 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-pri-_CVSZITRyIpEmH8AB6H" ipVersion=0x4 table="filter"
2021-01-12 09:12:56.533 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-pro-_CVSZITRyIpEmH8AB6H" ipVersion=0x4 table="filter"
2021-01-12 09:12:56.533 [INFO][60] felix/int_dataplane.go 1325: Received *proto.WorkloadEndpointUpdate update from calculation graph msg=id:<orchestrator_id:"k8s" workload_id:"kube-system/metrics-server-7566d596c8-nsxk6" endpoint_id:"eth0" > endpoint:<state:"active" name:"cali11cc79557a4" profile_ids:"kns.kube-system" profile_ids:"ksa.kube-system.metrics-server" ipv4_nets:"20.28.116.65/32" >
2021-01-12 09:12:56.533 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"resourcequota-controller" > labels:<key:"projectcalico.org/name" value:"resourcequota-controller" >
2021-01-12 09:12:56.533 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"metrics-server" > labels:<key:"objectset.rio.cattle.io/hash" value:"e10e245e13e46a725c9dddd4f9eb239f147774fd" > labels:<key:"projectcalico.org/name" value:"metrics-server" >
2021-01-12 09:12:56.533 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"tigera-operator" name:"tigera-operator" > labels:<key:"projectcalico.org/name" value:"tigera-operator" >
2021-01-12 09:12:56.533 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"coredns" > labels:<key:"objectset.rio.cattle.io/hash" value:"bce283298811743a0386ab510f2f67ef74240c57" > labels:<key:"projectcalico.org/name" value:"coredns" >
2021-01-12 09:12:56.533 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"deployment-controller" > labels:<key:"projectcalico.org/name" value:"deployment-controller" >
2021-01-12 09:12:56.534 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-node-lease" name:"default" > labels:<key:"projectcalico.org/name" value:"default" >
2021-01-12 09:12:56.534 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"disruption-controller" > labels:<key:"projectcalico.org/name" value:"disruption-controller" >
2021-01-12 09:12:56.534 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"192.168.15.6":set.empty{}} ifaceName="eth1.38"
2021-01-12 09:12:56.534 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=32 ifaceName="eth1.403" state="up"
2021-01-12 09:12:56.534 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"calico-system" name:"default" > labels:<key:"projectcalico.org/name" value:"default" >
2021-01-12 09:12:56.534 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"horizontal-pod-autoscaler" > labels:<key:"projectcalico.org/name" value:"horizontal-pod-autoscaler" >
2021-01-12 09:12:56.535 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"calico-system" name:"calico-node" > labels:<key:"projectcalico.org/name" value:"calico-node" >
2021-01-12 09:12:56.536 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"172.27.40.11":set.empty{}} ifaceName="eth1.403"
2021-01-12 09:12:56.536 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"certificate-controller" > labels:<key:"projectcalico.org/name" value:"certificate-controller" >
2021-01-12 09:12:56.536 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=33 ifaceName="tap0" state="up"
2021-01-12 09:12:56.536 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"namespace-controller" > labels:<key:"projectcalico.org/name" value:"namespace-controller" >
2021-01-12 09:12:56.536 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"endpointslice-controller" > labels:<key:"projectcalico.org/name" value:"endpointslice-controller" >
2021-01-12 09:12:56.536 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"replicaset-controller" > labels:<key:"projectcalico.org/name" value:"replicaset-controller" >
2021-01-12 09:12:56.537 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"pvc-protection-controller" > labels:<key:"projectcalico.org/name" value:"pvc-protection-controller" >
2021-01-12 09:12:56.537 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"default" name:"default" > labels:<key:"projectcalico.org/name" value:"default" >
2021-01-12 09:12:56.537 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"tigera-operator" name:"default" > labels:<key:"projectcalico.org/name" value:"default" >
2021-01-12 09:12:56.537 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"job-controller" > labels:<key:"projectcalico.org/name" value:"job-controller" >
2021-01-12 09:12:56.537 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="tap0"
2021-01-12 09:12:56.538 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=34 ifaceName="tap1" state="up"
2021-01-12 09:12:56.538 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"service-account-controller" > labels:<key:"projectcalico.org/name" value:"service-account-controller" >
2021-01-12 09:12:56.538 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"generic-garbage-collector" > labels:<key:"projectcalico.org/name" value:"generic-garbage-collector" >
2021-01-12 09:12:56.538 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"persistent-volume-binder" > labels:<key:"projectcalico.org/name" value:"persistent-volume-binder" >
2021-01-12 09:12:56.538 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"statefulset-controller" > labels:<key:"projectcalico.org/name" value:"statefulset-controller" >
2021-01-12 09:12:56.538 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"cronjob-controller" > labels:<key:"projectcalico.org/name" value:"cronjob-controller" >
2021-01-12 09:12:56.538 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"node-controller" > labels:<key:"projectcalico.org/name" value:"node-controller" >
2021-01-12 09:12:56.538 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"replication-controller" > labels:<key:"projectcalico.org/name" value:"replication-controller" >
2021-01-12 09:12:56.538 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"service-controller" > labels:<key:"projectcalico.org/name" value:"service-controller" >
2021-01-12 09:12:56.538 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-public" name:"default" > labels:<key:"projectcalico.org/name" value:"default" >
2021-01-12 09:12:56.538 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"pv-protection-controller" > labels:<key:"projectcalico.org/name" value:"pv-protection-controller" >
2021-01-12 09:12:56.538 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="tap1"
2021-01-12 09:12:56.538 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=35 ifaceName="tap2" state="up"
2021-01-12 09:12:56.538 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"clusterrole-aggregation-controller" > labels:<key:"projectcalico.org/name" value:"clusterrole-aggregation-controller" >
2021-01-12 09:12:56.538 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"endpoint-controller" > labels:<key:"projectcalico.org/name" value:"endpoint-controller" >
2021-01-12 09:12:56.538 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"pod-garbage-collector" > labels:<key:"projectcalico.org/name" value:"pod-garbage-collector" >
2021-01-12 09:12:56.539 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"expand-controller" > labels:<key:"projectcalico.org/name" value:"expand-controller" >
2021-01-12 09:12:56.539 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"calico-system" name:"calico-kube-controllers" > labels:<key:"projectcalico.org/name" value:"calico-kube-controllers" >
2021-01-12 09:12:56.539 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"daemon-set-controller" > labels:<key:"projectcalico.org/name" value:"daemon-set-controller" >
2021-01-12 09:12:56.539 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"local-path-provisioner-service-account" > labels:<key:"objectset.rio.cattle.io/hash" value:"183f35c65ffbc3064603f43f1580d8c68a2dabd4" > labels:<key:"projectcalico.org/name" value:"local-path-provisioner-service-account" >
2021-01-12 09:12:56.539 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="tap2"
2021-01-12 09:12:56.539 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"attachdetach-controller" > labels:<key:"projectcalico.org/name" value:"attachdetach-controller" >
2021-01-12 09:12:56.539 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=36 ifaceName="tap3" state="up"
2021-01-12 09:12:56.539 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"calico-system" name:"calico-typha" > labels:<key:"projectcalico.org/name" value:"calico-typha" >
2021-01-12 09:12:56.539 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"ttl-controller" > labels:<key:"projectcalico.org/name" value:"ttl-controller" >
2021-01-12 09:12:56.539 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"default" > labels:<key:"projectcalico.org/name" value:"default" >
2021-01-12 09:12:56.539 [INFO][60] felix/int_dataplane.go 1325: Received *proto.NamespaceUpdate update from calculation graph msg=id:<name:"kube-system" > labels:<key:"projectcalico.org/name" value:"kube-system" >
2021-01-12 09:12:56.539 [INFO][60] felix/int_dataplane.go 1325: Received *proto.NamespaceUpdate update from calculation graph msg=id:<name:"kube-node-lease" > labels:<key:"projectcalico.org/name" value:"kube-node-lease" >
2021-01-12 09:12:56.540 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="tap3"
2021-01-12 09:12:56.540 [INFO][60] felix/int_dataplane.go 1325: Received *proto.NamespaceUpdate update from calculation graph msg=id:<name:"calico-system" > labels:<key:"name" value:"calico-system" > labels:<key:"projectcalico.org/name" value:"calico-system" >
2021-01-12 09:12:56.540 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=37 ifaceName="tap4" state="up"
2021-01-12 09:12:56.540 [INFO][60] felix/int_dataplane.go 1325: Received *proto.NamespaceUpdate update from calculation graph msg=id:<name:"tigera-operator" > labels:<key:"name" value:"tigera-operator" > labels:<key:"projectcalico.org/name" value:"tigera-operator" >
2021-01-12 09:12:56.540 [INFO][60] felix/int_dataplane.go 1325: Received *proto.NamespaceUpdate update from calculation graph msg=id:<name:"kube-public" > labels:<key:"projectcalico.org/name" value:"kube-public" >
2021-01-12 09:12:56.540 [INFO][60] felix/int_dataplane.go 1325: Received *proto.NamespaceUpdate update from calculation graph msg=id:<name:"default" > labels:<key:"projectcalico.org/name" value:"default" >
2021-01-12 09:12:56.540 [INFO][60] felix/int_dataplane.go 1325: Received *proto.VXLANTunnelEndpointUpdate update from calculation graph msg=node:"nc1" mac:"66:14:b1:49:a0:bb" ipv4_addr:"20.28.116.64" parent_device_ip:"172.27.40.11"
2021-01-12 09:12:56.540 [INFO][60] felix/int_dataplane.go 1325: Received *proto.RouteUpdate update from calculation graph msg=ip_pool_type:VXLAN dst:"20.28.0.0/16" nat_outgoing:true
2021-01-12 09:12:56.541 [INFO][60] felix/int_dataplane.go 1325: Received *proto.RouteUpdate update from calculation graph msg=type:LOCAL_WORKLOAD ip_pool_type:VXLAN dst:"20.28.116.65/32" dst_node_name:"nc1" dst_node_ip:"172.27.40.11" same_subnet:true nat_outgoing:true local_workload:true
2021-01-12 09:12:56.541 [INFO][60] felix/int_dataplane.go 1325: Received *proto.RouteUpdate update from calculation graph msg=type:LOCAL_WORKLOAD ip_pool_type:VXLAN dst:"20.28.116.64/26" dst_node_name:"nc1" dst_node_ip:"172.27.40.11" same_subnet:true nat_outgoing:true
2021-01-12 09:12:56.541 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="tap4"
2021-01-12 09:12:56.541 [INFO][60] felix/int_dataplane.go 1325: Received *proto.RouteUpdate update from calculation graph msg=type:LOCAL_HOST dst:"172.17.0.32/32" dst_node_name:"nc1" dst_node_ip:"172.27.40.11"
2021-01-12 09:12:56.541 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=38 ifaceName="tap5" state="up"
2021-01-12 09:12:56.542 [INFO][60] felix/int_dataplane.go 1325: Received *proto.RouteUpdate update from calculation graph msg=type:LOCAL_TUNNEL ip_pool_type:VXLAN dst:"20.28.116.64/32" dst_node_name:"nc1" dst_node_ip:"172.27.40.11" same_subnet:true nat_outgoing:true tunnel_type:<vxlan:true >
2021-01-12 09:12:56.542 [INFO][60] felix/int_dataplane.go 1325: Received *proto.RouteUpdate update from calculation graph msg=type:LOCAL_HOST dst:"172.27.40.11/32" dst_node_name:"nc1" dst_node_ip:"172.27.40.11"
2021-01-12 09:12:56.542 [INFO][60] felix/int_dataplane.go 1325: Received *proto.HostMetadataUpdate update from calculation graph msg=hostname:"nc1" ipv4_addr:"172.27.40.11"
2021-01-12 09:12:56.542 [INFO][60] felix/int_dataplane.go 1325: Received *proto.IPAMPoolUpdate update from calculation graph msg=id:"20.28.0.0-16" pool:<cidr:"20.28.0.0/16" masquerade:true >
2021-01-12 09:12:56.543 [INFO][60] felix/int_dataplane.go 1325: Received *proto.InSync update from calculation graph msg=
2021-01-12 09:12:56.544 [INFO][60] felix/int_dataplane.go 1333: Datastore in sync, flushing the dataplane for the first time... timeSinceStart=169.991911ms
2021-01-12 09:12:56.544 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:12:56.544 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="all-vxlan-net" setType="hash:net"
2021-01-12 09:12:56.545 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="tap5"
2021-01-12 09:12:56.545 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=39 ifaceName="tap6" state="up"
2021-01-12 09:12:56.545 [INFO][60] felix/endpoint_mgr.go 561: Updating per-endpoint chains. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/metrics-server-7566d596c8-nsxk6", EndpointId:"eth0"}
2021-01-12 09:12:56.545 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-tw-cali11cc79557a4" ipVersion=0x4 table="filter"
2021-01-12 09:12:56.545 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-pri-kns.kube-system"
2021-01-12 09:12:56.545 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-pri-_CVSZITRyIpEmH8AB6H"
2021-01-12 09:12:56.545 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-fw-cali11cc79557a4" ipVersion=0x4 table="filter"
2021-01-12 09:12:56.545 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-pro-kns.kube-system"
2021-01-12 09:12:56.545 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-pro-_CVSZITRyIpEmH8AB6H"
2021-01-12 09:12:56.546 [INFO][60] felix/endpoint_mgr.go 592: Updating endpoint routes. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/metrics-server-7566d596c8-nsxk6", EndpointId:"eth0"}
2021-01-12 09:12:56.546 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="tap6"
2021-01-12 09:12:56.546 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=40 ifaceName="tap7" state="up"
2021-01-12 09:12:56.546 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-from-wl-dispatch" ipVersion=0x4 table="filter"
2021-01-12 09:12:56.546 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-fw-cali11cc79557a4"
2021-01-12 09:12:56.546 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-to-wl-dispatch" ipVersion=0x4 table="filter"
2021-01-12 09:12:56.546 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-tw-cali11cc79557a4"
2021-01-12 09:12:56.546 [INFO][60] felix/endpoint_mgr.go 1072: Skipping configuration of interface because it is oper down. ifaceName="cali11cc79557a4"
2021-01-12 09:12:56.546 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-from-host-endpoint" ipVersion=0x4 table="filter"
2021-01-12 09:12:56.546 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="tap7"
2021-01-12 09:12:56.546 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-to-host-endpoint" ipVersion=0x4 table="filter"
2021-01-12 09:12:56.546 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=41 ifaceName="tap8" state="up"
2021-01-12 09:12:56.546 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-from-hep-forward" ipVersion=0x4 table="filter"
2021-01-12 09:12:56.547 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-to-hep-forward" ipVersion=0x4 table="filter"
2021-01-12 09:12:56.547 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-from-host-endpoint" ipVersion=0x4 table="mangle"
2021-01-12 09:12:56.547 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-from-host-endpoint" ipVersion=0x4 table="raw"
2021-01-12 09:12:56.547 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-to-host-endpoint" ipVersion=0x4 table="raw"
2021-01-12 09:12:56.547 [INFO][60] felix/endpoint_mgr.go 454: Re-evaluated workload endpoint status adminUp=true failed=false known=true operUp=false status="down" workloadEndpointID=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/metrics-server-7566d596c8-nsxk6", EndpointId:"eth0"}
2021-01-12 09:12:56.547 [INFO][60] felix/status_combiner.go 58: Storing endpoint status update ipVersion=0x4 status="down" workload=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/metrics-server-7566d596c8-nsxk6", EndpointId:"eth0"}
2021-01-12 09:12:56.547 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-fip-dnat" ipVersion=0x4 table="nat"
2021-01-12 09:12:56.547 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-fip-snat" ipVersion=0x4 table="nat"
2021-01-12 09:12:56.547 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="tap8"
2021-01-12 09:12:56.547 [INFO][60] felix/masq_mgr.go 144: IPAM pools updated, refreshing iptables rule ipVersion=0x4
2021-01-12 09:12:56.547 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=42 ifaceName="tap9" state="up"
2021-01-12 09:12:56.547 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-nat-outgoing" ipVersion=0x4 table="nat"
2021-01-12 09:12:56.547 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-cidr-block" ipVersion=0x4 table="filter"
2021-01-12 09:12:56.548 [INFO][60] felix/wireguard.go 1593: Trying to connect to linkClient
2021-01-12 09:12:56.548 [INFO][60] felix/route_table.go 408: Trying to connect to netlink
2021-01-12 09:12:56.548 [INFO][60] felix/route_table.go 408: Trying to connect to netlink
2021-01-12 09:12:56.549 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="tap9"
2021-01-12 09:12:56.549 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:12:56.549 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=43 ifaceName="tap10" state="up"
2021-01-12 09:12:56.550 [INFO][60] felix/route_table.go 408: Trying to connect to netlink
2021-01-12 09:12:56.550 [INFO][60] felix/route_rule.go 182: Trying to connect to netlink
2021-01-12 09:12:56.551 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="tap10"
2021-01-12 09:12:56.551 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=44 ifaceName="tap11" state="up"
2021-01-12 09:12:56.553 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=3.984815ms
2021-01-12 09:12:56.553 [INFO][60] felix/route_table.go 1085: Failed to access interface because it doesn't exist. error=Link not found ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:56.553 [INFO][60] felix/route_table.go 1153: Failed to get interface; it's down/gone. error=Link not found ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:56.553 [ERROR][60] felix/route_table.go 920: Failed to get link attributes error=interface not present ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:56.553 [INFO][60] felix/route_table.go 527: Interface missing, will retry if it appears. ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:56.554 [INFO][60] felix/ipsets.go 749: Doing full IP set rewrite family="inet" numMembersInPendingReplace=1 setID="all-ipam-pools"
2021-01-12 09:12:56.554 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="tap11"
2021-01-12 09:12:56.554 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=51 ifaceName="cali11cc79557a4" state="up"
2021-01-12 09:12:56.554 [INFO][60] felix/ipsets.go 749: Doing full IP set rewrite family="inet" numMembersInPendingReplace=1 setID="masq-ipam-pools"
2021-01-12 09:12:56.555 [INFO][60] felix/ipsets.go 749: Doing full IP set rewrite family="inet" numMembersInPendingReplace=20 setID="this-host"
2021-01-12 09:12:56.556 [INFO][60] felix/wireguard.go 558: Public key out of sync or updated ourPublicKey=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
2021-01-12 09:12:56.556 [INFO][60] felix/ipsets.go 749: Doing full IP set rewrite family="inet" numMembersInPendingReplace=0 setID="all-vxlan-net"
2021-01-12 09:12:56.556 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="cali11cc79557a4"
2021-01-12 09:12:56.583 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="raw"
2021-01-12 09:12:56.584 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="nat"
2021-01-12 09:12:56.587 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="mangle"
2021-01-12 09:12:56.592 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="filter"
2021-01-12 09:12:56.697 [INFO][60] felix/status_combiner.go 78: Endpoint down for at least one IP version id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/metrics-server-7566d596c8-nsxk6", EndpointId:"eth0"} ipVersion=0x4 status="down"
2021-01-12 09:12:56.697 [INFO][60] felix/status_combiner.go 98: Reporting combined status. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/metrics-server-7566d596c8-nsxk6", EndpointId:"eth0"} status="down"
2021-01-12 09:12:56.697 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=152.64042899999998
2021-01-12 09:12:56.697 [INFO][60] felix/int_dataplane.go 1473: Completed first update to dataplane. secsSinceStart=0.322922518
2021-01-12 09:12:56.700 [INFO][60] felix/health.go 133: Health of component changed lastReport=health.HealthReport{Live:true, Ready:false} name="int_dataplane" newReport=&health.HealthReport{Live:true, Ready:true}
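
# Note: the line above shows Felix's int_dataplane component flipping to
# Ready:true, i.e. Felix itself reports healthy. If the pod still shows 0/1
# READY, the container readiness probe is the piece left failing. A sketch of
# running that probe by hand (pod name taken from this report; the flags
# assume the stock calico-node readiness command with BGP enabled):
#
#   kubectl exec -n calico-system calico-node-vnxhs -- \
#     /bin/calico-node -felix-ready -bird-ready
#
# A non-zero exit code prints which sub-check (felix or bird) is failing.
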
2021-01-12 09:12:56.700 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1.15", Addrs:set.mapSet{"172.17.254.1":set.empty{}}}
2021-01-12 09:12:56.700 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1.15", Addrs:set.mapSet{"172.17.254.1":set.empty{}}}
2021-01-12 09:12:56.700 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.701 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1.38", Addrs:set.mapSet{"192.168.15.6":set.empty{}}}
2021-01-12 09:12:56.701 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1.38", Addrs:set.mapSet{"192.168.15.6":set.empty{}}}
2021-01-12 09:12:56.701 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.701 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1.403", Addrs:set.mapSet{"172.27.40.11":set.empty{}}}
2021-01-12 09:12:56.701 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1.403", Addrs:set.mapSet{"172.27.40.11":set.empty{}}}
2021-01-12 09:12:56.701 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.701 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"tap0", Addrs:set.mapSet{}}
2021-01-12 09:12:56.701 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"tap0", Addrs:set.mapSet{}}
2021-01-12 09:12:56.701 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.701 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"tap1", Addrs:set.mapSet{}}
2021-01-12 09:12:56.701 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"tap1", Addrs:set.mapSet{}}
2021-01-12 09:12:56.701 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.701 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"tap2", Addrs:set.mapSet{}}
2021-01-12 09:12:56.701 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"tap2", Addrs:set.mapSet{}}
2021-01-12 09:12:56.702 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.702 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"tap3", Addrs:set.mapSet{}}
2021-01-12 09:12:56.702 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"tap3", Addrs:set.mapSet{}}
2021-01-12 09:12:56.702 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.702 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"tap4", Addrs:set.mapSet{}}
2021-01-12 09:12:56.702 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"tap4", Addrs:set.mapSet{}}
2021-01-12 09:12:56.702 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.702 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"tap5", Addrs:set.mapSet{}}
2021-01-12 09:12:56.702 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"tap5", Addrs:set.mapSet{}}
2021-01-12 09:12:56.702 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.702 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"tap6", Addrs:set.mapSet{}}
2021-01-12 09:12:56.702 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"tap6", Addrs:set.mapSet{}}
2021-01-12 09:12:56.702 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.702 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"tap7", Addrs:set.mapSet{}}
2021-01-12 09:12:56.703 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"tap7", Addrs:set.mapSet{}}
2021-01-12 09:12:56.703 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.703 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"tap8", Addrs:set.mapSet{}}
2021-01-12 09:12:56.703 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"tap8", Addrs:set.mapSet{}}
2021-01-12 09:12:56.703 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.703 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"tap9", Addrs:set.mapSet{}}
2021-01-12 09:12:56.703 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"tap9", Addrs:set.mapSet{}}
2021-01-12 09:12:56.703 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.703 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"tap10", Addrs:set.mapSet{}}
2021-01-12 09:12:56.703 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"tap10", Addrs:set.mapSet{}}
2021-01-12 09:12:56.703 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.703 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"tap11", Addrs:set.mapSet{}}
2021-01-12 09:12:56.703 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"tap11", Addrs:set.mapSet{}}
2021-01-12 09:12:56.703 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:56.704 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"cali11cc79557a4", Addrs:set.mapSet{}}
2021-01-12 09:12:56.704 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"cali11cc79557a4", Addrs:set.mapSet{}}
2021-01-12 09:12:56.704 [INFO][60] felix/int_dataplane.go 1482: Dataplane updates throttled
2021-01-12 09:12:56.704 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:12:56.704 [INFO][60] felix/route_table.go 1085: Failed to access interface because it doesn't exist. error=Link not found ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:56.704 [INFO][60] felix/route_table.go 1153: Failed to get interface; it's down/gone. error=Link not found ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:56.704 [ERROR][60] felix/route_table.go 920: Failed to get link attributes error=interface not present ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:56.704 [INFO][60] felix/route_table.go 527: Interface missing, will retry if it appears. ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:56.704 [INFO][60] felix/ipsets.go 749: Doing full IP set rewrite family="inet" numMembersInPendingReplace=23 setID="this-host"
2021-01-12 09:12:56.718 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=13.987919
2021-01-12 09:12:56.718 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"eth1.15", State:"up", Index:30}
2021-01-12 09:12:56.718 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"eth1.38", State:"up", Index:31}
2021-01-12 09:12:56.718 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"eth1.403", State:"up", Index:32}
2021-01-12 09:12:56.718 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"tap0", State:"up", Index:33}
2021-01-12 09:12:56.718 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"tap1", State:"up", Index:34}
2021-01-12 09:12:56.718 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"tap2", State:"up", Index:35}
2021-01-12 09:12:56.718 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"tap3", State:"up", Index:36}
2021-01-12 09:12:56.718 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"tap4", State:"up", Index:37}
2021-01-12 09:12:56.718 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"tap5", State:"up", Index:38}
2021-01-12 09:12:56.718 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"tap6", State:"up", Index:39}
2021-01-12 09:12:56.718 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"tap7", State:"up", Index:40}
2021-01-12 09:12:56.719 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"tap8", State:"up", Index:41}
2021-01-12 09:12:56.719 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"tap9", State:"up", Index:42}
2021-01-12 09:12:56.719 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"tap10", State:"up", Index:43}
2021-01-12 09:12:56.719 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"tap11", State:"up", Index:44}
2021-01-12 09:12:56.719 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"cali11cc79557a4", State:"up", Index:51}
2021-01-12 09:12:56.786 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:12:56.786 [INFO][60] felix/endpoint_mgr.go 353: Workload interface came up, marking for reconfiguration. ifaceName="cali11cc79557a4"
2021-01-12 09:12:56.786 [INFO][60] felix/endpoint_mgr.go 395: Workload interface state changed; marking for status update. ifaceName="cali11cc79557a4"
2021-01-12 09:12:56.787 [INFO][60] felix/endpoint_mgr.go 1089: Applying /proc/sys configuration to interface. ifaceName="cali11cc79557a4"
2021-01-12 09:12:56.787 [INFO][60] felix/endpoint_mgr.go 454: Re-evaluated workload endpoint status adminUp=true failed=false known=true operUp=true status="up" workloadEndpointID=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/metrics-server-7566d596c8-nsxk6", EndpointId:"eth0"}
2021-01-12 09:12:56.787 [INFO][60] felix/status_combiner.go 58: Storing endpoint status update ipVersion=0x4 status="up" workload=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/metrics-server-7566d596c8-nsxk6", EndpointId:"eth0"}
2021-01-12 09:12:56.787 [INFO][60] felix/route_table.go 1085: Failed to access interface because it doesn't exist. error=Link not found ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:56.788 [INFO][60] felix/route_table.go 1153: Failed to get interface; it's down/gone. error=Link not found ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:56.788 [ERROR][60] felix/route_table.go 920: Failed to get link attributes error=interface not present ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:56.789 [INFO][60] felix/route_table.go 527: Interface missing, will retry if it appears. ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:56.789 [INFO][60] felix/status_combiner.go 81: Endpoint up for at least one IP version id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/metrics-server-7566d596c8-nsxk6", EndpointId:"eth0"} ipVersion=0x4 status="up"
2021-01-12 09:12:56.789 [INFO][60] felix/status_combiner.go 98: Reporting combined status. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/metrics-server-7566d596c8-nsxk6", EndpointId:"eth0"} status="up"
2021-01-12 09:12:56.789 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.633984
2021-01-12 09:12:56.888 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:12:56.888 [INFO][60] felix/route_table.go 1085: Failed to access interface because it doesn't exist. error=Link not found ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:56.888 [INFO][60] felix/route_table.go 1153: Failed to get interface; it's down/gone. error=Link not found ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:56.888 [ERROR][60] felix/route_table.go 920: Failed to get link attributes error=interface not present ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:56.888 [INFO][60] felix/route_table.go 527: Interface missing, will retry if it appears. ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:56.888 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=0.394322
2021-01-12 09:12:56.994 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:12:56.994 [INFO][60] felix/route_table.go 1085: Failed to access interface because it doesn't exist. error=Link not found ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:56.994 [INFO][60] felix/route_table.go 1153: Failed to get interface; it's down/gone. error=Link not found ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:56.994 [ERROR][60] felix/route_table.go 920: Failed to get link attributes error=interface not present ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:56.994 [INFO][60] felix/route_table.go 527: Interface missing, will retry if it appears. ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:56.994 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=0.485036
2021-01-12 09:12:57.096 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:12:57.097 [INFO][60] felix/route_table.go 1085: Failed to access interface because it doesn't exist. error=Link not found ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:57.097 [INFO][60] felix/route_table.go 1153: Failed to get interface; it's down/gone. error=Link not found ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:57.097 [ERROR][60] felix/route_table.go 920: Failed to get link attributes error=interface not present ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:57.097 [INFO][60] felix/route_table.go 527: Interface missing, will retry if it appears. ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:57.097 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=0.380375
2021-01-12 09:12:57.206 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:12:57.207 [INFO][60] felix/route_table.go 1085: Failed to access interface because it doesn't exist. error=Link not found ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:57.207 [INFO][60] felix/route_table.go 1153: Failed to get interface; it's down/gone. error=Link not found ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:57.207 [ERROR][60] felix/route_table.go 920: Failed to get link attributes error=interface not present ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:57.207 [INFO][60] felix/route_table.go 527: Interface missing, will retry if it appears. ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:57.207 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=0.5915710000000001
2021-01-12 09:12:57.310 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:12:57.310 [INFO][60] felix/route_table.go 1085: Failed to access interface because it doesn't exist. error=Link not found ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:57.310 [INFO][60] felix/route_table.go 1153: Failed to get interface; it's down/gone. error=Link not found ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:57.310 [ERROR][60] felix/route_table.go 920: Failed to get link attributes error=interface not present ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:57.310 [INFO][60] felix/route_table.go 527: Interface missing, will retry if it appears. ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:57.310 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=0.518916
2021-01-12 09:12:57.410 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:12:57.411 [INFO][60] felix/route_table.go 1085: Failed to access interface because it doesn't exist. error=Link not found ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:57.411 [INFO][60] felix/route_table.go 1153: Failed to get interface; it's down/gone. error=Link not found ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:57.411 [ERROR][60] felix/route_table.go 920: Failed to get link attributes error=interface not present ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:57.411 [INFO][60] felix/route_table.go 527: Interface missing, will retry if it appears. ifaceName="vxlan.calico" ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:12:57.411 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=0.616689
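
# Note: the repeating "Link not found" / "interface not present" messages for
# vxlan.calico above are expected on first start: Felix's route table retries
# every ~100ms until the VXLAN manager creates the device, which happens in
# the next few lines. Only if this loop never ends is the kernel side worth
# checking; a sketch, run on the node:
#
#   lsmod | grep vxlan            # is the VXLAN module loaded/built in?
#   ip -d link show vxlan.calico  # does the device exist at all?
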
2021-01-12 09:12:57.469 [INFO][60] felix/route_table.go 241: Calculated interface name regexp regex="^eth1.403$"
2021-01-12 09:12:57.474 [INFO][60] felix/vxlan_mgr.go 425: Failed to get VXLAN tunnel device, assuming it isn't present error=Link not found
2021-01-12 09:12:57.475 [INFO][60] felix/vxlan_mgr.go 514: Assigning address to VXLAN device address=20.28.116.64/32
2021-01-12 09:12:57.475 [INFO][60] felix/vxlan_mgr.go 370: VXLAN tunnel device configured
2021-01-12 09:12:57.514 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:12:57.520 [INFO][60] felix/vxlan_mgr.go 329: VXLAN Manager completed deferred work
2021-01-12 09:12:57.521 [INFO][60] felix/route_table.go 408: Trying to connect to netlink
2021-01-12 09:12:57.523 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=8.539754
2021-01-12 09:12:57.576 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"20.28.116.64":set.empty{}} ifaceName="vxlan.calico"
2021-01-12 09:12:57.576 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=52 ifaceName="vxlan.calico" state="up"
2021-01-12 09:12:57.576 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"vxlan.calico", Addrs:set.mapSet{"20.28.116.64":set.empty{}}}
2021-01-12 09:12:57.576 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"vxlan.calico", Addrs:set.mapSet{"20.28.116.64":set.empty{}}}
2021-01-12 09:12:57.576 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:57.576 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"vxlan.calico", State:"up", Index:52}
2021-01-12 09:12:57.577 [INFO][60] felix/iface_monitor.go 187: Netlink address update. addr="20.28.116.64" exists=true ifIndex=52
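
# Note: vxlan.calico now exists, is up, and carries the tunnel address
# 20.28.116.64 announced by the VXLANTunnelEndpointUpdate earlier in the log.
# A sketch for double-checking the device and its routes from the node:
#
#   ip -d link show vxlan.calico    # vxlan id, local IP, UDP port
#   ip addr show dev vxlan.calico   # expect 20.28.116.64/32 here
#   ip route | grep vxlan.calico    # remote pod-CIDR routes, if any nodes peer
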
2021-01-12 09:12:57.624 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:12:57.626 [INFO][60] felix/ipsets.go 749: Doing full IP set rewrite family="inet" numMembersInPendingReplace=24 setID="this-host"
2021-01-12 09:12:57.639 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="nat"
2021-01-12 09:12:57.639 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="filter"
2021-01-12 09:12:57.640 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="nat"
2021-01-12 09:12:57.641 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="filter"
2021-01-12 09:12:57.645 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=21.56116
2021-01-12 09:12:57.734 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:12:57.734 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="raw"
2021-01-12 09:12:57.734 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="mangle"
2021-01-12 09:12:57.736 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="raw"
2021-01-12 09:12:57.738 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="mangle"
2021-01-12 09:12:57.741 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=7.243788
2021-01-12 09:12:57.826 [INFO][60] felix/iface_monitor.go 187: Netlink address update. addr="fe80::ecee:eeff:feee:eeee" exists=true ifIndex=51
2021-01-12 09:12:57.826 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"fe80::ecee:eeff:feee:eeee":set.empty{}} ifaceName="cali11cc79557a4"
2021-01-12 09:12:57.826 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"cali11cc79557a4", Addrs:set.mapSet{"fe80::ecee:eeff:feee:eeee":set.empty{}}}
2021-01-12 09:12:57.826 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"cali11cc79557a4", Addrs:set.mapSet{"fe80::ecee:eeff:feee:eeee":set.empty{}}}
2021-01-12 09:12:57.837 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:12:57.837 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=0.24943600000000002
bird: device1: Initializing
bird: direct1: Initializing
bird: device1: Starting
bird: device1: Initializing
bird: direct1: Initializing
bird: device1: Starting
bird: device1: Connected to table master
bird: device1: State changed to feed
bird: direct1: Starting
bird: direct1: Connected to table master
bird: direct1: State changed to feed
bird: Graceful restart started
bird: Graceful restart done
bird: Started
bird: device1: State changed to up
bird: direct1: State changed to up
bird: device1: Connected to table master
bird: device1: State changed to feed
bird: direct1: Starting
bird: direct1: Connected to table master
bird: direct1: State changed to feed
bird: Graceful restart started
bird: Graceful restart done
bird: Started
bird: device1: State changed to up
bird: direct1: State changed to up
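
# Note: the BIRD startup sequence appears twice above because the IPv4 and
# IPv6 daemons each log it; both reach "Started" with their protocols up, so
# the bird half of the readiness probe should pass. If the pod still never
# goes Ready, inspecting BIRD directly can help; a sketch (birdcl and the
# bird.ctl socket path are what calico/node ships, but treat both as
# assumptions for your image):
#
#   kubectl exec -n calico-system calico-node-vnxhs -- \
#     birdcl -s /var/run/calico/bird.ctl show protocols
#
# or, from a host with calicoctl installed:
#
#   calicoctl node status
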
2021-01-12 09:12:58.599 [INFO][60] felix/int_dataplane.go 1450: Dataplane updates no longer throttled
2021-01-12 09:12:58.599 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:12:58.599 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="nat"
2021-01-12 09:12:58.601 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="nat"
2021-01-12 09:12:58.604 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=4.516211
2021-01-12 09:12:58.607 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:12:58.607 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="filter"
2021-01-12 09:12:58.609 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="filter"
2021-01-12 09:12:58.612 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=4.995447
2021-01-12 09:12:58.654 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:12:58.654 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="raw"
2021-01-12 09:12:58.658 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="raw"
2021-01-12 09:12:58.660 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=5.713286
2021-01-12 09:12:58.702 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:12:58.702 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="mangle"
2021-01-12 09:12:58.704 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="mangle"
2021-01-12 09:12:58.706 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=3.832053
2021-01-12 09:12:58.990 [INFO][60] felix/calc_graph.go 413: Local endpoint updated id=WorkloadEndpoint(node=nc1, orchestrator=k8s, workload=calico-system/calico-kube-controllers-6896fd456b-gfqcv, name=eth0)
2021-01-12 09:12:58.990 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ActiveProfileUpdate update from calculation graph msg=id:<name:"ksa.calico-system.calico-kube-controllers" > profile:<>
2021-01-12 09:12:58.991 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-pri-_nzzjLvInId1gPHmQz_" ipVersion=0x4 table="filter"
2021-01-12 09:12:58.991 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="chain update" table="filter"
2021-01-12 09:12:58.991 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-pro-_nzzjLvInId1gPHmQz_" ipVersion=0x4 table="filter"
2021-01-12 09:12:58.991 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ActiveProfileUpdate update from calculation graph msg=id:<name:"kns.calico-system" > profile:<inbound_rules:<action:"allow" rule_id:"fw2fB0QZr1DPiXMg" > outbound_rules:<action:"allow" rule_id:"D66fMttEtc8Iyw4Z" > >
2021-01-12 09:12:58.991 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-pri-kns.calico-system" ipVersion=0x4 table="filter"
2021-01-12 09:12:58.991 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-pro-kns.calico-system" ipVersion=0x4 table="filter"
2021-01-12 09:12:58.992 [INFO][60] felix/int_dataplane.go 1325: Received *proto.WorkloadEndpointUpdate update from calculation graph msg=id:<orchestrator_id:"k8s" workload_id:"calico-system/calico-kube-controllers-6896fd456b-gfqcv" endpoint_id:"eth0" > endpoint:<state:"active" name:"cali6beed33b626" profile_ids:"kns.calico-system" profile_ids:"ksa.calico-system.calico-kube-controllers" ipv4_nets:"20.28.116.66/32" >
2021-01-12 09:12:58.992 [INFO][60] felix/int_dataplane.go 1325: Received *proto.RouteUpdate update from calculation graph msg=type:LOCAL_WORKLOAD ip_pool_type:VXLAN dst:"20.28.116.66/32" dst_node_name:"nc1" dst_node_ip:"172.27.40.11" same_subnet:true nat_outgoing:true local_workload:true
2021-01-12 09:12:58.992 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:12:58.992 [INFO][60] felix/endpoint_mgr.go 561: Updating per-endpoint chains. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"calico-system/calico-kube-controllers-6896fd456b-gfqcv", EndpointId:"eth0"}
2021-01-12 09:12:58.993 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-tw-cali6beed33b626" ipVersion=0x4 table="filter"
2021-01-12 09:12:58.993 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-pri-kns.calico-system"
2021-01-12 09:12:58.993 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-pri-_nzzjLvInId1gPHmQz_"
2021-01-12 09:12:58.994 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-fw-cali6beed33b626" ipVersion=0x4 table="filter"
2021-01-12 09:12:58.994 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-pro-kns.calico-system"
2021-01-12 09:12:58.994 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-pro-_nzzjLvInId1gPHmQz_"
2021-01-12 09:12:58.994 [INFO][60] felix/endpoint_mgr.go 592: Updating endpoint routes. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"calico-system/calico-kube-controllers-6896fd456b-gfqcv", EndpointId:"eth0"}
2021-01-12 09:12:58.994 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-from-wl-dispatch" ipVersion=0x4 table="filter"
2021-01-12 09:12:58.994 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-fw-cali6beed33b626"
2021-01-12 09:12:58.994 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-to-wl-dispatch" ipVersion=0x4 table="filter"
2021-01-12 09:12:58.994 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-tw-cali6beed33b626"
2021-01-12 09:12:58.994 [INFO][60] felix/endpoint_mgr.go 1072: Skipping configuration of interface because it is oper down. ifaceName="cali6beed33b626"
2021-01-12 09:12:58.994 [INFO][60] felix/endpoint_mgr.go 454: Re-evaluated workload endpoint status adminUp=true failed=false known=true operUp=false status="down" workloadEndpointID=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"calico-system/calico-kube-controllers-6896fd456b-gfqcv", EndpointId:"eth0"}
2021-01-12 09:12:58.994 [INFO][60] felix/status_combiner.go 58: Storing endpoint status update ipVersion=0x4 status="down" workload=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"calico-system/calico-kube-controllers-6896fd456b-gfqcv", EndpointId:"eth0"}
2021-01-12 09:12:58.994 [INFO][60] felix/route_table.go 408: Trying to connect to netlink
2021-01-12 09:12:58.996 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="filter"
2021-01-12 09:12:59.003 [INFO][60] felix/status_combiner.go 78: Endpoint down for at least one IP version id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"calico-system/calico-kube-controllers-6896fd456b-gfqcv", EndpointId:"eth0"} ipVersion=0x4 status="down"
2021-01-12 09:12:59.003 [INFO][60] felix/status_combiner.go 98: Reporting combined status. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"calico-system/calico-kube-controllers-6896fd456b-gfqcv", EndpointId:"eth0"} status="down"
2021-01-12 09:12:59.003 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=10.476869
2021-01-12 09:12:59.042 [INFO][60] felix/iface_monitor.go 187: Netlink address update. addr="fe80::6414:b1ff:fe49:a0bb" exists=true ifIndex=52
2021-01-12 09:12:59.042 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"20.28.116.64":set.empty{}, "fe80::6414:b1ff:fe49:a0bb":set.empty{}} ifaceName="vxlan.calico"
2021-01-12 09:12:59.042 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"vxlan.calico", Addrs:set.mapSet{"20.28.116.64":set.empty{}, "fe80::6414:b1ff:fe49:a0bb":set.empty{}}}
2021-01-12 09:12:59.042 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"vxlan.calico", Addrs:set.mapSet{"20.28.116.64":set.empty{}, "fe80::6414:b1ff:fe49:a0bb":set.empty{}}}
2021-01-12 09:12:59.042 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:12:59.042 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:12:59.045 [INFO][60] felix/ipsets.go 749: Doing full IP set rewrite family="inet" numMembersInPendingReplace=24 setID="this-host"
2021-01-12 09:12:59.055 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=12.943848000000001
2021-01-12 09:12:59.081 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="cali6beed33b626"
2021-01-12 09:12:59.081 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"cali6beed33b626", Addrs:set.mapSet{}}
2021-01-12 09:12:59.081 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"cali6beed33b626", Addrs:set.mapSet{}}
2021-01-12 09:12:59.081 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:12:59.081 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=0.22375499999999998
2021-01-12 09:12:59.082 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=53 ifaceName="cali6beed33b626" state="up"
2021-01-12 09:12:59.082 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"cali6beed33b626", State:"up", Index:53}
2021-01-12 09:12:59.082 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:12:59.082 [INFO][60] felix/endpoint_mgr.go 353: Workload interface came up, marking for reconfiguration. ifaceName="cali6beed33b626"
2021-01-12 09:12:59.082 [INFO][60] felix/endpoint_mgr.go 395: Workload interface state changed; marking for status update. ifaceName="cali6beed33b626"
2021-01-12 09:12:59.082 [INFO][60] felix/endpoint_mgr.go 1089: Applying /proc/sys configuration to interface. ifaceName="cali6beed33b626"
2021-01-12 09:12:59.083 [INFO][60] felix/endpoint_mgr.go 454: Re-evaluated workload endpoint status adminUp=true failed=false known=true operUp=true status="up" workloadEndpointID=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"calico-system/calico-kube-controllers-6896fd456b-gfqcv", EndpointId:"eth0"}
2021-01-12 09:12:59.083 [INFO][60] felix/status_combiner.go 58: Storing endpoint status update ipVersion=0x4 status="up" workload=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"calico-system/calico-kube-controllers-6896fd456b-gfqcv", EndpointId:"eth0"}
2021-01-12 09:12:59.084 [INFO][60] felix/status_combiner.go 81: Endpoint up for at least one IP version id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"calico-system/calico-kube-controllers-6896fd456b-gfqcv", EndpointId:"eth0"} ipVersion=0x4 status="up"
2021-01-12 09:12:59.084 [INFO][60] felix/status_combiner.go 98: Reporting combined status. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"calico-system/calico-kube-controllers-6896fd456b-gfqcv", EndpointId:"eth0"} status="up"
2021-01-12 09:12:59.084 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=1.9487020000000002
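At this point Felix reports the calico-kube-controllers endpoint as status="up", so the workload side of the dataplane looks healthy. As a cross-check (a sketch, assuming `calicoctl` is installed and configured against the k3s datastore), the same status should be visible from the datastore side:

```
# Workload endpoints Felix manages; these should mirror the
# status="up" reports in the log above.
calicoctl get workloadendpoints --all-namespaces -o wide
```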
2021-01-12 09:13:00.003 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:00.003 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="filter"
2021-01-12 09:13:00.005 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="filter"
2021-01-12 09:13:00.009 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=5.66046
2021-01-12 09:13:00.130 [INFO][60] felix/iface_monitor.go 187: Netlink address update. addr="fe80::ecee:eeff:feee:eeee" exists=true ifIndex=53
2021-01-12 09:13:00.130 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"fe80::ecee:eeff:feee:eeee":set.empty{}} ifaceName="cali6beed33b626"
2021-01-12 09:13:00.130 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"cali6beed33b626", Addrs:set.mapSet{"fe80::ecee:eeff:feee:eeee":set.empty{}}}
2021-01-12 09:13:00.130 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"cali6beed33b626", Addrs:set.mapSet{"fe80::ecee:eeff:feee:eeee":set.empty{}}}
2021-01-12 09:13:00.130 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:00.130 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=0.16891799999999998
2021-01-12 09:13:00.599 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:00.599 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="nat"
2021-01-12 09:13:00.601 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="nat"
2021-01-12 09:13:00.604 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=4.757281000000001
2021-01-12 09:13:00.607 [INFO][60] felix/health.go 196: Overall health status changed newStatus=&health.HealthReport{Live:true, Ready:true}
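This is the most telling line in the dump: Felix's health aggregator flips to `Live:true, Ready:true` at 09:13:00, i.e. Felix itself considers the node converged. If the pod still shows `0/1` Ready after this point, it is the container's readiness probe that keeps failing, not Felix. In v3.17 the probe typically execs the `calico-node` binary with `-felix-ready` (plus `-bird-ready` on BGP installs), so the same check can be run by hand (the pod name below is a placeholder, substitute your own):

```
# Re-run the kubelet's readiness check manually; a non-zero exit code plus the
# printed reason identifies the failing sub-check.
kubectl exec -n calico-system <calico-node-pod> -c calico-node -- \
  /usr/bin/calico-node -felix-ready -bird-ready
echo $?
```

On a VXLAN-only install `-bird-ready` may not be part of the probe; check the DaemonSet spec for the exact flags the operator configured.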
2021-01-12 09:13:00.654 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:00.654 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="raw"
2021-01-12 09:13:00.656 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="raw"
2021-01-12 09:13:00.658 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=4.257300000000001
2021-01-12 09:13:00.701 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:00.701 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="mangle"
2021-01-12 09:13:00.702 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="mangle"
2021-01-12 09:13:00.705 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=3.840152
2021-01-12 09:13:01.006 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:01.007 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="filter"
2021-01-12 09:13:01.008 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="filter"
2021-01-12 09:13:01.013 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=5.673719
2021-01-12 09:13:03.008 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:03.009 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="filter"
2021-01-12 09:13:03.010 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="filter"
2021-01-12 09:13:03.014 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=5.574748
2021-01-12 09:13:04.604 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:04.605 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="nat"
2021-01-12 09:13:04.606 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="nat"
2021-01-12 09:13:04.609 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=4.654584
2021-01-12 09:13:04.654 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:04.654 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="raw"
2021-01-12 09:13:04.656 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="raw"
2021-01-12 09:13:04.658 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=4.43866
2021-01-12 09:13:04.701 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:04.701 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="mangle"
2021-01-12 09:13:04.707 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="mangle"
2021-01-12 09:13:04.710 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=8.777396
2021-01-12 09:13:06.019 [INFO][60] felix/calc_graph.go 413: Local endpoint updated id=WorkloadEndpoint(node=nc1, orchestrator=k8s, workload=kube-system/coredns-8655855d6-tjn2l, name=eth0)
2021-01-12 09:13:06.019 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ActiveProfileUpdate update from calculation graph msg=id:<name:"ksa.kube-system.coredns" > profile:<>
2021-01-12 09:13:06.019 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-pri-_u2Tn2rSoAPffvE7JO6" ipVersion=0x4 table="filter"
2021-01-12 09:13:06.019 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="chain update" table="filter"
2021-01-12 09:13:06.020 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-pro-_u2Tn2rSoAPffvE7JO6" ipVersion=0x4 table="filter"
2021-01-12 09:13:06.020 [INFO][60] felix/int_dataplane.go 1325: Received *proto.WorkloadEndpointUpdate update from calculation graph msg=id:<orchestrator_id:"k8s" workload_id:"kube-system/coredns-8655855d6-tjn2l" endpoint_id:"eth0" > endpoint:<state:"active" name:"calicc1d21c9713" profile_ids:"kns.kube-system" profile_ids:"ksa.kube-system.coredns" ipv4_nets:"20.28.116.67/32" >
2021-01-12 09:13:06.020 [INFO][60] felix/int_dataplane.go 1325: Received *proto.RouteUpdate update from calculation graph msg=type:LOCAL_WORKLOAD ip_pool_type:VXLAN dst:"20.28.116.67/32" dst_node_name:"nc1" dst_node_ip:"172.27.40.11" same_subnet:true nat_outgoing:true local_workload:true
2021-01-12 09:13:06.020 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:06.020 [INFO][60] felix/endpoint_mgr.go 561: Updating per-endpoint chains. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/coredns-8655855d6-tjn2l", EndpointId:"eth0"}
2021-01-12 09:13:06.020 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-tw-calicc1d21c9713" ipVersion=0x4 table="filter"
2021-01-12 09:13:06.020 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-pri-_u2Tn2rSoAPffvE7JO6"
2021-01-12 09:13:06.021 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-fw-calicc1d21c9713" ipVersion=0x4 table="filter"
2021-01-12 09:13:06.021 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-pro-_u2Tn2rSoAPffvE7JO6"
2021-01-12 09:13:06.021 [INFO][60] felix/endpoint_mgr.go 592: Updating endpoint routes. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/coredns-8655855d6-tjn2l", EndpointId:"eth0"}
2021-01-12 09:13:06.021 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-from-wl-dispatch" ipVersion=0x4 table="filter"
2021-01-12 09:13:06.021 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-fw-calicc1d21c9713"
2021-01-12 09:13:06.021 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-to-wl-dispatch" ipVersion=0x4 table="filter"
2021-01-12 09:13:06.021 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-tw-calicc1d21c9713"
2021-01-12 09:13:06.021 [INFO][60] felix/endpoint_mgr.go 1072: Skipping configuration of interface because it is oper down. ifaceName="calicc1d21c9713"
2021-01-12 09:13:06.021 [INFO][60] felix/endpoint_mgr.go 454: Re-evaluated workload endpoint status adminUp=true failed=false known=true operUp=false status="down" workloadEndpointID=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/coredns-8655855d6-tjn2l", EndpointId:"eth0"}
2021-01-12 09:13:06.022 [INFO][60] felix/status_combiner.go 58: Storing endpoint status update ipVersion=0x4 status="down" workload=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/coredns-8655855d6-tjn2l", EndpointId:"eth0"}
2021-01-12 09:13:06.022 [INFO][60] felix/route_table.go 408: Trying to connect to netlink
2021-01-12 09:13:06.026 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="filter"
2021-01-12 09:13:06.038 [INFO][60] felix/status_combiner.go 78: Endpoint down for at least one IP version id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/coredns-8655855d6-tjn2l", EndpointId:"eth0"} ipVersion=0x4 status="down"
2021-01-12 09:13:06.038 [INFO][60] felix/status_combiner.go 98: Reporting combined status. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/coredns-8655855d6-tjn2l", EndpointId:"eth0"} status="down"
2021-01-12 09:13:06.038 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=18.135313
2021-01-12 09:13:06.062 [INFO][60] felix/calc_graph.go 413: Local endpoint updated id=WorkloadEndpoint(node=nc1, orchestrator=k8s, workload=kube-system/local-path-provisioner-6d59f47c7-vpfwj, name=eth0)
2021-01-12 09:13:06.062 [INFO][60] felix/int_dataplane.go 1325: Received *proto.ActiveProfileUpdate update from calculation graph msg=id:<name:"ksa.kube-system.local-path-provisioner-service-account" > profile:<>
2021-01-12 09:13:06.062 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-pri-_HayIXLB85hzHkIhWER" ipVersion=0x4 table="filter"
2021-01-12 09:13:06.063 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="chain update" table="filter"
2021-01-12 09:13:06.063 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-pro-_HayIXLB85hzHkIhWER" ipVersion=0x4 table="filter"
2021-01-12 09:13:06.063 [INFO][60] felix/int_dataplane.go 1325: Received *proto.WorkloadEndpointUpdate update from calculation graph msg=id:<orchestrator_id:"k8s" workload_id:"kube-system/local-path-provisioner-6d59f47c7-vpfwj" endpoint_id:"eth0" > endpoint:<state:"active" name:"cali50d724d0bd7" profile_ids:"kns.kube-system" profile_ids:"ksa.kube-system.local-path-provisioner-service-account" ipv4_nets:"20.28.116.68/32" >
2021-01-12 09:13:06.063 [INFO][60] felix/int_dataplane.go 1325: Received *proto.RouteUpdate update from calculation graph msg=type:LOCAL_WORKLOAD ip_pool_type:VXLAN dst:"20.28.116.68/32" dst_node_name:"nc1" dst_node_ip:"172.27.40.11" same_subnet:true nat_outgoing:true local_workload:true
2021-01-12 09:13:06.063 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:06.063 [INFO][60] felix/endpoint_mgr.go 561: Updating per-endpoint chains. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/local-path-provisioner-6d59f47c7-vpfwj", EndpointId:"eth0"}
2021-01-12 09:13:06.063 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-tw-cali50d724d0bd7" ipVersion=0x4 table="filter"
2021-01-12 09:13:06.063 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-pri-_HayIXLB85hzHkIhWER"
2021-01-12 09:13:06.063 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-fw-cali50d724d0bd7" ipVersion=0x4 table="filter"
2021-01-12 09:13:06.063 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-pro-_HayIXLB85hzHkIhWER"
2021-01-12 09:13:06.063 [INFO][60] felix/endpoint_mgr.go 592: Updating endpoint routes. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/local-path-provisioner-6d59f47c7-vpfwj", EndpointId:"eth0"}
2021-01-12 09:13:06.064 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-from-wl-dispatch" ipVersion=0x4 table="filter"
2021-01-12 09:13:06.064 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-fw-cali50d724d0bd7"
2021-01-12 09:13:06.064 [INFO][60] felix/table.go 500: Queueing update of chain. chainName="cali-to-wl-dispatch" ipVersion=0x4 table="filter"
2021-01-12 09:13:06.064 [INFO][60] felix/table.go 574: Chain became referenced, marking it for programming chainName="cali-tw-cali50d724d0bd7"
2021-01-12 09:13:06.064 [INFO][60] felix/endpoint_mgr.go 1072: Skipping configuration of interface because it is oper down. ifaceName="cali50d724d0bd7"
2021-01-12 09:13:06.064 [INFO][60] felix/endpoint_mgr.go 454: Re-evaluated workload endpoint status adminUp=true failed=false known=true operUp=false status="down" workloadEndpointID=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/local-path-provisioner-6d59f47c7-vpfwj", EndpointId:"eth0"}
2021-01-12 09:13:06.064 [INFO][60] felix/status_combiner.go 58: Storing endpoint status update ipVersion=0x4 status="down" workload=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/local-path-provisioner-6d59f47c7-vpfwj", EndpointId:"eth0"}
2021-01-12 09:13:06.065 [INFO][60] felix/route_table.go 408: Trying to connect to netlink
2021-01-12 09:13:06.070 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="filter"
2021-01-12 09:13:06.080 [INFO][60] felix/status_combiner.go 78: Endpoint down for at least one IP version id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/local-path-provisioner-6d59f47c7-vpfwj", EndpointId:"eth0"} ipVersion=0x4 status="down"
2021-01-12 09:13:06.080 [INFO][60] felix/status_combiner.go 98: Reporting combined status. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/local-path-provisioner-6d59f47c7-vpfwj", EndpointId:"eth0"} status="down"
2021-01-12 09:13:06.080 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=16.814217
2021-01-12 09:13:06.105 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="calicc1d21c9713"
2021-01-12 09:13:06.105 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"calicc1d21c9713", Addrs:set.mapSet{}}
2021-01-12 09:13:06.105 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"calicc1d21c9713", Addrs:set.mapSet{}}
2021-01-12 09:13:06.105 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:06.106 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=0.203322
2021-01-12 09:13:06.107 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=54 ifaceName="calicc1d21c9713" state="up"
2021-01-12 09:13:06.108 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"calicc1d21c9713", State:"up", Index:54}
2021-01-12 09:13:06.108 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:06.108 [INFO][60] felix/endpoint_mgr.go 353: Workload interface came up, marking for reconfiguration. ifaceName="calicc1d21c9713"
2021-01-12 09:13:06.108 [INFO][60] felix/endpoint_mgr.go 395: Workload interface state changed; marking for status update. ifaceName="calicc1d21c9713"
2021-01-12 09:13:06.109 [INFO][60] felix/endpoint_mgr.go 1089: Applying /proc/sys configuration to interface. ifaceName="calicc1d21c9713"
2021-01-12 09:13:06.109 [INFO][60] felix/endpoint_mgr.go 454: Re-evaluated workload endpoint status adminUp=true failed=false known=true operUp=true status="up" workloadEndpointID=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/coredns-8655855d6-tjn2l", EndpointId:"eth0"}
2021-01-12 09:13:06.109 [INFO][60] felix/status_combiner.go 58: Storing endpoint status update ipVersion=0x4 status="up" workload=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/coredns-8655855d6-tjn2l", EndpointId:"eth0"}
2021-01-12 09:13:06.110 [INFO][60] felix/status_combiner.go 81: Endpoint up for at least one IP version id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/coredns-8655855d6-tjn2l", EndpointId:"eth0"} ipVersion=0x4 status="up"
2021-01-12 09:13:06.110 [INFO][60] felix/status_combiner.go 98: Reporting combined status. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/coredns-8655855d6-tjn2l", EndpointId:"eth0"} status="up"
2021-01-12 09:13:06.110 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=1.882498
2021-01-12 09:13:06.151 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="cali50d724d0bd7"
2021-01-12 09:13:06.151 [INFO][60] felix/int_dataplane.go 942: Linux interface state changed. ifIndex=55 ifaceName="cali50d724d0bd7" state="up"
2021-01-12 09:13:06.151 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"cali50d724d0bd7", Addrs:set.mapSet{}}
2021-01-12 09:13:06.151 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"cali50d724d0bd7", Addrs:set.mapSet{}}
2021-01-12 09:13:06.151 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:06.151 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=0.26601800000000003
2021-01-12 09:13:06.152 [INFO][60] felix/int_dataplane.go 1340: Received interface update msg=&intdataplane.ifaceUpdate{Name:"cali50d724d0bd7", State:"up", Index:55}
2021-01-12 09:13:06.152 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:06.152 [INFO][60] felix/endpoint_mgr.go 353: Workload interface came up, marking for reconfiguration. ifaceName="cali50d724d0bd7"
2021-01-12 09:13:06.152 [INFO][60] felix/endpoint_mgr.go 395: Workload interface state changed; marking for status update. ifaceName="cali50d724d0bd7"
2021-01-12 09:13:06.152 [INFO][60] felix/endpoint_mgr.go 1089: Applying /proc/sys configuration to interface. ifaceName="cali50d724d0bd7"
2021-01-12 09:13:06.153 [INFO][60] felix/endpoint_mgr.go 454: Re-evaluated workload endpoint status adminUp=true failed=false known=true operUp=true status="up" workloadEndpointID=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/local-path-provisioner-6d59f47c7-vpfwj", EndpointId:"eth0"}
2021-01-12 09:13:06.153 [INFO][60] felix/status_combiner.go 58: Storing endpoint status update ipVersion=0x4 status="up" workload=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/local-path-provisioner-6d59f47c7-vpfwj", EndpointId:"eth0"}
2021-01-12 09:13:06.154 [INFO][60] felix/status_combiner.go 81: Endpoint up for at least one IP version id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/local-path-provisioner-6d59f47c7-vpfwj", EndpointId:"eth0"} ipVersion=0x4 status="up"
2021-01-12 09:13:06.155 [INFO][60] felix/status_combiner.go 98: Reporting combined status. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/local-path-provisioner-6d59f47c7-vpfwj", EndpointId:"eth0"} status="up"
2021-01-12 09:13:06.155 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.859932
2021-01-12 09:13:06.607 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:06.607 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:13:06.608 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:13:06.610 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.890075ms
2021-01-12 09:13:06.610 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.467495
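The periodic ipset resyncs keep coming back with `numInconsistenciesFound=0`, so the kernel's IP sets match what Felix wants. To inspect them directly on the node (Calico's set names are hashed, so list them first; the second command takes one of the printed names, not a literal):

```
# Calico-owned IP sets, then a dump of one of them
sudo ipset -n list | grep '^cali'
sudo ipset list <name-from-the-listing-above>
```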
2021-01-12 09:13:07.080 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:07.080 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="filter"
2021-01-12 09:13:07.082 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="filter"
2021-01-12 09:13:07.087 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=7.238559
2021-01-12 09:13:07.554 [INFO][60] felix/iface_monitor.go 187: Netlink address update. addr="fe80::ecee:eeff:feee:eeee" exists=true ifIndex=55
2021-01-12 09:13:07.554 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"fe80::ecee:eeff:feee:eeee":set.empty{}} ifaceName="cali50d724d0bd7"
2021-01-12 09:13:07.554 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"cali50d724d0bd7", Addrs:set.mapSet{"fe80::ecee:eeff:feee:eeee":set.empty{}}}
2021-01-12 09:13:07.554 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"cali50d724d0bd7", Addrs:set.mapSet{"fe80::ecee:eeff:feee:eeee":set.empty{}}}
2021-01-12 09:13:07.554 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:07.554 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=0.25044
2021-01-12 09:13:07.874 [INFO][60] felix/iface_monitor.go 187: Netlink address update. addr="fe80::ecee:eeff:feee:eeee" exists=true ifIndex=54
2021-01-12 09:13:07.874 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"fe80::ecee:eeff:feee:eeee":set.empty{}} ifaceName="calicc1d21c9713"
2021-01-12 09:13:07.874 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"calicc1d21c9713", Addrs:set.mapSet{"fe80::ecee:eeff:feee:eeee":set.empty{}}}
2021-01-12 09:13:07.874 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"calicc1d21c9713", Addrs:set.mapSet{"fe80::ecee:eeff:feee:eeee":set.empty{}}}
2021-01-12 09:13:07.874 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:07.875 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=0.242904
2021-01-12 09:13:08.080 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:08.080 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="filter"
2021-01-12 09:13:08.082 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="filter"
2021-01-12 09:13:08.087 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=7.300778
2021-01-12 09:13:10.087 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:10.087 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="filter"
2021-01-12 09:13:10.091 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="filter"
2021-01-12 09:13:10.096 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=9.147357999999999
2021-01-12 09:13:12.608 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:12.608 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="nat"
2021-01-12 09:13:12.610 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="nat"
2021-01-12 09:13:12.615 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=6.571425
2021-01-12 09:13:12.656 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:12.656 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="raw"
2021-01-12 09:13:12.657 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="raw"
2021-01-12 09:13:12.659 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=3.536496
2021-01-12 09:13:12.700 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:12.700 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="mangle"
2021-01-12 09:13:12.702 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="mangle"
2021-01-12 09:13:12.703 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=3.3574949999999997
2021-01-12 09:13:14.083 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:14.083 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="filter"
2021-01-12 09:13:14.087 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="filter"
2021-01-12 09:13:14.092 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=8.564852
2021-01-12 09:13:16.962 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:16.962 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:13:16.962 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:13:16.964 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.930915ms
2021-01-12 09:13:16.964 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.485943
2021-01-12 09:13:22.080 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:22.080 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="filter"
2021-01-12 09:13:22.082 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="filter"
2021-01-12 09:13:22.086 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=6.267407
2021-01-12 09:13:27.904 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:27.905 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:13:27.905 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:13:27.906 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.547224ms
2021-01-12 09:13:27.907 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=1.961389
2021-01-12 09:13:28.599 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:28.599 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="nat"
2021-01-12 09:13:28.601 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="nat"
2021-01-12 09:13:28.605 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=5.8685220000000005
2021-01-12 09:13:28.655 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:28.656 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="raw"
2021-01-12 09:13:28.659 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="raw"
2021-01-12 09:13:28.661 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=5.732122
2021-01-12 09:13:28.700 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:28.700 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="mangle"
2021-01-12 09:13:28.702 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="mangle"
2021-01-12 09:13:28.704 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=3.9291480000000005
2021-01-12 09:13:38.083 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:38.084 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="filter"
2021-01-12 09:13:38.086 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="filter"
2021-01-12 09:13:38.090 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=6.410311
2021-01-12 09:13:38.822 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:38.823 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:13:38.823 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:13:38.825 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.873027ms
2021-01-12 09:13:38.825 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.2451149999999997
2021-01-12 09:13:48.998 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:48.998 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:13:48.998 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:13:49.000 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.875746ms
2021-01-12 09:13:49.000 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.388089
2021-01-12 09:13:56.409 [INFO][57] monitor-addresses/startup.go 759: Using autodetected IPv4 address on interface eth1.403: 172.27.40.11/24
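The address monitor re-confirms the autodetected IPv4 address once a minute, and it consistently picks 172.27.40.11 on eth1.403. If that is not the interface the cluster should be using, the autodetection method can be pinned on the operator's Installation resource (field name as in the operator API; a sketch of where to look, not a required change):

```
# Inspect the current autodetection settings; look under
# spec.calicoNetwork.nodeAddressAutodetectionV4
kubectl get installation default -o yaml
```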
2021-01-12 09:13:59.102 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:13:59.103 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:13:59.103 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:13:59.105 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.899001ms
2021-01-12 09:13:59.106 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=3.21556
2021-01-12 09:14:00.599 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:14:00.599 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="nat"
2021-01-12 09:14:00.601 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="nat"
2021-01-12 09:14:00.605 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=6.264124000000001
2021-01-12 09:14:00.655 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:14:00.656 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="raw"
2021-01-12 09:14:00.657 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="raw"
2021-01-12 09:14:00.659 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=4.025167
2021-01-12 09:14:00.700 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:14:00.701 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="mangle"
2021-01-12 09:14:00.702 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="mangle"
2021-01-12 09:14:00.705 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=3.9643819999999996
2021-01-12 09:14:09.668 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:14:09.668 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:14:09.668 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:14:09.670 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.584475ms
2021-01-12 09:14:09.670 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.0773010000000003
2021-01-12 09:14:10.080 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:14:10.080 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="filter"
2021-01-12 09:14:10.081 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="filter"
2021-01-12 09:14:10.085 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=5.471356999999999
2021-01-12 09:14:19.940 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:14:19.940 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:14:19.940 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:14:19.942 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.667366ms
2021-01-12 09:14:19.942 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.094823
2021-01-12 09:14:26.505 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="cali11cc79557a4"
2021-01-12 09:14:26.505 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"cali11cc79557a4", Addrs:set.mapSet{}}
2021-01-12 09:14:26.505 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"cali11cc79557a4", Addrs:set.mapSet{}}
2021-01-12 09:14:26.505 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:14:26.506 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=0.46857099999999996
2021-01-12 09:14:26.506 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{"20.28.116.64":set.empty{}} ifaceName="vxlan.calico"
2021-01-12 09:14:26.507 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"vxlan.calico", Addrs:set.mapSet{"20.28.116.64":set.empty{}}}
2021-01-12 09:14:26.507 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"vxlan.calico", Addrs:set.mapSet{"20.28.116.64":set.empty{}}}
2021-01-12 09:14:26.508 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="cali6beed33b626"
2021-01-12 09:14:26.508 [INFO][60] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-01-12 09:14:26.509 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"cali6beed33b626", Addrs:set.mapSet{}}
2021-01-12 09:14:26.509 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"cali6beed33b626", Addrs:set.mapSet{}}
2021-01-12 09:14:26.509 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:14:26.510 [INFO][60] felix/ipsets.go 749: Doing full IP set rewrite family="inet" numMembersInPendingReplace=24 setID="this-host"
2021-01-12 09:14:26.510 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="calicc1d21c9713"
2021-01-12 09:14:26.511 [INFO][60] felix/int_dataplane.go 977: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="cali50d724d0bd7"
2021-01-12 09:14:26.520 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=10.516744
2021-01-12 09:14:26.520 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"calicc1d21c9713", Addrs:set.mapSet{}}
2021-01-12 09:14:26.520 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"calicc1d21c9713", Addrs:set.mapSet{}}
2021-01-12 09:14:26.520 [INFO][60] felix/int_dataplane.go 1358: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"cali50d724d0bd7", Addrs:set.mapSet{}}
2021-01-12 09:14:26.520 [INFO][60] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"cali50d724d0bd7", Addrs:set.mapSet{}}
2021-01-12 09:14:26.520 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:14:26.521 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=0.277618
2021-01-12 09:14:30.826 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:14:30.827 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:14:30.827 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:14:30.828 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.63091ms
2021-01-12 09:14:30.829 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.1508410000000002
2021-01-12 09:14:31.465 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:14:31.465 [INFO][60] felix/route_table.go 398: Queueing a resync of routing table. ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:14:31.465 [INFO][60] felix/route_table.go 398: Queueing a resync of routing table. ifaceRegex="^eth1.403$" ipVersion=0x4
2021-01-12 09:14:31.465 [INFO][60] felix/route_table.go 398: Queueing a resync of routing table. ifaceRegex="^cali.*" ipVersion=0x4
2021-01-12 09:14:31.465 [INFO][60] felix/wireguard.go 534: Queueing a resync of wireguard configuration
2021-01-12 09:14:31.465 [INFO][60] felix/route_table.go 398: Queueing a resync of routing table. ifaceRegex="^wireguard.cali$" ipVersion=0x4
2021-01-12 09:14:31.465 [INFO][60] felix/route_rule.go 172: Queueing a resync of routing rules. ipVersion=4
2021-01-12 09:14:31.471 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=6.4941189999999995
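Roughly every 90 s Felix queues a resync of the routing tables it owns (the `^vxlan.calico$`, `^cali.*` and `^wireguard.cali$` regexes above), and no errors follow, so the programmed routes appear consistent. Assuming shell access to the node, the kernel state can be cross-checked with plain iproute2:

```
# Routes Felix programs for local workloads and the VXLAN overlay
ip route show | grep -E 'cali|vxlan.calico'
# VXLAN device details (VNI, port, local tunnel address)
ip -d link show vxlan.calico
```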
2021-01-12 09:14:41.183 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:14:41.183 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:14:41.183 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:14:41.185 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.646026ms
2021-01-12 09:14:41.185 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.016867
2021-01-12 09:14:51.488 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:14:51.488 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:14:51.489 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:14:51.490 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.612324ms
2021-01-12 09:14:51.490 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.0118810000000003
2021-01-12 09:14:56.417 [INFO][57] monitor-addresses/startup.go 759: Using autodetected IPv4 address on interface eth1.403: 172.27.40.11/24
2021-01-12 09:15:01.814 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:15:01.814 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:15:01.815 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:15:01.816 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.803835ms
2021-01-12 09:15:01.817 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.271943
2021-01-12 09:15:04.599 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:15:04.599 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="nat"
2021-01-12 09:15:04.601 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="nat"
2021-01-12 09:15:04.605 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=5.857302
2021-01-12 09:15:04.655 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:15:04.655 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="raw"
2021-01-12 09:15:04.657 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="raw"
2021-01-12 09:15:04.659 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=3.7496869999999998
2021-01-12 09:15:04.700 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:15:04.700 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="mangle"
2021-01-12 09:15:04.704 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="mangle"
2021-01-12 09:15:04.706 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=5.81405
2021-01-12 09:15:12.084 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:15:12.084 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:15:12.084 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:15:12.086 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=2.040865ms
2021-01-12 09:15:12.086 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.6457479999999998
2021-01-12 09:15:14.080 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:15:14.080 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="filter"
2021-01-12 09:15:14.082 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="filter"
2021-01-12 09:15:14.086 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=5.839944
2021-01-12 09:15:22.918 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:15:22.918 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:15:22.918 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:15:22.920 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.661274ms
2021-01-12 09:15:22.920 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.0915090000000003
2021-01-12 09:15:33.771 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:15:33.771 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:15:33.771 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:15:33.773 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.651818ms
2021-01-12 09:15:33.773 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.0836729999999997
2021-01-12 09:15:44.352 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:15:44.352 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:15:44.352 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:15:44.354 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.925403ms
2021-01-12 09:15:44.354 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.388235
2021-01-12 09:15:54.830 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:15:54.830 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:15:54.830 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:15:54.832 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.545648ms
2021-01-12 09:15:54.832 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=1.9068049999999999
2021-01-12 09:15:56.428 [INFO][57] monitor-addresses/startup.go 759: Using autodetected IPv4 address on interface eth1.403: 172.27.40.11/24
2021-01-12 09:16:01.562 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:16:01.562 [INFO][60] felix/route_table.go 398: Queueing a resync of routing table. ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:16:01.562 [INFO][60] felix/route_table.go 398: Queueing a resync of routing table. ifaceRegex="^eth1.403$" ipVersion=0x4
2021-01-12 09:16:01.562 [INFO][60] felix/route_table.go 398: Queueing a resync of routing table. ifaceRegex="^cali.*" ipVersion=0x4
2021-01-12 09:16:01.562 [INFO][60] felix/wireguard.go 534: Queueing a resync of wireguard configuration
2021-01-12 09:16:01.562 [INFO][60] felix/route_table.go 398: Queueing a resync of routing table. ifaceRegex="^wireguard.cali$" ipVersion=0x4
2021-01-12 09:16:01.562 [INFO][60] felix/route_rule.go 172: Queueing a resync of routing rules. ipVersion=4
2021-01-12 09:16:01.569 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=7.132738
2021-01-12 09:16:05.427 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:16:05.427 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:16:05.427 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:16:05.429 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.905335ms
2021-01-12 09:16:05.430 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.365029
2021-01-12 09:16:16.256 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:16:16.257 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:16:16.257 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:16:16.259 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.875083ms
2021-01-12 09:16:16.259 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.3304229999999997
2021-01-12 09:16:26.672 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:16:26.672 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:16:26.672 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:16:26.674 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.652541ms
2021-01-12 09:16:26.674 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.076647
2021-01-12 09:16:34.601 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:16:34.601 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="refresh timer" table="nat"
2021-01-12 09:16:34.603 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="nat"
2021-01-12 09:16:34.606 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=5.150570999999999
2021-01-12 09:16:34.662 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:16:34.662 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="refresh timer" table="raw"
2021-01-12 09:16:34.664 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="raw"
2021-01-12 09:16:34.666 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=3.494391
2021-01-12 09:16:34.708 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:16:34.708 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="refresh timer" table="mangle"
2021-01-12 09:16:34.709 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="mangle"
2021-01-12 09:16:34.712 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=3.9544970000000004
2021-01-12 09:16:36.752 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:16:36.752 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:16:36.752 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:16:36.754 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.511574ms
2021-01-12 09:16:36.754 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=1.890765
2021-01-12 09:16:44.082 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:16:44.082 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="refresh timer" table="filter"
2021-01-12 09:16:44.088 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="filter"
2021-01-12 09:16:44.094 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=12.087512
2021-01-12 09:16:47.355 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:16:47.355 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:16:47.355 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:16:47.357 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.763349ms
2021-01-12 09:16:47.357 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.264671
2021-01-12 09:16:56.436 [INFO][57] monitor-addresses/startup.go 759: Using autodetected IPv4 address on interface eth1.403: 172.27.40.11/24
2021-01-12 09:16:57.804 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:16:57.804 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:16:57.804 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:16:57.806 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.546954ms
2021-01-12 09:16:57.806 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=1.920082
2021-01-12 09:17:08.108 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:17:08.109 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:17:08.109 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:17:08.110 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.797001ms
2021-01-12 09:17:08.111 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.177929
2021-01-12 09:17:12.599 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:17:12.599 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="nat"
2021-01-12 09:17:12.601 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="nat"
2021-01-12 09:17:12.605 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=5.9804319999999995
2021-01-12 09:17:12.655 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:17:12.655 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="raw"
2021-01-12 09:17:12.657 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="raw"
2021-01-12 09:17:12.661 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=5.773174
2021-01-12 09:17:12.702 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:17:12.702 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="mangle"
2021-01-12 09:17:12.704 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="mangle"
2021-01-12 09:17:12.707 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=4.164161
2021-01-12 09:17:18.620 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:17:18.620 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:17:18.620 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:17:18.621 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.52652ms
2021-01-12 09:17:18.622 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=1.91644
2021-01-12 09:17:22.080 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:17:22.080 [INFO][60] felix/table.go 934: Invalidating dataplane cache ipVersion=0x4 reason="post update" table="filter"
2021-01-12 09:17:22.081 [INFO][60] felix/table.go 596: Loading current iptables state and checking it is correct. ipVersion=0x4 table="filter"
2021-01-12 09:17:22.086 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=5.705118000000001
2021-01-12 09:17:28.897 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:17:28.897 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:17:28.897 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:17:28.899 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.619536ms
2021-01-12 09:17:28.899 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.025199
2021-01-12 09:17:31.856 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:17:31.856 [INFO][60] felix/route_table.go 398: Queueing a resync of routing table. ifaceRegex="^vxlan.calico$" ipVersion=0x4
2021-01-12 09:17:31.856 [INFO][60] felix/route_table.go 398: Queueing a resync of routing table. ifaceRegex="^eth1.403$" ipVersion=0x4
2021-01-12 09:17:31.856 [INFO][60] felix/route_table.go 398: Queueing a resync of routing table. ifaceRegex="^cali.*" ipVersion=0x4
2021-01-12 09:17:31.856 [INFO][60] felix/wireguard.go 534: Queueing a resync of wireguard configuration
2021-01-12 09:17:31.856 [INFO][60] felix/route_table.go 398: Queueing a resync of routing table. ifaceRegex="^wireguard.cali$" ipVersion=0x4
2021-01-12 09:17:31.856 [INFO][60] felix/route_rule.go 172: Queueing a resync of routing rules. ipVersion=4
2021-01-12 09:17:31.864 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=7.746464
2021-01-12 09:17:39.312 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:17:39.312 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:17:39.312 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:17:39.314 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.660841ms
2021-01-12 09:17:39.314 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=1.991269
2021-01-12 09:17:49.349 [INFO][60] felix/int_dataplane.go 1453: Applying dataplane updates
2021-01-12 09:17:49.349 [INFO][60] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2021-01-12 09:17:49.349 [INFO][60] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2021-01-12 09:17:49.351 [INFO][60] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.65357ms
2021-01-12 09:17:49.351 [INFO][60] felix/int_dataplane.go 1467: Finished applying updates to dataplane. msecToApply=2.079501
2021-01-12 09:17:56.446 [INFO][57] monitor-addresses/startup.go 759: Using autodetected IPv4 address on interface eth1.403: 172.27.40.11/24
2021-01-12 09:17:56.541 [INFO][60] felix/usagerep.go 115: Initial delay complete, doing first report
2021-01-12 09:17:56.541 [INFO][60] felix/usagerep.go 205: Reporting cluster usage/checking for deprecation warnings. alpEnabled=false calicoVersion="v3.17.1" clusterGUID="6c70b42e3f974469ac2efe1bfa298127" clusterType="typha,kdd,k8s,operator,bgp" gitRevision="2020-12-10T23:50:58+0000" kubernetesVersion="v1.18.4+k3s1" stats=calc.StatsUpdate{NumHosts:1, NumWorkloadEndpoints:4, NumHostEndpoints:0, NumPolicies:0, NumProfiles:46, NumALPPolicies:0} version="v3.17.1"
2021-01-12 09:17:57.268 [INFO][60] felix/usagerep.go 117: First report done, starting ticker

Status:

NAMESPACE         NAME                                       READY   RESTARTS   STATUS    IP             NODE   AGE
calico-system     calico-kube-controllers-6896fd456b-gfqcv   0/1     0          Running   20.28.116.66   nc1    34m
calico-system     calico-node-vnxhs                          0/1     0          Running   172.17.0.32    nc1    34m
calico-system     calico-typha-76dc5bbdb8-7t2z6              1/1     0          Running   172.17.0.32    nc1    34m
kube-system       coredns-8655855d6-tjn2l                    1/1     0          Running   20.28.116.67   nc1    48m
kube-system       local-path-provisioner-6d59f47c7-vpfwj     1/1     0          Running   20.28.116.68   nc1    48m
kube-system       metrics-server-7566d596c8-nsxk6            1/1     0          Running   20.28.116.65   nc1    48m
tigera-operator   tigera-operator-587f6cb54d-ndpzt           1/1     0          Running   172.17.0.32    nc1    43m

Your Environment

caseydavenport commented 3 years ago

Warning Unhealthy 13m kubelet, nc1 Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "52c26e79c2777f97e4c4b610976f4626a0776793244fe1de009bc9a00bee5982": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"/bin/calico-node\": stat /bin/calico-node: no such file or directory": unknown

This appears to be the issue. I'm not sure why it can't find the binary - clearly it's there, because otherwise we wouldn't get logs.
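
(One way to confirm whether the binary is actually present in the running container, without going through exec — which is the part that is failing — is to look at the container's root filesystem via /proc on the host. This is only a rough sketch, assuming k3s's bundled crictl and a running calico-node container; the grep-based PID extraction is purely illustrative:)

# Find the calico-node container and its host PID
CID=$(k3s crictl ps --name calico-node -q | head -n1)
PID=$(k3s crictl inspect "$CID" | grep -m1 '"pid"' | tr -dc '0-9')

# Inspect the container's rootfs directly from the host
ls -l /proc/"$PID"/root/bin/calico-node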

hickersonj commented 3 years ago

Correct, it just keeps emitting that message and never fully initializes. Any suggestions on how to debug this, which logs to look under, or how to turn on more verbose debugging for Calico? Thanks!

frozenprocess commented 3 years ago

@hickersonj I went through the document and was able to deploy k3s and Calico successfully. Any tips on how to reproduce the issue?

[root@localhost ~]# uname -a
Linux localhost.localdomain 5.0.9-301.fc30.x86_64 #1 SMP Tue Apr 23 23:57:35 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost ~]# cat /etc/*-release | grep PRETTY
PRETTY_NAME="Fedora 30 (Server Edition)"
[root@localhost ~]# kubectl get pods -A
NAMESPACE         NAME                                       READY   STATUS    RESTARTS   AGE
tigera-operator   tigera-operator-657cc89589-299gj           1/1     Running   0          3m54s
calico-system     calico-typha-5c9f9cd666-m6gtz              1/1     Running   0          3m34s
calico-system     calico-node-vrqvp                          1/1     Running   0          3m33s
calico-system     calico-kube-controllers-7c87576bb9-k9bpv   1/1     Running   0          3m32s
kube-system       local-path-provisioner-7c458769fb-h6tw6    1/1     Running   0          4m21s
kube-system       metrics-server-86cbb8457f-tn77q            1/1     Running   0          4m21s
kube-system       coredns-854c77959c-l7tll                   1/1     Running   0          4m21s
hickersonj commented 3 years ago

Thank you for doing that investigation. I have also performed the same steps on my laptop, where everything worked fine. Reproducing this may be a bit hard, since it is an air-gapped system; it is connected via a proxy using registries.yaml:

mirrors:
  docker.io:
    endpoint:
      - "http://172.17.0.1:5000"

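For what it's worth, a quick sanity check that a mirror like this actually serves the Calico images is to query it over the Docker Registry v2 API; this is just a sketch using the mirror address from the config above:

# List the calico/node tags available on the local mirror
curl -s http://172.17.0.1:5000/v2/calico/node/tags/list
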
Also, it is not a full-blown distro. What is interesting is that I'm not seeing any obvious errors other than the missing /bin/calico-node binary.

Some guidance on how to debug this, i.e. how to enable debug messages, would be really helpful.
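
(One possible approach on an operator-managed install is to raise Felix's log level through the FelixConfiguration resource. A sketch, assuming the default instance has been created in the cluster:)

# Raise Felix's screen log level from Info to Debug
kubectl patch felixconfiguration default --type merge \
  --patch '{"spec":{"logSeverityScreen":"Debug"}}'

# Then re-read the pod logs
kubectl logs -n calico-system calico-node-vnxhs -c calico-node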

hickersonj commented 3 years ago

I added additional kernel configs, namely ipip and xt_u32, but the pod is still not ready. Here is the k3s log output, which includes just one cycle of the repeating missing /bin/calico-node message.

There are a couple of error messages that I'm not sure are normal:

W0113 07:23:44.295886    5472 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
I0113 07:23:45.574579    5472 log.go:172] http: TLS handshake error from 172.17.0.1:33626: remote error: tls: bad certificate
E0113 07:23:51.330669    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "33c46eb4cc05844568d3e30c12e8e5bb948d3a4594b7b46d6ff956e76dbfbee2": cannot find a qualified ippool
E0113 07:23:53.359806    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(70 
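
(The "cannot find a qualified ippool" error above suggests Calico IPAM could not match this node to any enabled IP pool, so it may be worth listing the pools the operator created. A sketch against the CRD-backed datastore:)

# Show the Calico IP pools, their CIDRs and selectors
kubectl get ippools.crd.projectcalico.org -o yaml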

Here are the error messages parsed from the k3s log:

E0113 07:18:14.147052    5472 memcache.go:111] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0113 07:18:14.308914    5472 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
E0113 07:18:14.500791    5472 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0113 07:18:17.916698    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "0e3a352f87d6e4961fdf02f07f282364f7c16eb732fc9256a97a3994bc8ee3ea": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:17.916772    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "0e3a352f87d6e4961fdf02f07f282364f7c16eb732fc9256a97a3994bc8ee3ea": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:17.916798    5472 kuberuntime_manager.go:727] createPodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "0e3a352f87d6e4961fdf02f07f282364f7c16eb732fc9256a97a3994bc8ee3ea": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:17.916868    5472 pod_workers.go:191] Error syncing pod adc3ce13-bc34-458e-bd0a-125ec83285c9 ("local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"0e3a352f87d6e4961fdf02f07f282364f7c16eb732fc9256a97a3994bc8ee3ea\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:18:17.919662    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "5c2527c8f3b05ec28dd12ca8c8da36d95e5658f00f7ef71799fcd0c9ddc8837b": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:17.919715    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "5c2527c8f3b05ec28dd12ca8c8da36d95e5658f00f7ef71799fcd0c9ddc8837b": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:17.919744    5472 kuberuntime_manager.go:727] createPodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "5c2527c8f3b05ec28dd12ca8c8da36d95e5658f00f7ef71799fcd0c9ddc8837b": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:17.919809    5472 pod_workers.go:191] Error syncing pod 703c21b5-6c33-42f5-a506-424f97703ef5 ("metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)"), skipping: failed to "CreatePodSandbox" for "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"5c2527c8f3b05ec28dd12ca8c8da36d95e5658f00f7ef71799fcd0c9ddc8837b\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:18:17.927633    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "07c09b3d68a88665496e07b7caf3acbef8e036b7b3397649522fb9be783b044b": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:17.927686    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "07c09b3d68a88665496e07b7caf3acbef8e036b7b3397649522fb9be783b044b": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:17.927712    5472 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "07c09b3d68a88665496e07b7caf3acbef8e036b7b3397649522fb9be783b044b": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:17.927770    5472 pod_workers.go:191] Error syncing pod ed2a4a84-b961-4d97-b67c-6dd8c645699e ("coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)"), skipping: failed to "CreatePodSandbox" for "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"07c09b3d68a88665496e07b7caf3acbef8e036b7b3397649522fb9be783b044b\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:18:29.180683    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "dfcdc32654b557f7c8fe92313da47a5c0fc714ec3824e48aa7744c0fb08fbdf4": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:29.180738    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "dfcdc32654b557f7c8fe92313da47a5c0fc714ec3824e48aa7744c0fb08fbdf4": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:29.180767    5472 kuberuntime_manager.go:727] createPodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "dfcdc32654b557f7c8fe92313da47a5c0fc714ec3824e48aa7744c0fb08fbdf4": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:29.180841    5472 pod_workers.go:191] Error syncing pod adc3ce13-bc34-458e-bd0a-125ec83285c9 ("local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"dfcdc32654b557f7c8fe92313da47a5c0fc714ec3824e48aa7744c0fb08fbdf4\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:18:31.179655    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "7f7c0b662561a88719829b67cbc4c52d6f1e79091e5a0ebbd3b344d71a476494": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:31.179703    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "7f7c0b662561a88719829b67cbc4c52d6f1e79091e5a0ebbd3b344d71a476494": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:31.179722    5472 kuberuntime_manager.go:727] createPodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "7f7c0b662561a88719829b67cbc4c52d6f1e79091e5a0ebbd3b344d71a476494": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:31.179782    5472 pod_workers.go:191] Error syncing pod 703c21b5-6c33-42f5-a506-424f97703ef5 ("metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)"), skipping: failed to "CreatePodSandbox" for "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"7f7c0b662561a88719829b67cbc4c52d6f1e79091e5a0ebbd3b344d71a476494\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:18:33.176623    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "71e823092dc60d7c7efc47d5f09a35281793ff10172721e538147e80fef7148d": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:33.176680    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "71e823092dc60d7c7efc47d5f09a35281793ff10172721e538147e80fef7148d": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:33.176708    5472 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "71e823092dc60d7c7efc47d5f09a35281793ff10172721e538147e80fef7148d": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:33.176769    5472 pod_workers.go:191] Error syncing pod ed2a4a84-b961-4d97-b67c-6dd8c645699e ("coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)"), skipping: failed to "CreatePodSandbox" for "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"71e823092dc60d7c7efc47d5f09a35281793ff10172721e538147e80fef7148d\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:18:44.186425    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "a1d7ae507198f3c3f03c3d0b7a9d56b29258de47be58724ece52a86e0ae776b0": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:44.186479    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "a1d7ae507198f3c3f03c3d0b7a9d56b29258de47be58724ece52a86e0ae776b0": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:44.186506    5472 kuberuntime_manager.go:727] createPodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "a1d7ae507198f3c3f03c3d0b7a9d56b29258de47be58724ece52a86e0ae776b0": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:44.186568    5472 pod_workers.go:191] Error syncing pod adc3ce13-bc34-458e-bd0a-125ec83285c9 ("local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"a1d7ae507198f3c3f03c3d0b7a9d56b29258de47be58724ece52a86e0ae776b0\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:18:44.902152    5472 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0113 07:18:45.246019    5472 request.go:621] Throttling request took 1.04903533s, request: GET:https://127.0.0.1:6444/apis/coordination.k8s.io/v1beta1?timeout=32s
E0113 07:18:46.177609    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "9879af010329af80c46c2ed1d37888214a72d805bf6c5c157a6c29cfc3008047": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:46.177649    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "9879af010329af80c46c2ed1d37888214a72d805bf6c5c157a6c29cfc3008047": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:46.177669    5472 kuberuntime_manager.go:727] createPodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "9879af010329af80c46c2ed1d37888214a72d805bf6c5c157a6c29cfc3008047": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:46.177731    5472 pod_workers.go:191] Error syncing pod 703c21b5-6c33-42f5-a506-424f97703ef5 ("metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)"), skipping: failed to "CreatePodSandbox" for "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"9879af010329af80c46c2ed1d37888214a72d805bf6c5c157a6c29cfc3008047\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:18:47.176651    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "458a9a66e5d33402e1da61399d765da1284922fe56ee4c3ae5ec1fc9a56b2c11": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:47.176700    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "458a9a66e5d33402e1da61399d765da1284922fe56ee4c3ae5ec1fc9a56b2c11": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:47.176721    5472 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "458a9a66e5d33402e1da61399d765da1284922fe56ee4c3ae5ec1fc9a56b2c11": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:47.176771    5472 pod_workers.go:191] Error syncing pod ed2a4a84-b961-4d97-b67c-6dd8c645699e ("coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)"), skipping: failed to "CreatePodSandbox" for "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"458a9a66e5d33402e1da61399d765da1284922fe56ee4c3ae5ec1fc9a56b2c11\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:18:56.178675    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "56655d253d09ed804a726884e7501ab112a036ca34c6720738677014f4bc36cf": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:56.178733    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "56655d253d09ed804a726884e7501ab112a036ca34c6720738677014f4bc36cf": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:56.178765    5472 kuberuntime_manager.go:727] createPodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "56655d253d09ed804a726884e7501ab112a036ca34c6720738677014f4bc36cf": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:56.178838    5472 pod_workers.go:191] Error syncing pod adc3ce13-bc34-458e-bd0a-125ec83285c9 ("local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"56655d253d09ed804a726884e7501ab112a036ca34c6720738677014f4bc36cf\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:18:59.178640    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "49ec7c9b93b3742e47c24585949f2574fe85af9bba5a8236ff8cb7433a3d6c70": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:59.178737    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "49ec7c9b93b3742e47c24585949f2574fe85af9bba5a8236ff8cb7433a3d6c70": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:59.178769    5472 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "49ec7c9b93b3742e47c24585949f2574fe85af9bba5a8236ff8cb7433a3d6c70": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:18:59.178853    5472 pod_workers.go:191] Error syncing pod ed2a4a84-b961-4d97-b67c-6dd8c645699e ("coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)"), skipping: failed to "CreatePodSandbox" for "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"49ec7c9b93b3742e47c24585949f2574fe85af9bba5a8236ff8cb7433a3d6c70\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:19:00.176640    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "69be039139e86ecac2f5f1d7375c506ea8b8b46721953218f254cd0e60948e17": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:00.176688    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "69be039139e86ecac2f5f1d7375c506ea8b8b46721953218f254cd0e60948e17": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:00.176708    5472 kuberuntime_manager.go:727] createPodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "69be039139e86ecac2f5f1d7375c506ea8b8b46721953218f254cd0e60948e17": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:00.176759    5472 pod_workers.go:191] Error syncing pod 703c21b5-6c33-42f5-a506-424f97703ef5 ("metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)"), skipping: failed to "CreatePodSandbox" for "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"69be039139e86ecac2f5f1d7375c506ea8b8b46721953218f254cd0e60948e17\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:19:10.175570    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "4c58d70d052defe22639be765aad76ea578f8d13187bfea2556058bd62508976": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:10.175619    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "4c58d70d052defe22639be765aad76ea578f8d13187bfea2556058bd62508976": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:10.175643    5472 kuberuntime_manager.go:727] createPodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "4c58d70d052defe22639be765aad76ea578f8d13187bfea2556058bd62508976": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:10.175695    5472 pod_workers.go:191] Error syncing pod adc3ce13-bc34-458e-bd0a-125ec83285c9 ("local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"4c58d70d052defe22639be765aad76ea578f8d13187bfea2556058bd62508976\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:19:11.177678    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "16f4ef9c85cacd9c4c0cb96308a958b957a00cb81e47887b9ec335c383dfde9d": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:11.177733    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "16f4ef9c85cacd9c4c0cb96308a958b957a00cb81e47887b9ec335c383dfde9d": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:11.177759    5472 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "16f4ef9c85cacd9c4c0cb96308a958b957a00cb81e47887b9ec335c383dfde9d": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:11.177822    5472 pod_workers.go:191] Error syncing pod ed2a4a84-b961-4d97-b67c-6dd8c645699e ("coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)"), skipping: failed to "CreatePodSandbox" for "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"16f4ef9c85cacd9c4c0cb96308a958b957a00cb81e47887b9ec335c383dfde9d\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:19:13.177623    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "b017415f82c747f5d450725f6e77cfbbdbd462a02b4d02a2943221ec4c27a0b4": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:13.177679    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "b017415f82c747f5d450725f6e77cfbbdbd462a02b4d02a2943221ec4c27a0b4": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:13.177702    5472 kuberuntime_manager.go:727] createPodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "b017415f82c747f5d450725f6e77cfbbdbd462a02b4d02a2943221ec4c27a0b4": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:13.177753    5472 pod_workers.go:191] Error syncing pod 703c21b5-6c33-42f5-a506-424f97703ef5 ("metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)"), skipping: failed to "CreatePodSandbox" for "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"b017415f82c747f5d450725f6e77cfbbdbd462a02b4d02a2943221ec4c27a0b4\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:19:14.309151    5472 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
E0113 07:19:15.303562    5472 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0113 07:19:17.547553    5472 request.go:621] Throttling request took 1.049113712s, request: GET:https://127.0.0.1:6444/apis/networking.k8s.io/v1?timeout=32s
E0113 07:19:22.179603    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "9dfb4e1e983af00c0362a4409b8210bc91bf18d7f9545bd700eb6cdeba1f1473": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:22.179676    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "9dfb4e1e983af00c0362a4409b8210bc91bf18d7f9545bd700eb6cdeba1f1473": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:22.179697    5472 kuberuntime_manager.go:727] createPodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "9dfb4e1e983af00c0362a4409b8210bc91bf18d7f9545bd700eb6cdeba1f1473": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:22.179750    5472 pod_workers.go:191] Error syncing pod adc3ce13-bc34-458e-bd0a-125ec83285c9 ("local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"9dfb4e1e983af00c0362a4409b8210bc91bf18d7f9545bd700eb6cdeba1f1473\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:19:26.179669    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "8562b57ffcc033e86edaaf254f6e91b8718f93ee411b5cd4985f14b0fcae4d91": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:26.179736    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "8562b57ffcc033e86edaaf254f6e91b8718f93ee411b5cd4985f14b0fcae4d91": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:26.179755    5472 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "8562b57ffcc033e86edaaf254f6e91b8718f93ee411b5cd4985f14b0fcae4d91": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:26.179806    5472 pod_workers.go:191] Error syncing pod ed2a4a84-b961-4d97-b67c-6dd8c645699e ("coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)"), skipping: failed to "CreatePodSandbox" for "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"8562b57ffcc033e86edaaf254f6e91b8718f93ee411b5cd4985f14b0fcae4d91\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:19:28.178638    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "c7f84a75bbce84687d622c7a1d2e47d8aeeadf50ddf4c27c39bf272c2be25d8d": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:28.178697    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "c7f84a75bbce84687d622c7a1d2e47d8aeeadf50ddf4c27c39bf272c2be25d8d": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:28.178747    5472 kuberuntime_manager.go:727] createPodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "c7f84a75bbce84687d622c7a1d2e47d8aeeadf50ddf4c27c39bf272c2be25d8d": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:28.178840    5472 pod_workers.go:191] Error syncing pod 703c21b5-6c33-42f5-a506-424f97703ef5 ("metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)"), skipping: failed to "CreatePodSandbox" for "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"c7f84a75bbce84687d622c7a1d2e47d8aeeadf50ddf4c27c39bf272c2be25d8d\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:19:36.177635    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "55e3848ccacff668dbf7e4458ff6b8a3b1f5dbf85b361d160cd075c5ee0314fd": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:36.177680    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "55e3848ccacff668dbf7e4458ff6b8a3b1f5dbf85b361d160cd075c5ee0314fd": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:36.177707    5472 kuberuntime_manager.go:727] createPodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "55e3848ccacff668dbf7e4458ff6b8a3b1f5dbf85b361d160cd075c5ee0314fd": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:36.177764    5472 pod_workers.go:191] Error syncing pod adc3ce13-bc34-458e-bd0a-125ec83285c9 ("local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"55e3848ccacff668dbf7e4458ff6b8a3b1f5dbf85b361d160cd075c5ee0314fd\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:19:37.176707    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "8a9b4f3b89e600083c83abb5e7ff57cd23d8b768f7acbb3d1563d9a72228392b": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:37.176765    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "8a9b4f3b89e600083c83abb5e7ff57cd23d8b768f7acbb3d1563d9a72228392b": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:37.176789    5472 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "8a9b4f3b89e600083c83abb5e7ff57cd23d8b768f7acbb3d1563d9a72228392b": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:37.176850    5472 pod_workers.go:191] Error syncing pod ed2a4a84-b961-4d97-b67c-6dd8c645699e ("coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)"), skipping: failed to "CreatePodSandbox" for "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"8a9b4f3b89e600083c83abb5e7ff57cd23d8b768f7acbb3d1563d9a72228392b\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:19:41.176636    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6525d76a1166e11cefab85e19812458b6b40611e13220421f9f3763caaa7dc07": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:41.176687    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6525d76a1166e11cefab85e19812458b6b40611e13220421f9f3763caaa7dc07": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:41.176714    5472 kuberuntime_manager.go:727] createPodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6525d76a1166e11cefab85e19812458b6b40611e13220421f9f3763caaa7dc07": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:41.176768    5472 pod_workers.go:191] Error syncing pod 703c21b5-6c33-42f5-a506-424f97703ef5 ("metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)"), skipping: failed to "CreatePodSandbox" for "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"6525d76a1166e11cefab85e19812458b6b40611e13220421f9f3763caaa7dc07\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:19:45.704836    5472 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0113 07:19:48.178670    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "391277c2888510aa2810a31230d1fbdd25d2276a7128b6347c6294ed4d353961": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:48.178710    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "391277c2888510aa2810a31230d1fbdd25d2276a7128b6347c6294ed4d353961": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:48.178729    5472 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "391277c2888510aa2810a31230d1fbdd25d2276a7128b6347c6294ed4d353961": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:48.178776    5472 pod_workers.go:191] Error syncing pod ed2a4a84-b961-4d97-b67c-6dd8c645699e ("coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)"), skipping: failed to "CreatePodSandbox" for "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"391277c2888510aa2810a31230d1fbdd25d2276a7128b6347c6294ed4d353961\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
I0113 07:19:49.848972    5472 request.go:621] Throttling request took 1.04912862s, request: GET:https://127.0.0.1:6444/apis/admissionregistration.k8s.io/v1?timeout=32s
E0113 07:19:50.176633    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "0d36f5779d3a32ba45a76ddefb88797ed256df6e2b9e9418226e6ed62a156bac": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:50.176679    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "0d36f5779d3a32ba45a76ddefb88797ed256df6e2b9e9418226e6ed62a156bac": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:50.176705    5472 kuberuntime_manager.go:727] createPodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "0d36f5779d3a32ba45a76ddefb88797ed256df6e2b9e9418226e6ed62a156bac": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:50.176759    5472 pod_workers.go:191] Error syncing pod adc3ce13-bc34-458e-bd0a-125ec83285c9 ("local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"0d36f5779d3a32ba45a76ddefb88797ed256df6e2b9e9418226e6ed62a156bac\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:19:54.177644    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "961f6f3e45627287c1f6d26a22185e33cddf2965c50994bc080b0153405d6e81": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:54.177695    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "961f6f3e45627287c1f6d26a22185e33cddf2965c50994bc080b0153405d6e81": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:54.177720    5472 kuberuntime_manager.go:727] createPodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "961f6f3e45627287c1f6d26a22185e33cddf2965c50994bc080b0153405d6e81": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:19:54.177774    5472 pod_workers.go:191] Error syncing pod 703c21b5-6c33-42f5-a506-424f97703ef5 ("metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)"), skipping: failed to "CreatePodSandbox" for "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"961f6f3e45627287c1f6d26a22185e33cddf2965c50994bc080b0153405d6e81\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:20:01.178660    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "22269eae9d3811ad83302fd9be0c70a2a0e7e479d42f2a0e9e6e485c4ff22bd3": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:20:01.178722    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "22269eae9d3811ad83302fd9be0c70a2a0e7e479d42f2a0e9e6e485c4ff22bd3": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:20:01.178754    5472 kuberuntime_manager.go:727] createPodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "22269eae9d3811ad83302fd9be0c70a2a0e7e479d42f2a0e9e6e485c4ff22bd3": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:20:01.178826    5472 pod_workers.go:191] Error syncing pod adc3ce13-bc34-458e-bd0a-125ec83285c9 ("local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"22269eae9d3811ad83302fd9be0c70a2a0e7e479d42f2a0e9e6e485c4ff22bd3\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
[... the same RunPodSandbox -> CreatePodSandbox -> pod_workers "failed to find plugin \"loopback\" in path [/opt/cni/bin]" sequence repeats every 5-10s for coredns-8655855d6-q6mtc, local-path-provisioner-6d59f47c7-mv8fz, and metrics-server-7566d596c8-jxcn5; further repeats are elided below ...]
E0113 07:20:16.106364    5472 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0113 07:20:22.150624    5472 request.go:621] Throttling request took 1.049159086s, request: GET:https://127.0.0.1:6444/apis/autoscaling/v2beta1?timeout=32s
I0113 07:21:00.617720    5472 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"tigera-operator", Name:"tigera-operator", UID:"080ab72b-387e-460e-aca3-79afcbeaa02d", APIVersion:"apps/v1", ResourceVersion:"509", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set tigera-operator-587f6cb54d to 1
I0113 07:21:00.628857    5472 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"tigera-operator", Name:"tigera-operator-587f6cb54d", UID:"e44f93ce-243c-4d3e-bf33-f23d70a5e4ee", APIVersion:"apps/v1", ResourceVersion:"510", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: tigera-operator-587f6cb54d-6gtdf
I0113 07:21:00.752476    5472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "var-lib-calico" (UniqueName: "kubernetes.io/host-path/670a851c-ec04-43d8-b967-e4b58df02c2e-var-lib-calico") pod "tigera-operator-587f6cb54d-6gtdf" (UID: "670a851c-ec04-43d8-b967-e4b58df02c2e")
I0113 07:21:00.752522    5472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tigera-operator-token-nrtgz" (UniqueName: "kubernetes.io/secret/670a851c-ec04-43d8-b967-e4b58df02c2e-tigera-operator-token-nrtgz") pod "tigera-operator-587f6cb54d-6gtdf" (UID: "670a851c-ec04-43d8-b967-e4b58df02c2e")
E0113 07:21:07.254001    5472 customresource_handler.go:652] error building openapi models for installations.operator.tigera.io: ERROR $root.definitions.io.tigera.operator.v1.Installation.properties.spec.properties.componentResources.items.<array>.properties.resourceRequirements.properties.limits.additionalProperties.schema has invalid property: anyOf
ERROR $root.definitions.io.tigera.operator.v1.Installation.properties.spec.properties.componentResources.items.<array>.properties.resourceRequirements.properties.requests.additionalProperties.schema has invalid property: anyOf
ERROR $root.definitions.io.tigera.operator.v1.Installation.properties.spec.properties.nodeUpdateStrategy.properties.rollingUpdate.properties.maxUnavailable has invalid property: anyOf
ERROR $root.definitions.io.tigera.operator.v1.Installation.properties.status.properties.computed.properties.componentResources.items.<array>.properties.resourceRequirements.properties.limits.additionalProperties.schema has invalid property: anyOf
ERROR $root.definitions.io.tigera.operator.v1.Installation.properties.status.properties.computed.properties.componentResources.items.<array>.properties.resourceRequirements.properties.requests.additionalProperties.schema has invalid property: anyOf
ERROR $root.definitions.io.tigera.operator.v1.Installation.properties.status.properties.computed.properties.nodeUpdateStrategy.properties.rollingUpdate.properties.maxUnavailable has invalid property: anyOf
E0113 07:21:14.309477    5472 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
E0113 07:21:17.084263    5472 customresource_handler.go:652] error building openapi models for networkpolicies.crd.projectcalico.org: ERROR $root.definitions.org.projectcalico.crd.v1.NetworkPolicy.properties.spec.properties.egress.items.<array>.properties.destination.properties.notPorts.items.<array> has invalid property: anyOf
[... eleven more identical "has invalid property: anyOf" lines for the remaining ports/notPorts and protocol/notProtocol fields of NetworkPolicy ingress and egress rules ...]
E0113 07:21:27.716984    5472 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0113 07:21:28.255725    5472 memcache.go:111] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
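The `metrics.k8s.io/v1beta1` noise above looks like a downstream symptom rather than a second bug: metrics-server is one of the pods stuck in `CreatePodSandbox`, so the aggregated API it backs has no ready endpoints and the apiserver answers 503. A quick way to confirm (the `k8s-app=metrics-server` label is an assumption; match on the pod name if your labels differ):

```
# The APIService should report Available=False while metrics-server has no sandbox.
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
```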
E0113 07:21:30.360458    5472 customresource_handler.go:652] error building openapi models for felixconfigurations.crd.projectcalico.org: ERROR $root.definitions.org.projectcalico.crd.v1.FelixConfiguration.properties.spec.properties.kubeNodePortRanges.items.<array> has invalid property: anyOf
ERROR $root.definitions.org.projectcalico.crd.v1.FelixConfiguration.properties.spec.properties.natPortRange has invalid property: anyOf
E0113 07:21:30.636298    5472 customresource_handler.go:652] error building openapi models for hostendpoints.crd.projectcalico.org: ERROR $root.definitions.org.projectcalico.crd.v1.HostEndpoint.properties.spec.properties.ports.items.<array>.properties.protocol has invalid property: anyOf
E0113 07:21:30.681153    5472 customresource_handler.go:652] error building openapi models for globalnetworkpolicies.crd.projectcalico.org: ERROR $root.definitions.org.projectcalico.crd.v1.GlobalNetworkPolicy.properties.spec.properties.egress.items.<array>.properties.destination.properties.notPorts.items.<array> has invalid property: anyOf
[... eleven more identical "has invalid property: anyOf" lines for GlobalNetworkPolicy, mirroring the NetworkPolicy block above ...]
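The repeated `has invalid property: anyOf` errors look alarming but are, as far as I can tell, harmless log noise: the apiserver is building OpenAPI models for CRDs whose structural schemas use int-or-string fields (ports, protocols, `maxUnavailable`), which publish as `anyOf`, a construct the older kube-openapi code in this apiserver cannot model. A hedged sanity check that the flagged fields really are int-or-string (CRD name taken from the log above):

```
# Each x-kubernetes-int-or-string marker in the schema publishes as an "anyOf"
# that the OpenAPI model builder then complains about.
kubectl get crd networkpolicies.crd.projectcalico.org -o yaml \
  | grep -c 'x-kubernetes-int-or-string'
```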
[... the loopback CreatePodSandbox error loop continues unchanged for coredns, local-path-provisioner, and metrics-server through 07:22:17, the end of this capture ...]
E0113 07:22:18.112442    5472 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0113 07:22:25.176678    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "c8968d8c3bffe9134ad01b3314662d25eb3dff8adbe5a70722c144dac266d3f9": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:25.176733    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "c8968d8c3bffe9134ad01b3314662d25eb3dff8adbe5a70722c144dac266d3f9": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:25.176753    5472 kuberuntime_manager.go:727] createPodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "c8968d8c3bffe9134ad01b3314662d25eb3dff8adbe5a70722c144dac266d3f9": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:25.176803    5472 pod_workers.go:191] Error syncing pod 703c21b5-6c33-42f5-a506-424f97703ef5 ("metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)"), skipping: failed to "CreatePodSandbox" for "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"c8968d8c3bffe9134ad01b3314662d25eb3dff8adbe5a70722c144dac266d3f9\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:22:27.178629    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "ab4301873f7b56c08d6bff686ff2025494febc916e0ea97132e0c27ddfdbea9e": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:27.178693    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "ab4301873f7b56c08d6bff686ff2025494febc916e0ea97132e0c27ddfdbea9e": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:27.178724    5472 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "ab4301873f7b56c08d6bff686ff2025494febc916e0ea97132e0c27ddfdbea9e": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:27.178801    5472 pod_workers.go:191] Error syncing pod ed2a4a84-b961-4d97-b67c-6dd8c645699e ("coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)"), skipping: failed to "CreatePodSandbox" for "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"ab4301873f7b56c08d6bff686ff2025494febc916e0ea97132e0c27ddfdbea9e\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:22:31.176612    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "ac8bad2e3aa02fea9eb4869def4f6d26b9ec61f79400643314eaa76b25b38ba1": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:31.176666    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "ac8bad2e3aa02fea9eb4869def4f6d26b9ec61f79400643314eaa76b25b38ba1": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:31.176687    5472 kuberuntime_manager.go:727] createPodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "ac8bad2e3aa02fea9eb4869def4f6d26b9ec61f79400643314eaa76b25b38ba1": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:31.176751    5472 pod_workers.go:191] Error syncing pod adc3ce13-bc34-458e-bd0a-125ec83285c9 ("local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"ac8bad2e3aa02fea9eb4869def4f6d26b9ec61f79400643314eaa76b25b38ba1\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
I0113 07:22:35.060798    5472 request.go:621] Throttling request took 1.048748588s, request: GET:https://127.0.0.1:6444/apis/k3s.cattle.io/v1?timeout=32s
E0113 07:22:36.177633    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "bced2bf025a4b46e8b8dc8255ee7c5a52838972fa2f0c0edeedcda2d30ca5135": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:36.177682    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "bced2bf025a4b46e8b8dc8255ee7c5a52838972fa2f0c0edeedcda2d30ca5135": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:36.177701    5472 kuberuntime_manager.go:727] createPodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "bced2bf025a4b46e8b8dc8255ee7c5a52838972fa2f0c0edeedcda2d30ca5135": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:36.177754    5472 pod_workers.go:191] Error syncing pod 703c21b5-6c33-42f5-a506-424f97703ef5 ("metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)"), skipping: failed to "CreatePodSandbox" for "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"bced2bf025a4b46e8b8dc8255ee7c5a52838972fa2f0c0edeedcda2d30ca5135\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:22:40.176675    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "a0dfac7e05057f2a2b1fcb07f323441639299d80c9a65ab4fda18be2f9ab9789": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:40.176736    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "a0dfac7e05057f2a2b1fcb07f323441639299d80c9a65ab4fda18be2f9ab9789": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:40.176766    5472 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "a0dfac7e05057f2a2b1fcb07f323441639299d80c9a65ab4fda18be2f9ab9789": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:40.176834    5472 pod_workers.go:191] Error syncing pod ed2a4a84-b961-4d97-b67c-6dd8c645699e ("coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)"), skipping: failed to "CreatePodSandbox" for "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"a0dfac7e05057f2a2b1fcb07f323441639299d80c9a65ab4fda18be2f9ab9789\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:22:42.175623    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "303ded2e011837cb7c4c65108e01900c2bd34a21cdab4b0a5a5ef405b8d98a7a": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:42.175683    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "303ded2e011837cb7c4c65108e01900c2bd34a21cdab4b0a5a5ef405b8d98a7a": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:42.175702    5472 kuberuntime_manager.go:727] createPodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "303ded2e011837cb7c4c65108e01900c2bd34a21cdab4b0a5a5ef405b8d98a7a": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:42.175762    5472 pod_workers.go:191] Error syncing pod adc3ce13-bc34-458e-bd0a-125ec83285c9 ("local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"303ded2e011837cb7c4c65108e01900c2bd34a21cdab4b0a5a5ef405b8d98a7a\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:22:48.613816    5472 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0113 07:22:50.176611    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "4c411d567b3138f19c6233dd38d802f65b97148343c6dcf7045ed4b00cc80616": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:50.176671    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "4c411d567b3138f19c6233dd38d802f65b97148343c6dcf7045ed4b00cc80616": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:50.176691    5472 kuberuntime_manager.go:727] createPodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "4c411d567b3138f19c6233dd38d802f65b97148343c6dcf7045ed4b00cc80616": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:50.176754    5472 pod_workers.go:191] Error syncing pod 703c21b5-6c33-42f5-a506-424f97703ef5 ("metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)"), skipping: failed to "CreatePodSandbox" for "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"4c411d567b3138f19c6233dd38d802f65b97148343c6dcf7045ed4b00cc80616\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:22:54.176624    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "c7041053e19fb9ad68deab2b9a07809627e6c2122f24f8f9ceb5ce84fc66ccce": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:54.176675    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "c7041053e19fb9ad68deab2b9a07809627e6c2122f24f8f9ceb5ce84fc66ccce": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:54.176698    5472 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "c7041053e19fb9ad68deab2b9a07809627e6c2122f24f8f9ceb5ce84fc66ccce": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:54.176775    5472 pod_workers.go:191] Error syncing pod ed2a4a84-b961-4d97-b67c-6dd8c645699e ("coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)"), skipping: failed to "CreatePodSandbox" for "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"c7041053e19fb9ad68deab2b9a07809627e6c2122f24f8f9ceb5ce84fc66ccce\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:22:56.701153    5472 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
E0113 07:22:57.178611    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "79ca344ecd4c291df3a946e41d769827c4c4f3d052ea25f6ecde677e032c4e1b": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:57.178661    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "79ca344ecd4c291df3a946e41d769827c4c4f3d052ea25f6ecde677e032c4e1b": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:57.178682    5472 kuberuntime_manager.go:727] createPodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "79ca344ecd4c291df3a946e41d769827c4c4f3d052ea25f6ecde677e032c4e1b": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:22:57.178731    5472 pod_workers.go:191] Error syncing pod adc3ce13-bc34-458e-bd0a-125ec83285c9 ("local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"79ca344ecd4c291df3a946e41d769827c4c4f3d052ea25f6ecde677e032c4e1b\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:23:01.178670    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6eab32f44715bb9206375c8106d4cfd4b9149f3e2287918cabbe3370ba244627": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:01.178722    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6eab32f44715bb9206375c8106d4cfd4b9149f3e2287918cabbe3370ba244627": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:01.178741    5472 kuberuntime_manager.go:727] createPodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6eab32f44715bb9206375c8106d4cfd4b9149f3e2287918cabbe3370ba244627": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:01.178811    5472 pod_workers.go:191] Error syncing pod 703c21b5-6c33-42f5-a506-424f97703ef5 ("metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)"), skipping: failed to "CreatePodSandbox" for "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"6eab32f44715bb9206375c8106d4cfd4b9149f3e2287918cabbe3370ba244627\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
I0113 07:23:07.562262    5472 request.go:621] Throttling request took 1.048905718s, request: GET:https://127.0.0.1:6444/apis/events.k8s.io/v1beta1?timeout=32s
E0113 07:23:08.177643    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "605f93dd237824f1cdc521856751d9c12312a8436cc49de7f44859f5b570c319": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:08.177703    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "605f93dd237824f1cdc521856751d9c12312a8436cc49de7f44859f5b570c319": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:08.177731    5472 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "605f93dd237824f1cdc521856751d9c12312a8436cc49de7f44859f5b570c319": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:08.177795    5472 pod_workers.go:191] Error syncing pod ed2a4a84-b961-4d97-b67c-6dd8c645699e ("coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)"), skipping: failed to "CreatePodSandbox" for "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"605f93dd237824f1cdc521856751d9c12312a8436cc49de7f44859f5b570c319\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:23:10.180654    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "39d113a7758cb33d4c4043fd2ece0e8e2f9feea3373e963fd95213c2161e6710": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:10.180708    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "39d113a7758cb33d4c4043fd2ece0e8e2f9feea3373e963fd95213c2161e6710": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:10.180734    5472 kuberuntime_manager.go:727] createPodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "39d113a7758cb33d4c4043fd2ece0e8e2f9feea3373e963fd95213c2161e6710": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:10.180812    5472 pod_workers.go:191] Error syncing pod adc3ce13-bc34-458e-bd0a-125ec83285c9 ("local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"39d113a7758cb33d4c4043fd2ece0e8e2f9feea3373e963fd95213c2161e6710\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:23:15.179666    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "84e74b009d71bb1b698ff50ae6b372d29fc570c53792198f9832ead7b058607d": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:15.179711    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "84e74b009d71bb1b698ff50ae6b372d29fc570c53792198f9832ead7b058607d": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:15.179730    5472 kuberuntime_manager.go:727] createPodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "84e74b009d71bb1b698ff50ae6b372d29fc570c53792198f9832ead7b058607d": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:15.179784    5472 pod_workers.go:191] Error syncing pod 703c21b5-6c33-42f5-a506-424f97703ef5 ("metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)"), skipping: failed to "CreatePodSandbox" for "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"84e74b009d71bb1b698ff50ae6b372d29fc570c53792198f9832ead7b058607d\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:23:19.115136    5472 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0113 07:23:21.175658    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "716e3e19f30922116da801a4edee5b459759b988f5dff0b07ff941fc12d3e69f": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:21.175712    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "716e3e19f30922116da801a4edee5b459759b988f5dff0b07ff941fc12d3e69f": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:21.175737    5472 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "716e3e19f30922116da801a4edee5b459759b988f5dff0b07ff941fc12d3e69f": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:21.175785    5472 pod_workers.go:191] Error syncing pod ed2a4a84-b961-4d97-b67c-6dd8c645699e ("coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)"), skipping: failed to "CreatePodSandbox" for "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"716e3e19f30922116da801a4edee5b459759b988f5dff0b07ff941fc12d3e69f\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:23:22.176703    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "8a1debf45a1d15b182b2c8bdcc02280afd1b7139191110cb6415ccbb7946e239": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:22.176742    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "8a1debf45a1d15b182b2c8bdcc02280afd1b7139191110cb6415ccbb7946e239": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:22.176761    5472 kuberuntime_manager.go:727] createPodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "8a1debf45a1d15b182b2c8bdcc02280afd1b7139191110cb6415ccbb7946e239": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:22.176808    5472 pod_workers.go:191] Error syncing pod adc3ce13-bc34-458e-bd0a-125ec83285c9 ("local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"8a1debf45a1d15b182b2c8bdcc02280afd1b7139191110cb6415ccbb7946e239\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:23:26.176669    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "d937e59aa41af282d2e880a7d9fc1243a74105625b4014d5de0cfa79f489a441": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:26.176765    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "d937e59aa41af282d2e880a7d9fc1243a74105625b4014d5de0cfa79f489a441": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:26.176792    5472 kuberuntime_manager.go:727] createPodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "d937e59aa41af282d2e880a7d9fc1243a74105625b4014d5de0cfa79f489a441": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:26.176886    5472 pod_workers.go:191] Error syncing pod 703c21b5-6c33-42f5-a506-424f97703ef5 ("metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)"), skipping: failed to "CreatePodSandbox" for "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"d937e59aa41af282d2e880a7d9fc1243a74105625b4014d5de0cfa79f489a441\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:23:33.177662    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "eef917fd7449669c330f39273410acecce3a7ea37fa5bc4b4938b686da213df1": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:33.177709    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "eef917fd7449669c330f39273410acecce3a7ea37fa5bc4b4938b686da213df1": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:33.177730    5472 kuberuntime_manager.go:727] createPodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "eef917fd7449669c330f39273410acecce3a7ea37fa5bc4b4938b686da213df1": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:33.177784    5472 pod_workers.go:191] Error syncing pod adc3ce13-bc34-458e-bd0a-125ec83285c9 ("local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"eef917fd7449669c330f39273410acecce3a7ea37fa5bc4b4938b686da213df1\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:23:36.175625    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "0caa6dd2f601e9890b8bcccef3c927f0df6ea5f984320c23da718d9e5d63350b": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:36.175669    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "0caa6dd2f601e9890b8bcccef3c927f0df6ea5f984320c23da718d9e5d63350b": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:36.175689    5472 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "0caa6dd2f601e9890b8bcccef3c927f0df6ea5f984320c23da718d9e5d63350b": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:36.175745    5472 pod_workers.go:191] Error syncing pod ed2a4a84-b961-4d97-b67c-6dd8c645699e ("coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)"), skipping: failed to "CreatePodSandbox" for "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"0caa6dd2f601e9890b8bcccef3c927f0df6ea5f984320c23da718d9e5d63350b\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
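Note: the repeating `failed to find plugin "loopback" in path [/opt/cni/bin]` errors mean the kubelet found no loopback CNI binary on the node at that point, so no pod sandbox could be created; the metrics.k8s.io / OpenAPI 503 errors are downstream of the same problem, since metrics-server itself has no network. Judging by the timestamps, this looks like startup noise rather than a separate failure: Calico's install-cni init container (which mounts /host/opt/cni/bin and normally drops the standard plugins, loopback included, into it) had not yet run, and the events below show the calico-node DaemonSet only being created at 07:23:37, right where the sandbox errors in this excerpt end. A quick way to check on the node, assuming the default paths shown in the log (a diagnostic sketch, not an official procedure):

# Is the loopback plugin (and the rest of the CNI plugins) on disk?
$ ls -l /opt/cni/bin
# Is there a CNI network config for the kubelet to pick up?
$ ls -l /etc/cni/net.d
# If loopback is still missing after calico-node has started, the standard
# plugins can be restored from the containernetworking/plugins release
# tarball (the version below is illustrative; pick one matching your cluster):
$ curl -L https://github.com/containernetworking/plugins/releases/download/v0.9.0/cni-plugins-linux-amd64-v0.9.0.tgz \
    | sudo tar -xzf - -C /opt/cni/bin ./loopback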
I0113 07:23:37.494456    5472 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"calico-system", Name:"calico-typha", UID:"5501fdd6-e71f-4c51-b7be-3663a54875cc", APIVersion:"apps/v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set calico-typha-d7c95d598 to 1
I0113 07:23:37.820917    5472 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"calico-system", Name:"calico-node", UID:"a94d3eb3-331e-4d67-a68c-cc61115efa82", APIVersion:"apps/v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: calico-node-dqxd5
I0113 07:23:37.823815    5472 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"calico-system", Name:"calico-kube-controllers", UID:"90ffdec9-9f59-480f-b3c7-9f3b8c4d0485", APIVersion:"apps/v1", ResourceVersion:"719", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set calico-kube-controllers-6896fd456b to 1
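Note: these three events mark the Tigera operator rolling out calico-typha, the calico-node DaemonSet, and calico-kube-controllers at 07:23:37. To watch that rollout converge (standard kubectl, nothing issue-specific):

# Watch the Calico pods come up on each node
$ kubectl get pods -n calico-system -o wide -w
# Block until the DaemonSet reports all scheduled pods ready
$ kubectl rollout status ds/calico-node -n calico-system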
E0113 07:23:37.863364    5472 daemon_controller.go:292] calico-system/calico-node failed with : error storing status for daemon set &v1.DaemonSet{ObjectMeta:v1.ObjectMeta{Name:"calico-node", Namespace:"calico-system", UID:"a94d3eb3-331e-4d67-a68c-cc61115efa82", ResourceVersion:"714", Generation:1, OwnerReferences:[]v1.OwnerReference{{APIVersion:"operator.tigera.io/v1", Kind:"Installation", Name:"default", ...}}, ...}, Spec:v1.DaemonSetSpec{... full operator-rendered pod template elided for brevity; notable values: images docker.io/calico/node, docker.io/calico/cni and docker.io/calico/pod2daemon-flexvol at v3.17.1, CALICO_IPV4POOL_CIDR=20.28.0.0/16, CALICO_IPV4POOL_VXLAN=CrossSubnet, CALICO_NETWORKING_BACKEND=bird, IP_AUTODETECTION_METHOD=first-found, CNI_CONF_NAME=10-calico.conflist, CNI_NET_DIR=/etc/cni/net.d, install-cni mounting /host/opt/cni/bin ...}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberReady:0, ...}}: Operation cannot be fulfilled on daemonsets.apps "calico-node": the object has been modified; please apply your changes to the latest version and try again
E0113 07:23:37.889977    5472 daemon_controller.go:292] calico-system/calico-node failed with : error storing status for daemon set "calico-node" [... near-identical DaemonSet spec dump omitted ...]: Operation cannot be fulfilled on daemonsets.apps "calico-node": the object has been modified; please apply your changes to the latest version and try again
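These two `Operation cannot be fulfilled` errors are the apiserver's standard optimistic-concurrency conflicts rather than the readiness failure itself: the k3s daemon-set controller and the Tigera operator both write to the same `calico-node` DaemonSet, so whichever writer holds a stale ResourceVersion gets a 409 and retries with a fresh copy. A minimal client-go sketch of that conflict-retry pattern (illustrative only; the namespace, object names, and example mutation are assumptions, not the controller's actual code):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes this runs inside the cluster
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read the latest copy so ResourceVersion is fresh on every attempt.
		ds, err := cs.AppsV1().DaemonSets("calico-system").
			Get(context.TODO(), "calico-node", metav1.GetOptions{})
		if err != nil {
			return err
		}
		ds.Status.ObservedGeneration = ds.Generation // example status mutation
		_, err = cs.AppsV1().DaemonSets("calico-system").
			UpdateStatus(context.TODO(), ds, metav1.UpdateOptions{})
		return err // a 409 here triggers another retry iteration
	})
	fmt.Println("status update result:", err)
}
```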
I0113 07:23:37.968293    5472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "typha-ca" (UniqueName: "kubernetes.io/configmap/406982aa-db79-4205-927f-429eecce9807-typha-ca") pod "calico-node-dqxd5" (UID: "406982aa-db79-4205-927f-429eecce9807")
I0113 07:23:37.968342    5472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-bin-dir" (UniqueName: "kubernetes.io/host-path/406982aa-db79-4205-927f-429eecce9807-cni-bin-dir") pod "calico-node-dqxd5" (UID: "406982aa-db79-4205-927f-429eecce9807")
I0113 07:23:37.968492    5472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "policysync" (UniqueName: "kubernetes.io/host-path/406982aa-db79-4205-927f-429eecce9807-policysync") pod "calico-node-dqxd5" (UID: "406982aa-db79-4205-927f-429eecce9807")
I0113 07:23:37.968539    5472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-net-dir" (UniqueName: "kubernetes.io/host-path/406982aa-db79-4205-927f-429eecce9807-cni-net-dir") pod "calico-node-dqxd5" (UID: "406982aa-db79-4205-927f-429eecce9807")
I0113 07:23:37.968578    5472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "felix-certs" (UniqueName: "kubernetes.io/secret/406982aa-db79-4205-927f-429eecce9807-felix-certs") pod "calico-node-dqxd5" (UID: "406982aa-db79-4205-927f-429eecce9807")
I0113 07:23:37.968627    5472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-log-dir" (UniqueName: "kubernetes.io/host-path/406982aa-db79-4205-927f-429eecce9807-cni-log-dir") pod "calico-node-dqxd5" (UID: "406982aa-db79-4205-927f-429eecce9807")
I0113 07:23:37.968747    5472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-node-token-2zpfk" (UniqueName: "kubernetes.io/secret/406982aa-db79-4205-927f-429eecce9807-calico-node-token-2zpfk") pod "calico-node-dqxd5" (UID: "406982aa-db79-4205-927f-429eecce9807")
I0113 07:23:37.968871    5472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/406982aa-db79-4205-927f-429eecce9807-lib-modules") pod "calico-node-dqxd5" (UID: "406982aa-db79-4205-927f-429eecce9807")
I0113 07:23:37.968918    5472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "var-run-calico" (UniqueName: "kubernetes.io/host-path/406982aa-db79-4205-927f-429eecce9807-var-run-calico") pod "calico-node-dqxd5" (UID: "406982aa-db79-4205-927f-429eecce9807")
I0113 07:23:37.968959    5472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "var-lib-calico" (UniqueName: "kubernetes.io/host-path/406982aa-db79-4205-927f-429eecce9807-var-lib-calico") pod "calico-node-dqxd5" (UID: "406982aa-db79-4205-927f-429eecce9807")
I0113 07:23:37.968994    5472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/406982aa-db79-4205-927f-429eecce9807-xtables-lock") pod "calico-node-dqxd5" (UID: "406982aa-db79-4205-927f-429eecce9807")
I0113 07:23:37.969034    5472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvol-driver-host" (UniqueName: "kubernetes.io/host-path/406982aa-db79-4205-927f-429eecce9807-flexvol-driver-host") pod "calico-node-dqxd5" (UID: "406982aa-db79-4205-927f-429eecce9807")
E0113 07:23:38.070406    5472 driver-call.go:266] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
E0113 07:23:38.070585    5472 plugins.go:729] Error dynamically probing plugins: Error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input
[... the same two-line error repeats 21 more times ...]
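The repeating `driver-call.go` / `plugins.go` pair is the kubelet re-probing the FlexVolume directory `nodeagent~uds` (installed by Calico's pod2daemon flexvol driver): per the FlexVolume contract, the kubelet executes the driver binary with the `init` verb and parses a JSON status object from the driver's stdout, so an empty reply produces exactly this `unexpected end of JSON input`. A minimal sketch of the expected `init` handshake, assuming the documented FlexVolume JSON format (illustrative only; this is not Calico's actual uds driver):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object the FlexVolume spec expects a
// driver to print for every invocation.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// The empty-stdout failure in the logs above means a reply like
		// this never arrived when the kubelet probed nodeagent~uds.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any verb this sketch does not implement.
	fmt.Println(`{"status": "Not supported"}`)
	os.Exit(1)
}
```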
I0113 07:23:38.507923    5472 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"calico-system", Name:"calico-typha-d7c95d598", UID:"e6959a33-b318-447c-a1c8-bba64d730a17", APIVersion:"apps/v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: calico-typha-d7c95d598-wgvvd
E0113 07:23:38.524516    5472 driver-call.go:266] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
E0113 07:23:38.524582    5472 plugins.go:729] Error dynamically probing plugins: Error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input
[... the same two-line error repeats 15 more times ...]
I0113 07:23:38.674972    5472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "typha-certs" (UniqueName: "kubernetes.io/secret/9e7c3d5e-91e5-4e2e-aae5-91339b2b660f-typha-certs") pod "calico-typha-d7c95d598-wgvvd" (UID: "9e7c3d5e-91e5-4e2e-aae5-91339b2b660f")
E0113 07:23:38.675233    5472 driver-call.go:266] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
E0113 07:23:38.675259    5472 plugins.go:729] Error dynamically probing plugins: Error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input
I0113 07:23:38.675290    5472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-typha-token-9w9lk" (UniqueName: "kubernetes.io/secret/9e7c3d5e-91e5-4e2e-aae5-91339b2b660f-calico-typha-token-9w9lk") pod "calico-typha-d7c95d598-wgvvd" (UID: "9e7c3d5e-91e5-4e2e-aae5-91339b2b660f")
E0113 07:23:38.675561    5472 driver-call.go:266] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
E0113 07:23:38.675598    5472 plugins.go:729] Error dynamically probing plugins: Error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input
[... the same two-line error repeats 4 more times ...]
I0113 07:23:38.676834    5472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "typha-ca" (UniqueName: "kubernetes.io/configmap/9e7c3d5e-91e5-4e2e-aae5-91339b2b660f-typha-ca") pod "calico-typha-d7c95d598-wgvvd" (UID: "9e7c3d5e-91e5-4e2e-aae5-91339b2b660f")
E0113 07:23:38.677100    5472 driver-call.go:266] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
E0113 07:23:38.677126    5472 plugins.go:729] Error dynamically probing plugins: Error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input
[... the same two-line error repeats 19 more times ...]
I0113 07:23:38.832382    5472 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"calico-system", Name:"calico-kube-controllers-6896fd456b", UID:"38d7c8fa-8f96-4b4d-8623-a64341e11e03", APIVersion:"apps/v1", ResourceVersion:"723", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: calico-kube-controllers-6896fd456b-hgh64
E0113 07:23:38.839987    5472 driver-call.go:266] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
E0113 07:23:38.840036    5472 plugins.go:729] Error dynamically probing plugins: Error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input
[... the same two-line error repeats 5 more times ...]
I0113 07:23:38.978881    5472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-kube-controllers-token-5lfrk" (UniqueName: "kubernetes.io/secret/34bc27b5-97af-4d2d-ba36-918655706ec2-calico-kube-controllers-token-5lfrk") pod "calico-kube-controllers-6896fd456b-hgh64" (UID: "34bc27b5-97af-4d2d-ba36-918655706ec2")
...
E0113 07:23:39.164777    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "56f2680f3a48d60b1eae68c831cdc588e29ffc6b3fd64cd9bfd70bccae7ccb3f": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:39.164843    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "calico-kube-controllers-6896fd456b-hgh64_calico-system(34bc27b5-97af-4d2d-ba36-918655706ec2)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "56f2680f3a48d60b1eae68c831cdc588e29ffc6b3fd64cd9bfd70bccae7ccb3f": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:39.164873    5472 kuberuntime_manager.go:727] createPodSandbox for pod "calico-kube-controllers-6896fd456b-hgh64_calico-system(34bc27b5-97af-4d2d-ba36-918655706ec2)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "56f2680f3a48d60b1eae68c831cdc588e29ffc6b3fd64cd9bfd70bccae7ccb3f": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:39.164973    5472 pod_workers.go:191] Error syncing pod 34bc27b5-97af-4d2d-ba36-918655706ec2 ("calico-kube-controllers-6896fd456b-hgh64_calico-system(34bc27b5-97af-4d2d-ba36-918655706ec2)"), skipping: failed to "CreatePodSandbox" for "calico-kube-controllers-6896fd456b-hgh64_calico-system(34bc27b5-97af-4d2d-ba36-918655706ec2)" with CreatePodSandboxError: "CreatePodSandbox for pod \"calico-kube-controllers-6896fd456b-hgh64_calico-system(34bc27b5-97af-4d2d-ba36-918655706ec2)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"56f2680f3a48d60b1eae68c831cdc588e29ffc6b3fd64cd9bfd70bccae7ccb3f\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 07:23:39.178680    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "0f0ee990b1cdd9bf0a6beb2aeb78c7a4ba858fc0b2cb45849dff7ca3a782bed3": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:39.178747    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "0f0ee990b1cdd9bf0a6beb2aeb78c7a4ba858fc0b2cb45849dff7ca3a782bed3": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:39.178778    5472 kuberuntime_manager.go:727] createPodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "0f0ee990b1cdd9bf0a6beb2aeb78c7a4ba858fc0b2cb45849dff7ca3a782bed3": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:39.178847    5472 pod_workers.go:191] Error syncing pod 703c21b5-6c33-42f5-a506-424f97703ef5 ("metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)"), skipping: failed to "CreatePodSandbox" for "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"0f0ee990b1cdd9bf0a6beb2aeb78c7a4ba858fc0b2cb45849dff7ca3a782bed3\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
...
I0113 07:23:40.063291    5472 request.go:621] Throttling request took 1.048966294s, request: GET:https://127.0.0.1:6444/apis/policy/v1beta1?timeout=32s
...
E0113 07:23:44.177640    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6a768d5e97940839f011b8371a231131d4de6a7bd0b76ebf09c788cfde4a4b75": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:44.177686    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6a768d5e97940839f011b8371a231131d4de6a7bd0b76ebf09c788cfde4a4b75": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:44.177707    5472 kuberuntime_manager.go:727] createPodSandbox for pod "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6a768d5e97940839f011b8371a231131d4de6a7bd0b76ebf09c788cfde4a4b75": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 07:23:44.177772    5472 pod_workers.go:191] Error syncing pod adc3ce13-bc34-458e-bd0a-125ec83285c9 ("local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-6d59f47c7-mv8fz_kube-system(adc3ce13-bc34-458e-bd0a-125ec83285c9)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"6a768d5e97940839f011b8371a231131d4de6a7bd0b76ebf09c788cfde4a4b75\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
...
E0113 07:23:49.616798    5472 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0113 07:23:51.330669    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "33c46eb4cc05844568d3e30c12e8e5bb948d3a4594b7b46d6ff956e76dbfbee2": cannot find a qualified ippool
E0113 07:23:51.330788    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "calico-kube-controllers-6896fd456b-hgh64_calico-system(34bc27b5-97af-4d2d-ba36-918655706ec2)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "33c46eb4cc05844568d3e30c12e8e5bb948d3a4594b7b46d6ff956e76dbfbee2": cannot find a qualified ippool
E0113 07:23:51.330814    5472 kuberuntime_manager.go:727] createPodSandbox for pod "calico-kube-controllers-6896fd456b-hgh64_calico-system(34bc27b5-97af-4d2d-ba36-918655706ec2)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "33c46eb4cc05844568d3e30c12e8e5bb948d3a4594b7b46d6ff956e76dbfbee2": cannot find a qualified ippool
E0113 07:23:51.330873    5472 pod_workers.go:191] Error syncing pod 34bc27b5-97af-4d2d-ba36-918655706ec2 ("calico-kube-controllers-6896fd456b-hgh64_calico-system(34bc27b5-97af-4d2d-ba36-918655706ec2)"), skipping: failed to "CreatePodSandbox" for "calico-kube-controllers-6896fd456b-hgh64_calico-system(34bc27b5-97af-4d2d-ba36-918655706ec2)" with CreatePodSandboxError: "CreatePodSandbox for pod \"calico-kube-controllers-6896fd456b-hgh64_calico-system(34bc27b5-97af-4d2d-ba36-918655706ec2)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"33c46eb4cc05844568d3e30c12e8e5bb948d3a4594b7b46d6ff956e76dbfbee2\": cannot find a qualified ippool"
E0113 07:23:51.338671    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "088c05197eac8a05caa83385a97dcb1e10767eecc0a6cdede37be0ac02b00275": cannot find a qualified ippool
E0113 07:23:51.338726    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "088c05197eac8a05caa83385a97dcb1e10767eecc0a6cdede37be0ac02b00275": cannot find a qualified ippool
E0113 07:23:51.338752    5472 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "088c05197eac8a05caa83385a97dcb1e10767eecc0a6cdede37be0ac02b00275": cannot find a qualified ippool
E0113 07:23:51.338815    5472 pod_workers.go:191] Error syncing pod ed2a4a84-b961-4d97-b67c-6dd8c645699e ("coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)"), skipping: failed to "CreatePodSandbox" for "coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-8655855d6-q6mtc_kube-system(ed2a4a84-b961-4d97-b67c-6dd8c645699e)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"088c05197eac8a05caa83385a97dcb1e10767eecc0a6cdede37be0ac02b00275\": cannot find a qualified ippool"
E0113 07:23:53.359730    5472 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "ffbaea325426103418dcc95234a9b0057b09fbd37a9f864cd30e2cdc1e6589bb": cannot find a qualified ippool
E0113 07:23:53.359806    5472 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "ffbaea325426103418dcc95234a9b0057b09fbd37a9f864cd30e2cdc1e6589bb": cannot find a qualified ippool
E0113 07:23:53.359838    5472 kuberuntime_manager.go:727] createPodSandbox for pod "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "ffbaea325426103418dcc95234a9b0057b09fbd37a9f864cd30e2cdc1e6589bb": cannot find a qualified ippool
E0113 07:23:53.359914    5472 pod_workers.go:191] Error syncing pod 703c21b5-6c33-42f5-a506-424f97703ef5 ("metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)"), skipping: failed to "CreatePodSandbox" for "metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-7566d596c8-jxcn5_kube-system(703c21b5-6c33-42f5-a506-424f97703ef5)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"ffbaea325426103418dcc95234a9b0057b09fbd37a9f864cd30e2cdc1e6589bb\": cannot find a qualified ippool"
E0113 07:23:56.701587    5472 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
E0113 07:24:03.560787    5472 remote_runtime.go:351] ExecSync a66e3cc7e043c3ebb242d8d5329ddd253fe62230a32600976b718572bfea504f '/bin/calico-node -bird-ready -felix-ready' from runtime service failed: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "53ead353f83e2168fcee5af3759df7f3ba2e326b1494c32675b969e8736e8118": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"/bin/calico-node\": stat /bin/calico-node: no such file or directory": unknown
...
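
Setting aside the repeated flexvolume probe noise, three other distinct failures stand out above: the loopback plugin missing from /opt/cni/bin, "cannot find a qualified ippool", and the readiness probe failing because /bin/calico-node does not exist in the running container. A few quick checks from the host (a sketch assuming default paths; the "k3s crictl" spelling is an assumption about this install):

# loopback ships in the calico/cni image, and the install-cni init container
# should have copied it into the host CNI directory with the calico binaries.
ls -l /opt/cni/bin/

# "cannot find a qualified ippool" usually means the operator has not created
# the default IPPool yet, or its nodeSelector matches no nodes.
kubectl get ippools.crd.projectcalico.org -o yaml

# The readiness probe execs /bin/calico-node inside the container; confirm the
# calico-node container is actually running from the expected image.
sudo k3s crictl ps --name calico-node
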
hickersonj commented 3 years ago

Here is a comparison of the logs, with the output of the non-working install first and the working one last. I noticed that the flexvolume driver is reporting errors that are not present in the working calico-node.
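
For reference, the kubelet invokes each flexvolume driver as an executable and parses its stdout as JSON, so an empty response yields exactly the "unexpected end of JSON input" spam above. A healthy Calico uds driver answers the init call with a small JSON status; a sketch of the check (the plugin directory is an assumption, since k3s uses a non-standard flexvolume path by default):

sudo /var/lib/kubelet/volumeplugins/nodeagent~uds/uds init
# expected output, roughly:
# {"status":"Success","capabilities":{"attach":false}}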

Output of the k3s logs just after installing the Calico network via the custom-resources manifest:

I0113 08:31:01.966937   15994 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"calico-system", Name:"calico-typha", UID:"3cb88a90-7261-47ff-b92d-144c3df4b5d5", APIVersion:"apps/v1", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set calico-typha-5756748789 to 1
I0113 08:31:01.980471   15994 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"calico-system", Name:"calico-typha-5756748789", UID:"77dba15e-8adf-4060-a5d8-0f626072f740", APIVersion:"apps/v1", ResourceVersion:"621", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: calico-typha-5756748789-wkrzb
I0113 08:31:01.989787   15994 topology_manager.go:233] [topologymanager] Topology Admit Handler
I0113 08:31:02.018128   15994 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"", Name:"calico-typha", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FailedToCreateEndpoint' Failed to create endpoint for service calico-system/calico-typha: endpoints "calico-typha" already exists
I0113 08:31:02.045679   15994 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "typha-ca" (UniqueName: "kubernetes.io/configmap/f514ef8b-5835-42ed-ba6f-5f45ffe0d01d-typha-ca") pod "calico-typha-5756748789-wkrzb" (UID: "f514ef8b-5835-42ed-ba6f-5f45ffe0d01d")
I0113 08:31:02.045754   15994 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-typha-token-z8hht" (UniqueName: "kubernetes.io/secret/f514ef8b-5835-42ed-ba6f-5f45ffe0d01d-calico-typha-token-z8hht") pod "calico-typha-5756748789-wkrzb" (UID: "f514ef8b-5835-42ed-ba6f-5f45ffe0d01d")
I0113 08:31:02.045794   15994 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "typha-certs" (UniqueName: "kubernetes.io/secret/f514ef8b-5835-42ed-ba6f-5f45ffe0d01d-typha-certs") pod "calico-typha-5756748789-wkrzb" (UID: "f514ef8b-5835-42ed-ba6f-5f45ffe0d01d")
I0113 08:31:02.122548   15994 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
I0113 08:31:02.259673   15994 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0113 08:31:02.280535   15994 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0113 08:31:02.289324   15994 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"calico-system", Name:"calico-kube-controllers", UID:"82eb4f4a-e11c-4140-b2bb-5cdb71a5c3e6", APIVersion:"apps/v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set calico-kube-controllers-6896fd456b to 1
I0113 08:31:02.294162   15994 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"calico-system", Name:"calico-node", UID:"f754bf86-2837-43e6-be86-5baac2a530e4", APIVersion:"apps/v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: calico-node-9fkdq
I0113 08:31:02.311730   15994 topology_manager.go:233] [topologymanager] Topology Admit Handler
I0113 08:31:02.315349   15994 controller.go:606] quota admission added evaluator for: namespaces
E0113 08:31:02.336750   15994 daemon_controller.go:292] calico-system/calico-node failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-node", GenerateName:"", Namespace:"calico-system", SelfLink:"/apis/apps/v1/namespaces/calico-system/daemonsets/calico-node", UID:"f754bf86-2837-43e6-be86-5baac2a530e4", ResourceVersion:"667", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63746123462, loc:(*time.Location)(0x6fe3220)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"operator.tigera.io/v1", Kind:"Installation", Name:"default", UID:"ff93c0e2-9116-43cb-be46-4fec9ca625a8", Controller:(*bool)(0xc00a430ee7), BlockOwnerDeletion:(*bool)(0xc00a430ee8)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"k3s", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc007a9d580), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc007a9d5a0)}, v1.ManagedFieldsEntry{Manager:"operator", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc007a9d5c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc007a9d5e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc007a9d600), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"calico-node"}, Annotations:map[string]string{"hash.operator.tigera.io/cni-config":"8cdc037baa5a288802a5754d2ab6c74d9b8b3ada", "hash.operator.tigera.io/node-cert":"355a84e5b81adc82b52b8bab912da98081cbf753", "hash.operator.tigera.io/typha-ca":"5cd88696eeeb0d56862db17f7cbf5f4b9dd5e941"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc007a9d620), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), 
CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"var-run-calico", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc007a9d640), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"var-lib-calico", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc007a9d660), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc007a9d680), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"policysync", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc007a9d6a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"typha-ca", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00c218bc0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"felix-certs", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00c218c00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), 
PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"cni-bin-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc007a9d6c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"cni-net-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc007a9d6e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"cni-log-dir", 
[... remainder of the calico-node DaemonSet pod-spec dump elided; it matches the pod description earlier in the issue. Notable env from the spec: CALICO_IPV4POOL_CIDR=20.28.0.0/16, CALICO_IPV4POOL_VXLAN=CrossSubnet, CALICO_NETWORKING_BACKEND=bird, IP_AUTODETECTION_METHOD=first-found ...] Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "calico-node": the object has been modified; please apply your changes to the latest version and try again
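The trailing conflict ("the object has been modified; please apply your changes to the latest version") is the standard optimistic-concurrency retry between the operator and the DaemonSet controller and is usually harmless on its own; the informative part is the status, DesiredNumberScheduled:1 with NumberReady:0, which matches the 0/1 READY symptom. For reference, that status can be watched directly with plain kubectl (nothing here is specific to this issue):

kubectl -n calico-system get daemonset calico-node -o wide
kubectl -n calico-system get pods -l k8s-app=calico-node -o wide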
I0113 08:31:02.346353   15994 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/3266d41c-8e38-4431-9273-79deda4bc1b2-xtables-lock") pod "calico-node-9fkdq" (UID: "3266d41c-8e38-4431-9273-79deda4bc1b2")
I0113 08:31:02.346516   15994 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-node-token-fvm97" (UniqueName: "kubernetes.io/secret/3266d41c-8e38-4431-9273-79deda4bc1b2-calico-node-token-fvm97") pod "calico-node-9fkdq" (UID: "3266d41c-8e38-4431-9273-79deda4bc1b2")
I0113 08:31:02.346657   15994 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "var-run-calico" (UniqueName: "kubernetes.io/host-path/3266d41c-8e38-4431-9273-79deda4bc1b2-var-run-calico") pod "calico-node-9fkdq" (UID: "3266d41c-8e38-4431-9273-79deda4bc1b2")
I0113 08:31:02.346707   15994 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "var-lib-calico" (UniqueName: "kubernetes.io/host-path/3266d41c-8e38-4431-9273-79deda4bc1b2-var-lib-calico") pod "calico-node-9fkdq" (UID: "3266d41c-8e38-4431-9273-79deda4bc1b2")
I0113 08:31:02.346804   15994 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-bin-dir" (UniqueName: "kubernetes.io/host-path/3266d41c-8e38-4431-9273-79deda4bc1b2-cni-bin-dir") pod "calico-node-9fkdq" (UID: "3266d41c-8e38-4431-9273-79deda4bc1b2")
I0113 08:31:02.346842   15994 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-net-dir" (UniqueName: "kubernetes.io/host-path/3266d41c-8e38-4431-9273-79deda4bc1b2-cni-net-dir") pod "calico-node-9fkdq" (UID: "3266d41c-8e38-4431-9273-79deda4bc1b2")
I0113 08:31:02.346881   15994 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-log-dir" (UniqueName: "kubernetes.io/host-path/3266d41c-8e38-4431-9273-79deda4bc1b2-cni-log-dir") pod "calico-node-9fkdq" (UID: "3266d41c-8e38-4431-9273-79deda4bc1b2")
I0113 08:31:02.346919   15994 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/3266d41c-8e38-4431-9273-79deda4bc1b2-lib-modules") pod "calico-node-9fkdq" (UID: "3266d41c-8e38-4431-9273-79deda4bc1b2")
I0113 08:31:02.346953   15994 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "policysync" (UniqueName: "kubernetes.io/host-path/3266d41c-8e38-4431-9273-79deda4bc1b2-policysync") pod "calico-node-9fkdq" (UID: "3266d41c-8e38-4431-9273-79deda4bc1b2")
I0113 08:31:02.346988   15994 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "typha-ca" (UniqueName: "kubernetes.io/configmap/3266d41c-8e38-4431-9273-79deda4bc1b2-typha-ca") pod "calico-node-9fkdq" (UID: "3266d41c-8e38-4431-9273-79deda4bc1b2")
I0113 08:31:02.347022   15994 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "felix-certs" (UniqueName: "kubernetes.io/secret/3266d41c-8e38-4431-9273-79deda4bc1b2-felix-certs") pod "calico-node-9fkdq" (UID: "3266d41c-8e38-4431-9273-79deda4bc1b2")
I0113 08:31:02.347057   15994 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvol-driver-host" (UniqueName: "kubernetes.io/host-path/3266d41c-8e38-4431-9273-79deda4bc1b2-flexvol-driver-host") pod "calico-node-9fkdq" (UID: "3266d41c-8e38-4431-9273-79deda4bc1b2")
E0113 08:31:02.450926   15994 driver-call.go:266] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0113 08:31:02.450949   15994 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0113 08:31:02.450984   15994 plugins.go:729] Error dynamically probing plugins: Error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input
[this three-line FlexVolume driver-call error repeats continuously for the rest of the log; further repetitions are elided]
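These probe failures mean the kubelet is scanning for the uds FlexVolume driver that the flexvol-driver init container is supposed to copy onto the host (it mounts the host directory as /host/driver, per the pod description above), and the binary simply is not there. A minimal check on the node, using the path verbatim from the log (some distributions point the kubelet at a different --volume-plugin-dir, so adjust if yours does):

# the flexvol-driver init container should have populated this directory
ls -l /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/

If the directory is missing or empty, the init container and the kubelet disagree about the plugin directory.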
E0113 08:31:03.162639   15994 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "2d92a0645e60085ef3cded9a266dacf3b507bce5a0549f3a6a3b227190450be2": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 08:31:03.162700   15994 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "local-path-provisioner-6d59f47c7-lgjz9_kube-system(f60cf3e1-ef9a-4385-9910-1a72230688a3)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "2d92a0645e60085ef3cded9a266dacf3b507bce5a0549f3a6a3b227190450be2": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 08:31:03.162726   15994 kuberuntime_manager.go:727] createPodSandbox for pod "local-path-provisioner-6d59f47c7-lgjz9_kube-system(f60cf3e1-ef9a-4385-9910-1a72230688a3)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "2d92a0645e60085ef3cded9a266dacf3b507bce5a0549f3a6a3b227190450be2": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 08:31:03.162781   15994 pod_workers.go:191] Error syncing pod f60cf3e1-ef9a-4385-9910-1a72230688a3 ("local-path-provisioner-6d59f47c7-lgjz9_kube-system(f60cf3e1-ef9a-4385-9910-1a72230688a3)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-6d59f47c7-lgjz9_kube-system(f60cf3e1-ef9a-4385-9910-1a72230688a3)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-6d59f47c7-lgjz9_kube-system(f60cf3e1-ef9a-4385-9910-1a72230688a3)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"2d92a0645e60085ef3cded9a266dacf3b507bce5a0549f3a6a3b227190450be2\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
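Here the kubelet cannot find even the loopback CNI plugin in /opt/cni/bin. The install-cni init container mounts the host's /opt/cni/bin and copies Calico's bundled plugins (loopback included) into it, so a persistent version of this error usually means the kubelet and the installer are looking at different CNI bin directories. A quick sanity check on the node (path taken from the log):

# expect calico, calico-ipam, loopback, portmap, etc. once install-cni has run
ls -l /opt/cni/bin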
I0113 08:31:03.297875   15994 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"calico-system", Name:"calico-kube-controllers-6896fd456b", UID:"64b55b5c-1069-439b-af2f-77b5abf1a705", APIVersion:"apps/v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: calico-kube-controllers-6896fd456b-zcvk7
I0113 08:31:03.305214   15994 topology_manager.go:233] [topologymanager] Topology Admit Handler
I0113 08:31:03.350644   15994 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-kube-controllers-token-kwhbm" (UniqueName: "kubernetes.io/secret/5bb453d8-67ce-4eb2-95a8-c7d5b49d161f-calico-kube-controllers-token-kwhbm") pod "calico-kube-controllers-6896fd456b-zcvk7" (UID: "5bb453d8-67ce-4eb2-95a8-c7d5b49d161f")
I0113 08:31:03.458982   15994 log.go:172] http: TLS handshake error from 172.17.0.1:54060: remote error: tls: bad certificate
E0113 08:31:03.629705   15994 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "abaed0f4cd1178bfb5f79f4c5ae4d7d076f1911c363c23a660f7e0f28cbe74ad": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 08:31:03.629770   15994 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "calico-kube-controllers-6896fd456b-zcvk7_calico-system(5bb453d8-67ce-4eb2-95a8-c7d5b49d161f)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "abaed0f4cd1178bfb5f79f4c5ae4d7d076f1911c363c23a660f7e0f28cbe74ad": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 08:31:03.629801   15994 kuberuntime_manager.go:727] createPodSandbox for pod "calico-kube-controllers-6896fd456b-zcvk7_calico-system(5bb453d8-67ce-4eb2-95a8-c7d5b49d161f)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "abaed0f4cd1178bfb5f79f4c5ae4d7d076f1911c363c23a660f7e0f28cbe74ad": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 08:31:03.629887   15994 pod_workers.go:191] Error syncing pod 5bb453d8-67ce-4eb2-95a8-c7d5b49d161f ("calico-kube-controllers-6896fd456b-zcvk7_calico-system(5bb453d8-67ce-4eb2-95a8-c7d5b49d161f)"), skipping: failed to "CreatePodSandbox" for "calico-kube-controllers-6896fd456b-zcvk7_calico-system(5bb453d8-67ce-4eb2-95a8-c7d5b49d161f)" with CreatePodSandboxError: "CreatePodSandbox for pod \"calico-kube-controllers-6896fd456b-zcvk7_calico-system(5bb453d8-67ce-4eb2-95a8-c7d5b49d161f)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"abaed0f4cd1178bfb5f79f4c5ae4d7d076f1911c363c23a660f7e0f28cbe74ad\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 08:31:05.164633   15994 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "d37f48ba58e68d737f6c93f44b6dbf9afdb7b25898eb7107403bfe9a4937d68a": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 08:31:05.164691   15994 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "metrics-server-7566d596c8-4qzwk_kube-system(884cf940-3edb-49b1-9be3-5fd17cc667d9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "d37f48ba58e68d737f6c93f44b6dbf9afdb7b25898eb7107403bfe9a4937d68a": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 08:31:05.164718   15994 kuberuntime_manager.go:727] createPodSandbox for pod "metrics-server-7566d596c8-4qzwk_kube-system(884cf940-3edb-49b1-9be3-5fd17cc667d9)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "d37f48ba58e68d737f6c93f44b6dbf9afdb7b25898eb7107403bfe9a4937d68a": failed to find plugin "loopback" in path [/opt/cni/bin]
E0113 08:31:05.164779   15994 pod_workers.go:191] Error syncing pod 884cf940-3edb-49b1-9be3-5fd17cc667d9 ("metrics-server-7566d596c8-4qzwk_kube-system(884cf940-3edb-49b1-9be3-5fd17cc667d9)"), skipping: failed to "CreatePodSandbox" for "metrics-server-7566d596c8-4qzwk_kube-system(884cf940-3edb-49b1-9be3-5fd17cc667d9)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-7566d596c8-4qzwk_kube-system(884cf940-3edb-49b1-9be3-5fd17cc667d9)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"d37f48ba58e68d737f6c93f44b6dbf9afdb7b25898eb7107403bfe9a4937d68a\": failed to find plugin \"loopback\" in path [/opt/cni/bin]"
E0113 08:31:05.180093   15994 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0113 08:31:13.340687   15994 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "65b72f1b7a6931b9528014f1519d4d55959bb9d2c4e03c2d4cda21a60df32430": cannot find a qualified ippool
E0113 08:31:13.340744   15994 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-8655855d6-5btxp_kube-system(e3e318de-f78f-4c08-82e8-697dc55c7a4c)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "65b72f1b7a6931b9528014f1519d4d55959bb9d2c4e03c2d4cda21a60df32430": cannot find a qualified ippool
E0113 08:31:13.340789   15994 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-8655855d6-5btxp_kube-system(e3e318de-f78f-4c08-82e8-697dc55c7a4c)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "65b72f1b7a6931b9528014f1519d4d55959bb9d2c4e03c2d4cda21a60df32430": cannot find a qualified ippool
E0113 08:31:13.340863   15994 pod_workers.go:191] Error syncing pod e3e318de-f78f-4c08-82e8-697dc55c7a4c ("coredns-8655855d6-5btxp_kube-system(e3e318de-f78f-4c08-82e8-697dc55c7a4c)"), skipping: failed to "CreatePodSandbox" for "coredns-8655855d6-5btxp_kube-system(e3e318de-f78f-4c08-82e8-697dc55c7a4c)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-8655855d6-5btxp_kube-system(e3e318de-f78f-4c08-82e8-697dc55c7a4c)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"65b72f1b7a6931b9528014f1519d4d55959bb9d2c4e03c2d4cda21a60df32430\": cannot find a qualified ippool"
E0113 08:31:15.326682   15994 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "3e8395b35f8b92a47af3120153637b519c91148cc95d3f2d582331b08a72c422": cannot find a qualified ippool
E0113 08:31:15.326730   15994 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "local-path-provisioner-6d59f47c7-lgjz9_kube-system(f60cf3e1-ef9a-4385-9910-1a72230688a3)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "3e8395b35f8b92a47af3120153637b519c91148cc95d3f2d582331b08a72c422": cannot find a qualified ippool
E0113 08:31:15.326749   15994 kuberuntime_manager.go:727] createPodSandbox for pod "local-path-provisioner-6d59f47c7-lgjz9_kube-system(f60cf3e1-ef9a-4385-9910-1a72230688a3)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "3e8395b35f8b92a47af3120153637b519c91148cc95d3f2d582331b08a72c422": cannot find a qualified ippool
E0113 08:31:15.326807   15994 pod_workers.go:191] Error syncing pod f60cf3e1-ef9a-4385-9910-1a72230688a3 ("local-path-provisioner-6d59f47c7-lgjz9_kube-system(f60cf3e1-ef9a-4385-9910-1a72230688a3)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-6d59f47c7-lgjz9_kube-system(f60cf3e1-ef9a-4385-9910-1a72230688a3)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-6d59f47c7-lgjz9_kube-system(f60cf3e1-ef9a-4385-9910-1a72230688a3)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"3e8395b35f8b92a47af3120153637b519c91148cc95d3f2d582331b08a72c422\": cannot find a qualified ippool"
E0113 08:31:16.309676   15994 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "8e80c17d067bdce84bb61d3914ad072db811b25c1d3afb79b09ef0974c38983e": cannot find a qualified ippool
E0113 08:31:16.309740   15994 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "calico-kube-controllers-6896fd456b-zcvk7_calico-system(5bb453d8-67ce-4eb2-95a8-c7d5b49d161f)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "8e80c17d067bdce84bb61d3914ad072db811b25c1d3afb79b09ef0974c38983e": cannot find a qualified ippool
E0113 08:31:16.309771   15994 kuberuntime_manager.go:727] createPodSandbox for pod "calico-kube-controllers-6896fd456b-zcvk7_calico-system(5bb453d8-67ce-4eb2-95a8-c7d5b49d161f)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "8e80c17d067bdce84bb61d3914ad072db811b25c1d3afb79b09ef0974c38983e": cannot find a qualified ippool
E0113 08:31:16.309846   15994 pod_workers.go:191] Error syncing pod 5bb453d8-67ce-4eb2-95a8-c7d5b49d161f ("calico-kube-controllers-6896fd456b-zcvk7_calico-system(5bb453d8-67ce-4eb2-95a8-c7d5b49d161f)"), skipping: failed to "CreatePodSandbox" for "calico-kube-controllers-6896fd456b-zcvk7_calico-system(5bb453d8-67ce-4eb2-95a8-c7d5b49d161f)" with CreatePodSandboxError: "CreatePodSandbox for pod \"calico-kube-controllers-6896fd456b-zcvk7_calico-system(5bb453d8-67ce-4eb2-95a8-c7d5b49d161f)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"8e80c17d067bdce84bb61d3914ad072db811b25c1d3afb79b09ef0974c38983e\": cannot find a qualified ippool"
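"cannot find a qualified ippool" typically means Calico IPAM could not find an enabled IPv4 pool usable for this node; per the DaemonSet spec above, the expected pool is 20.28.0.0/16. The pools that actually exist can be listed straight from the CRDs with plain kubectl (calicoctl shows the same objects):

# should show an enabled pool covering 20.28.0.0/16 with a matching nodeSelector
kubectl get ippools.crd.projectcalico.org -o yaml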
E0113 08:31:18.981930   15994 remote_runtime.go:351] ExecSync 0c66d1e97ed421b4566ec2a357916205fdee911ae847793c99bac85bf175317a '/bin/calico-node -bird-ready -felix-ready' from runtime service failed: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "6f466ae63b9d8224f6fcbf772bad5b1ba50b2d7af34cbb5c8b8669e91ccec74c": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"/bin/calico-node\": stat /bin/calico-node: no such file or directory": unknown
I0113 08:31:19.174832   15994 request.go:621] Throttling request took 1.0489102s, request: GET:https://127.0.0.1:6444/apis/discovery.k8s.io/v1beta1?timeout=32s
E0113 08:31:19.305849   15994 remote_runtime.go:351] ExecSync 0c66d1e97ed421b4566ec2a357916205fdee911ae847793c99bac85bf175317a '/bin/calico-node -bird-ready -felix-ready' from runtime service failed: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "9809ffc3cca2796d2dfc81a7eec0a22c5bf5ea4a19c0e9487b7804fe4db21a99": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"/bin/calico-node\": stat /bin/calico-node: no such file or directory": unknown
E0113 08:31:19.705733   15994 remote_runtime.go:351] ExecSync 0c66d1e97ed421b4566ec2a357916205fdee911ae847793c99bac85bf175317a '/bin/calico-node -bird-ready -felix-ready' from runtime service failed: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "4c22a88a07731956333da98be729d284aced424abc43ced623221f01e35680a5": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"/bin/calico-node\": stat /bin/calico-node: no such file or directory": unknown
W0113 08:31:20.025762   15994 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0113 08:31:25.280984   15994 log.go:172] http: TLS handshake error from 172.17.0.1:54188: remote error: tls: bad certificate
I0113 08:31:26.058316   15994 log.go:172] http: TLS handshake error from 172.17.0.1:54190: remote error: tls: bad certificate
W0113 08:31:26.551269   15994 endpointslice_controller.go:260] Error syncing endpoint slices for service "kube-system/metrics-server", retrying. Error: Error deleting metrics-server-9nfqt EndpointSlice for Service kube-system/metrics-server: endpointslices.discovery.k8s.io "metrics-server-9nfqt" not found
I0113 08:31:26.551372   15994 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"metrics-server", UID:"30c008be-840a-401b-9e30-b9995dd52f97", APIVersion:"v1", ResourceVersion:"195", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/metrics-server: Error deleting metrics-server-9nfqt EndpointSlice for Service kube-system/metrics-server: endpointslices.discovery.k8s.io "metrics-server-9nfqt" not found
[the same '/bin/calico-node' ExecSync readiness-probe failure repeats several more times; further repetitions are elided]
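The readiness and liveness probes exec /bin/calico-node -bird-ready -felix-ready inside the container, and the runtime reports that the binary does not exist at all, which points at a broken or wrong calico/node image rather than at the probe itself. One way to look inside the running container on a k3s node (k3s bundles crictl; CONTAINER_ID below is a placeholder for whatever the first command prints):

# find the calico-node container
k3s crictl ps --name calico-node
# confirm whether the probe binary is really absent
k3s crictl exec CONTAINER_ID ls -l /bin/calico-node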

Output of the k3s log from a working node. Here is the same log from a working system, taken just after installing the Calico network:

Jan 13 00:53:26 endor k3s[24742]: time="2021-01-13T00:53:26.489837067-08:00" level=info msg="Cluster-Http-Server 2021/01/13 00:53:26 http: TLS handshake error from 10.139.16.39:52034: remote error: tls: bad certificate"
Jan 13 00:53:26 endor k3s[24742]: time="2021-01-13T00:53:26.574443516-08:00" level=info msg="Cluster-Http-Server 2021/01/13 00:53:26 http: TLS handshake error from 10.139.16.39:51158: remote error: tls: bad certificate"
Jan 13 00:53:26 endor k3s[24742]: E0113 00:53:26.577939   24742 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "830078d66fb32dea03be701da5ff0f03e6f0abb5c74327306bad86cd2c6aa659": error getting ClusterInformation: Get "https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority
Jan 13 00:53:26 endor k3s[24742]: E0113 00:53:26.578043   24742 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "metrics-server-86cbb8457f-8znvw_kube-system(7add08c0-8e96-4589-88b7-5d465aa725b0)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "830078d66fb32dea03be701da5ff0f03e6f0abb5c74327306bad86cd2c6aa659": error getting ClusterInformation: Get "https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority
Jan 13 00:53:26 endor k3s[24742]: E0113 00:53:26.578124   24742 kuberuntime_manager.go:755] createPodSandbox for pod "metrics-server-86cbb8457f-8znvw_kube-system(7add08c0-8e96-4589-88b7-5d465aa725b0)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "830078d66fb32dea03be701da5ff0f03e6f0abb5c74327306bad86cd2c6aa659": error getting ClusterInformation: Get "https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority
Jan 13 00:53:26 endor k3s[24742]: E0113 00:53:26.578243   24742 pod_workers.go:191] Error syncing pod 7add08c0-8e96-4589-88b7-5d465aa725b0 ("metrics-server-86cbb8457f-8znvw_kube-system(7add08c0-8e96-4589-88b7-5d465aa725b0)"), skipping: failed to "CreatePodSandbox" for "metrics-server-86cbb8457f-8znvw_kube-system(7add08c0-8e96-4589-88b7-5d465aa725b0)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-86cbb8457f-8znvw_kube-system(7add08c0-8e96-4589-88b7-5d465aa725b0)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"830078d66fb32dea03be701da5ff0f03e6f0abb5c74327306bad86cd2c6aa659\": error getting ClusterInformation: Get \"https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": x509: certificate signed by unknown authority"
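These x509 "certificate signed by unknown authority" errors appear here even though this node ends up healthy, so by themselves they are not conclusive; they come from the CNI plugin calling the API server with the kubeconfig that install-cni writes. If they persisted, a first check would be to compare the CA embedded in that kubeconfig with the cluster CA (the file name below is Calico's usual one, under the CNI_NET_DIR from the spec above):

# CA the CNI plugin presents
grep certificate-authority-data /etc/cni/net.d/calico-kubeconfig
# CA from the admin kubeconfig, for comparison
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'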
Jan 13 00:53:26 endor k3s[24742]: I0113 00:53:26.691976   24742 request.go:655] Throttling request took 1.047994191s, request: GET:https://127.0.0.1:6444/apis/coordination.k8s.io/v1beta1?timeout=32s
Jan 13 00:53:27 endor k3s[24742]: W0113 00:53:27.742805   24742 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.154609   24742 event.go:291] "Event occurred" object="calico-system/calico-typha" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set calico-typha-744747978d to 1"
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.166772   24742 event.go:291] "Event occurred" object="calico-system/calico-typha-744747978d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-typha-744747978d-c2mv9"
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.178872   24742 topology_manager.go:187] [topologymanager] Topology Admit Handler
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.186620   24742 event.go:291] "Event occurred" object="calico-typha" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service calico-system/calico-typha: endpoints \"calico-typha\" already exists"
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.271951   24742 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.313140   24742 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "typha-ca" (UniqueName: "kubernetes.io/configmap/434956b9-8602-4043-87b1-e484ecf8e1f1-typha-ca") pod "calico-typha-744747978d-c2mv9" (UID: "434956b9-8602-4043-87b1-e484ecf8e1f1")
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.313193   24742 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "typha-certs" (UniqueName: "kubernetes.io/secret/434956b9-8602-4043-87b1-e484ecf8e1f1-typha-certs") pod "calico-typha-744747978d-c2mv9" (UID: "434956b9-8602-4043-87b1-e484ecf8e1f1")
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.313231   24742 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-typha-token-b58bh" (UniqueName: "kubernetes.io/secret/434956b9-8602-4043-87b1-e484ecf8e1f1-calico-typha-token-b58bh") pod "calico-typha-744747978d-c2mv9" (UID: "434956b9-8602-4043-87b1-e484ecf8e1f1")
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.437384   24742 controller.go:606] quota admission added evaluator for: daemonsets.apps
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.453040   24742 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.460185   24742 event.go:291] "Event occurred" object="calico-system/calico-node" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-node-2khxz"
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.465811   24742 topology_manager.go:187] [topologymanager] Topology Admit Handler
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.470548   24742 event.go:291] "Event occurred" object="calico-system/calico-kube-controllers" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set calico-kube-controllers-7c87576bb9 to 1"
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.475263   24742 event.go:291] "Event occurred" object="calico-system/calico-kube-controllers-7c87576bb9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-kube-controllers-7c87576bb9-98f6j"
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.480311   24742 topology_manager.go:187] [topologymanager] Topology Admit Handler
Jan 13 00:53:28 endor k3s[24742]: E0113 00:53:28.486896   24742 daemon_controller.go:320] calico-system/calico-node failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-node", GenerateName:"", Namespace:"calico-system", SelfLink:"", UID:"52916e50-bc76-48f7-8eaf-dda0b0909bc4", ResourceVersion:"710", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63746124808, loc:(*time.Location)(0x7019220)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"operator.tigera.io/v1", Kind:"Installation", Name:"default", UID:"dd8ba0c7-bb36-4693-a635-eff5530a34c9", Controller:(*bool)(0xc00a495e37), BlockOwnerDeletion:(*bool)(0xc00a495e38)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"operator", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc007971900), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc007971920)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc007971940), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"calico-node"}, Annotations:map[string]string{"hash.operator.tigera.io/cni-config":"8cdc037baa5a288802a5754d2ab6c74d9b8b3ada", "hash.operator.tigera.io/node-cert":"72e8dfafc536c45892a9e77f613dba44fae138e5", "hash.operator.tigera.io/typha-ca":"9af9b9cb5a5e74f987fd9de8c16b7d315e453d23"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc007971960), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"var-run-calico", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc007971980), 
EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"var-lib-calico", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0079719a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0079719c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"policysync", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0079719e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"typha-ca", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc007862e80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"felix-certs", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc007862f00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), 
Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"cni-bin-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc007971a00), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"cni-net-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc007971a20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), 
CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"cni-log-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc007971a40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"flexvol-driver-host", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc007971a60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"flexvol-driver", Image:"docker.io/calico/pod2daemon-flexvol:v3.17.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"flexvol-driver-host", ReadOnly:false, MountPath:"/host/driver", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00b78eba0), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"install-cni", Image:"docker.io/calico/cni:v3.17.1", Command:[]string{"/opt/cni/bin/install"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"CNI_CONF_NAME", Value:"10-calico.conflist", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"SLEEP", Value:"false", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"CNI_NET_DIR", Value:"/etc/cni/net.d", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"CNI_NETWORK_CONFIG", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc007971b80)}, v1.EnvVar{Name:"KUBERNETES_SERVICE_HOST", Value:"10.43.0.1", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"KUBERNETES_SERVICE_PORT", Value:"443", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-bin-dir", ReadOnly:false, MountPath:"/host/opt/cni/bin", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"cni-net-dir", ReadOnly:false, MountPath:"/host/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00b78ec00), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"calico-node", Image:"docker.io/calico/node:v3.17.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"DATASTORE_TYPE", Value:"kubernetes", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"WAIT_FOR_DATASTORE", Value:"true", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"CLUSTER_TYPE", Value:"k8s,operator,bgp", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"CALICO_DISABLE_FILE_LOGGING", Value:"true", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_DEFAULTENDPOINTTOHOSTACTION", Value:"ACCEPT", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_HEALTHENABLED", Value:"true", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"NODENAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc007971a80)}, v1.EnvVar{Name:"NAMESPACE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc007971ac0)}, v1.EnvVar{Name:"FELIX_TYPHAK8SNAMESPACE", Value:"calico-system", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_TYPHAK8SSERVICENAME", Value:"calico-typha", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_TYPHACAFILE", Value:"/typha-ca/caBundle", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_TYPHACERTFILE", Value:"/felix-certs/cert.crt", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_TYPHAKEYFILE", Value:"/felix-certs/key.key", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_TYPHACN", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc007971b00)}, v1.EnvVar{Name:"FELIX_TYPHAURISAN", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc007971b20)}, v1.EnvVar{Name:"CALICO_IPV4POOL_CIDR", Value:"192.168.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"CALICO_IPV4POOL_VXLAN", 
Value:"CrossSubnet", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"CALICO_IPV4POOL_BLOCK_SIZE", Value:"26", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"CALICO_IPV4POOL_NODE_SELECTOR", Value:"all()", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"CALICO_NETWORKING_BACKEND", Value:"bird", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"IP", Value:"autodetect", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"IP_AUTODETECTION_METHOD", Value:"first-found", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"IP6", Value:"none", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_IPV6SUPPORT", Value:"false", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"FELIX_IPTABLESBACKEND", Value:"auto", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"KUBERNETES_SERVICE_HOST", Value:"10.43.0.1", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"KUBERNETES_SERVICE_PORT", Value:"443", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"var-run-calico", ReadOnly:false, MountPath:"/var/run/calico", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"var-lib-calico", ReadOnly:false, MountPath:"/var/lib/calico", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"policysync", ReadOnly:false, MountPath:"/var/run/nodeagent", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"typha-ca", ReadOnly:true, MountPath:"/typha-ca", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"felix-certs", ReadOnly:true, MountPath:"/felix-certs", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"cni-log-dir", ReadOnly:true, MountPath:"/var/log/calico/cni", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc00a535710), ReadinessProbe:(*v1.Probe)(0xc00a535740), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00b78eb40), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00f820398), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"calico-node", DeprecatedServiceAccount:"calico-node", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000974ee0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, 
v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"calico-priority", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00a161390)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00f8204c8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "calico-node": the object has been modified; please apply your changes to the latest version and try again
Jan 13 00:53:28 endor k3s[24742]: E0113 00:53:28.502506   24742 daemon_controller.go:320] calico-system/calico-node failed with : error storing status for daemon set "calico-node" [...identical DaemonSet dump to the previous entry, differing only in ResourceVersion (722 vs 710), an added "k3s" managed-fields entry, and a Status of DesiredNumberScheduled:1/NumberUnavailable:1...]: Operation cannot be fulfilled on daemonsets.apps "calico-node": the object has been modified; please apply your changes to the latest version and try again
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.513980   24742 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "felix-certs" (UniqueName: "kubernetes.io/secret/bc9e1863-6ff9-46f6-9eb8-57d1093471e6-felix-certs") pod "calico-node-2khxz" (UID: "bc9e1863-6ff9-46f6-9eb8-57d1093471e6")
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.514000   24742 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-log-dir" (UniqueName: "kubernetes.io/host-path/bc9e1863-6ff9-46f6-9eb8-57d1093471e6-cni-log-dir") pod "calico-node-2khxz" (UID: "bc9e1863-6ff9-46f6-9eb8-57d1093471e6")
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.514012   24742 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/bc9e1863-6ff9-46f6-9eb8-57d1093471e6-lib-modules") pod "calico-node-2khxz" (UID: "bc9e1863-6ff9-46f6-9eb8-57d1093471e6")
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.514023   24742 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "policysync" (UniqueName: "kubernetes.io/host-path/bc9e1863-6ff9-46f6-9eb8-57d1093471e6-policysync") pod "calico-node-2khxz" (UID: "bc9e1863-6ff9-46f6-9eb8-57d1093471e6")
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.514032   24742 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "typha-ca" (UniqueName: "kubernetes.io/configmap/bc9e1863-6ff9-46f6-9eb8-57d1093471e6-typha-ca") pod "calico-node-2khxz" (UID: "bc9e1863-6ff9-46f6-9eb8-57d1093471e6")
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.514044   24742 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "var-lib-calico" (UniqueName: "kubernetes.io/host-path/bc9e1863-6ff9-46f6-9eb8-57d1093471e6-var-lib-calico") pod "calico-node-2khxz" (UID: "bc9e1863-6ff9-46f6-9eb8-57d1093471e6")
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.514055   24742 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/bc9e1863-6ff9-46f6-9eb8-57d1093471e6-xtables-lock") pod "calico-node-2khxz" (UID: "bc9e1863-6ff9-46f6-9eb8-57d1093471e6")
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.514131   24742 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-bin-dir" (UniqueName: "kubernetes.io/host-path/bc9e1863-6ff9-46f6-9eb8-57d1093471e6-cni-bin-dir") pod "calico-node-2khxz" (UID: "bc9e1863-6ff9-46f6-9eb8-57d1093471e6")
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.514206   24742 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-kube-controllers-token-bg2hs" (UniqueName: "kubernetes.io/secret/da5b0079-c147-4a84-9996-91d54057fc73-calico-kube-controllers-token-bg2hs") pod "calico-kube-controllers-7c87576bb9-98f6j" (UID: "da5b0079-c147-4a84-9996-91d54057fc73")
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.514239   24742 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "var-run-calico" (UniqueName: "kubernetes.io/host-path/bc9e1863-6ff9-46f6-9eb8-57d1093471e6-var-run-calico") pod "calico-node-2khxz" (UID: "bc9e1863-6ff9-46f6-9eb8-57d1093471e6")
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.514269   24742 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-net-dir" (UniqueName: "kubernetes.io/host-path/bc9e1863-6ff9-46f6-9eb8-57d1093471e6-cni-net-dir") pod "calico-node-2khxz" (UID: "bc9e1863-6ff9-46f6-9eb8-57d1093471e6")
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.514314   24742 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvol-driver-host" (UniqueName: "kubernetes.io/host-path/bc9e1863-6ff9-46f6-9eb8-57d1093471e6-flexvol-driver-host") pod "calico-node-2khxz" (UID: "bc9e1863-6ff9-46f6-9eb8-57d1093471e6")
Jan 13 00:53:28 endor k3s[24742]: I0113 00:53:28.514353   24742 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-node-token-d4mnx" (UniqueName: "kubernetes.io/secret/bc9e1863-6ff9-46f6-9eb8-57d1093471e6-calico-node-token-d4mnx") pod "calico-node-2khxz" (UID: "bc9e1863-6ff9-46f6-9eb8-57d1093471e6")
Jan 13 00:53:28 endor k3s[24742]: time="2021-01-13T00:53:28.814047648-08:00" level=info msg="Cluster-Http-Server 2021/01/13 00:53:28 http: TLS handshake error from 10.139.16.39:1100: remote error: tls: bad certificate"
Jan 13 00:53:28 endor k3s[24742]: time="2021-01-13T00:53:28.868061871-08:00" level=info msg="Cluster-Http-Server 2021/01/13 00:53:28 http: TLS handshake error from 10.139.16.39:24914: remote error: tls: bad certificate"
Jan 13 00:53:28 endor k3s[24742]: E0113 00:53:28.870350   24742 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "401a713dbe25338c64ece7d8b64041b1903d53637d105a8292f9233f358a776d": error getting ClusterInformation: Get "https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority
Jan 13 00:53:28 endor k3s[24742]: E0113 00:53:28.870417   24742 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "calico-kube-controllers-7c87576bb9-98f6j_calico-system(da5b0079-c147-4a84-9996-91d54057fc73)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "401a713dbe25338c64ece7d8b64041b1903d53637d105a8292f9233f358a776d": error getting ClusterInformation: Get "https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority
Jan 13 00:53:28 endor k3s[24742]: E0113 00:53:28.870440   24742 kuberuntime_manager.go:755] createPodSandbox for pod "calico-kube-controllers-7c87576bb9-98f6j_calico-system(da5b0079-c147-4a84-9996-91d54057fc73)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "401a713dbe25338c64ece7d8b64041b1903d53637d105a8292f9233f358a776d": error getting ClusterInformation: Get "https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority
Jan 13 00:53:28 endor k3s[24742]: E0113 00:53:28.870507   24742 pod_workers.go:191] Error syncing pod da5b0079-c147-4a84-9996-91d54057fc73 ("calico-kube-controllers-7c87576bb9-98f6j_calico-system(da5b0079-c147-4a84-9996-91d54057fc73)"), skipping: failed to "CreatePodSandbox" for "calico-kube-controllers-7c87576bb9-98f6j_calico-system(da5b0079-c147-4a84-9996-91d54057fc73)" with CreatePodSandboxError: "CreatePodSandbox for pod \"calico-kube-controllers-7c87576bb9-98f6j_calico-system(da5b0079-c147-4a84-9996-91d54057fc73)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"401a713dbe25338c64ece7d8b64041b1903d53637d105a8292f9233f358a776d\": error getting ClusterInformation: Get \"https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": x509: certificate signed by unknown authority"
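This x509 failure is the substantive error: to set up a pod sandbox, the Calico CNI plugin fetches the ClusterInformation resource from the apiserver at 10.43.0.1:443, validating the connection against the CA bundle embedded in its kubeconfig under /etc/cni/net.d. "certificate signed by unknown authority" means that bundle was issued by a different CA than the one backing the apiserver's serving certificate, which typically points to stale CNI configuration left over from an earlier install (k3s mints a fresh cluster CA when it first starts). A small sketch that reproduces the same validation step, assuming the CA has been extracted from the CNI kubeconfig to a PEM file; the path and address arguments are placeholders, not values read from this cluster:

```go
package diag

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
)

// checkAPIServerCA dials the apiserver and validates its serving
// certificate against the given CA bundle, the same check the CNI
// plugin's HTTPS client performs. A stale bundle fails here with
// "x509: certificate signed by unknown authority".
func checkAPIServerCA(caPath, addr string) error {
	caPEM, err := os.ReadFile(caPath) // placeholder path, e.g. CA extracted from the CNI kubeconfig
	if err != nil {
		return err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return fmt.Errorf("no CA certificates found in %s", caPath)
	}
	conn, err := tls.Dial("tcp", addr, &tls.Config{RootCAs: pool})
	if err != nil {
		return err // a CA mismatch surfaces here
	}
	return conn.Close()
}
```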
Jan 13 00:53:29 endor k3s[24742]: time="2021-01-13T00:53:29.478862789-08:00" level=info msg="Cluster-Http-Server 2021/01/13 00:53:29 http: TLS handshake error from 10.139.16.39:60280: remote error: tls: bad certificate"
Jan 13 00:53:29 endor k3s[24742]: time="2021-01-13T00:53:29.565370952-08:00" level=info msg="Cluster-Http-Server 2021/01/13 00:53:29 http: TLS handshake error from 10.139.16.39:33389: remote error: tls: bad certificate"
Jan 13 00:53:29 endor k3s[24742]: E0113 00:53:29.569500   24742 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "63e8acd8dddf168089d0646a2b449670784a7f6c02b58c85cadd1d9e1c99e5ea": error getting ClusterInformation: Get "https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority
Jan 13 00:53:29 endor k3s[24742]: E0113 00:53:29.569607   24742 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-854c77959c-58hnv_kube-system(45d3b9c8-868c-4061-830d-de7c0342ac55)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "63e8acd8dddf168089d0646a2b449670784a7f6c02b58c85cadd1d9e1c99e5ea": error getting ClusterInformation: Get "https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority
Jan 13 00:53:29 endor k3s[24742]: E0113 00:53:29.569645   24742 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-854c77959c-58hnv_kube-system(45d3b9c8-868c-4061-830d-de7c0342ac55)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "63e8acd8dddf168089d0646a2b449670784a7f6c02b58c85cadd1d9e1c99e5ea": error getting ClusterInformation: Get "https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority
Jan 13 00:53:29 endor k3s[24742]: E0113 00:53:29.569758   24742 pod_workers.go:191] Error syncing pod 45d3b9c8-868c-4061-830d-de7c0342ac55 ("coredns-854c77959c-58hnv_kube-system(45d3b9c8-868c-4061-830d-de7c0342ac55)"), skipping: failed to "CreatePodSandbox" for "coredns-854c77959c-58hnv_kube-system(45d3b9c8-868c-4061-830d-de7c0342ac55)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-854c77959c-58hnv_kube-system(45d3b9c8-868c-4061-830d-de7c0342ac55)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"63e8acd8dddf168089d0646a2b449670784a7f6c02b58c85cadd1d9e1c99e5ea\": error getting ClusterInformation: Get \"https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": x509: certificate signed by unknown authority"
Jan 13 00:53:32 endor k3s[24742]: time="2021-01-13T00:53:32.489218618-08:00" level=info msg="Cluster-Http-Server 2021/01/13 00:53:32 http: TLS handshake error from 10.139.16.39:52143: remote error: tls: bad certificate"
Jan 13 00:53:32 endor k3s[24742]: time="2021-01-13T00:53:32.543810654-08:00" level=info msg="Cluster-Http-Server 2021/01/13 00:53:32 http: TLS handshake error from 10.139.16.39:64946: remote error: tls: bad certificate"
Jan 13 00:53:32 endor k3s[24742]: E0113 00:53:32.546870   24742 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "9cf44f7aaa1bf1c55df6fe095de809954c85a26a2ce73dcd7cfb488c4a01b2b1": error getting ClusterInformation: Get "https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority
Jan 13 00:53:32 endor k3s[24742]: E0113 00:53:32.546960   24742 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "local-path-provisioner-7c458769fb-zhxvk_kube-system(f0ca22be-1edc-4df9-b577-d2d5adde7b67)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "9cf44f7aaa1bf1c55df6fe095de809954c85a26a2ce73dcd7cfb488c4a01b2b1": error getting ClusterInformation: Get "https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority
Jan 13 00:53:32 endor k3s[24742]: E0113 00:53:32.547000   24742 kuberuntime_manager.go:755] createPodSandbox for pod "local-path-provisioner-7c458769fb-zhxvk_kube-system(f0ca22be-1edc-4df9-b577-d2d5adde7b67)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "9cf44f7aaa1bf1c55df6fe095de809954c85a26a2ce73dcd7cfb488c4a01b2b1": error getting ClusterInformation: Get "https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority
Jan 13 00:53:32 endor k3s[24742]: E0113 00:53:32.547130   24742 pod_workers.go:191] Error syncing pod f0ca22be-1edc-4df9-b577-d2d5adde7b67 ("local-path-provisioner-7c458769fb-zhxvk_kube-system(f0ca22be-1edc-4df9-b577-d2d5adde7b67)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-7c458769fb-zhxvk_kube-system(f0ca22be-1edc-4df9-b577-d2d5adde7b67)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-7c458769fb-zhxvk_kube-system(f0ca22be-1edc-4df9-b577-d2d5adde7b67)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"9cf44f7aaa1bf1c55df6fe095de809954c85a26a2ce73dcd7cfb488c4a01b2b1\": error getting ClusterInformation: Get \"https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": x509: certificate signed by unknown authority"
Jan 13 00:53:37 endor k3s[24742]: E0113 00:53:37.096173   24742 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Jan 13 00:53:39 endor k3s[24742]: time="2021-01-13T00:53:39.496600455-08:00" level=info msg="Cluster-Http-Server 2021/01/13 00:53:39 http: TLS handshake error from 10.139.16.39:15736: remote error: tls: bad certificate"
Jan 13 00:53:39 endor k3s[24742]: time="2021-01-13T00:53:39.572016044-08:00" level=info msg="Cluster-Http-Server 2021/01/13 00:53:39 http: TLS handshake error from 10.139.16.39:52334: remote error: tls: bad certificate"
Jan 13 00:53:39 endor k3s[24742]: E0113 00:53:39.577648   24742 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "374c0c091e677a106d22b4154b5ae3d4c2570e8045418ec5e7f95ffbf814fe3d": error getting ClusterInformation: Get "https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority
Jan 13 00:53:39 endor k3s[24742]: E0113 00:53:39.577730   24742 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "metrics-server-86cbb8457f-8znvw_kube-system(7add08c0-8e96-4589-88b7-5d465aa725b0)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "374c0c091e677a106d22b4154b5ae3d4c2570e8045418ec5e7f95ffbf814fe3d": error getting ClusterInformation: Get "https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority
Jan 13 00:53:39 endor k3s[24742]: E0113 00:53:39.577767   24742 kuberuntime_manager.go:755] createPodSandbox for pod "metrics-server-86cbb8457f-8znvw_kube-system(7add08c0-8e96-4589-88b7-5d465aa725b0)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "374c0c091e677a106d22b4154b5ae3d4c2570e8045418ec5e7f95ffbf814fe3d": error getting ClusterInformation: Get "https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority
Jan 13 00:53:39 endor k3s[24742]: E0113 00:53:39.577866   24742 pod_workers.go:191] Error syncing pod 7add08c0-8e96-4589-88b7-5d465aa725b0 ("metrics-server-86cbb8457f-8znvw_kube-system(7add08c0-8e96-4589-88b7-5d465aa725b0)"), skipping: failed to "CreatePodSandbox" for "metrics-server-86cbb8457f-8znvw_kube-system(7add08c0-8e96-4589-88b7-5d465aa725b0)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-86cbb8457f-8znvw_kube-system(7add08c0-8e96-4589-88b7-5d465aa725b0)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"374c0c091e677a106d22b4154b5ae3d4c2570e8045418ec5e7f95ffbf814fe3d\": error getting ClusterInformation: Get \"https://10.43.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": x509: certificate signed by unknown authority"
Jan 13 00:53:42 endor k3s[24742]: E0113 00:53:42.792811   24742 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "431a4171077a48159b5a136ad45a788e3de383943d844b9b87cf9cc8be6c5662": cannot find a qualified ippool
Jan 13 00:53:42 endor k3s[24742]: E0113 00:53:42.792907   24742 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "calico-kube-controllers-7c87576bb9-98f6j_calico-system(da5b0079-c147-4a84-9996-91d54057fc73)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "431a4171077a48159b5a136ad45a788e3de383943d844b9b87cf9cc8be6c5662": cannot find a qualified ippool
Jan 13 00:53:42 endor k3s[24742]: E0113 00:53:42.792945   24742 kuberuntime_manager.go:755] createPodSandbox for pod "calico-kube-controllers-7c87576bb9-98f6j_calico-system(da5b0079-c147-4a84-9996-91d54057fc73)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "431a4171077a48159b5a136ad45a788e3de383943d844b9b87cf9cc8be6c5662": cannot find a qualified ippool
Jan 13 00:53:42 endor k3s[24742]: E0113 00:53:42.793055   24742 pod_workers.go:191] Error syncing pod da5b0079-c147-4a84-9996-91d54057fc73 ("calico-kube-controllers-7c87576bb9-98f6j_calico-system(da5b0079-c147-4a84-9996-91d54057fc73)"), skipping: failed to "CreatePodSandbox" for "calico-kube-controllers-7c87576bb9-98f6j_calico-system(da5b0079-c147-4a84-9996-91d54057fc73)" with CreatePodSandboxError: "CreatePodSandbox for pod \"calico-kube-controllers-7c87576bb9-98f6j_calico-system(da5b0079-c147-4a84-9996-91d54057fc73)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"431a4171077a48159b5a136ad45a788e3de383943d844b9b87cf9cc8be6c5662\": cannot find a qualified ippool"
Jan 13 00:53:43 endor k3s[24742]: E0113 00:53:43.760367   24742 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "52a526dcd6824f8846a8aaf7a699e6ad19818f15f10380624543aadfdb2534a8": cannot find a qualified ippool
Jan 13 00:53:43 endor k3s[24742]: E0113 00:53:43.760426   24742 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-854c77959c-58hnv_kube-system(45d3b9c8-868c-4061-830d-de7c0342ac55)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "52a526dcd6824f8846a8aaf7a699e6ad19818f15f10380624543aadfdb2534a8": cannot find a qualified ippool
Jan 13 00:53:43 endor k3s[24742]: E0113 00:53:43.760443   24742 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-854c77959c-58hnv_kube-system(45d3b9c8-868c-4061-830d-de7c0342ac55)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "52a526dcd6824f8846a8aaf7a699e6ad19818f15f10380624543aadfdb2534a8": cannot find a qualified ippool
Jan 13 00:53:43 endor k3s[24742]: E0113 00:53:43.760498   24742 pod_workers.go:191] Error syncing pod 45d3b9c8-868c-4061-830d-de7c0342ac55 ("coredns-854c77959c-58hnv_kube-system(45d3b9c8-868c-4061-830d-de7c0342ac55)"), skipping: failed to "CreatePodSandbox" for "coredns-854c77959c-58hnv_kube-system(45d3b9c8-868c-4061-830d-de7c0342ac55)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-854c77959c-58hnv_kube-system(45d3b9c8-868c-4061-830d-de7c0342ac55)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"52a526dcd6824f8846a8aaf7a699e6ad19818f15f10380624543aadfdb2534a8\": cannot find a qualified ippool"
Jan 13 00:53:44 endor k3s[24742]: E0113 00:53:44.093494   24742 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "03b90028f962124d904585f1aebdf59104c6f658b0d621f6521e202e56d43326": cannot find a qualified ippool
Jan 13 00:53:44 endor k3s[24742]: E0113 00:53:44.093552   24742 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "calico-kube-controllers-7c87576bb9-98f6j_calico-system(da5b0079-c147-4a84-9996-91d54057fc73)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "03b90028f962124d904585f1aebdf59104c6f658b0d621f6521e202e56d43326": cannot find a qualified ippool
Jan 13 00:53:44 endor k3s[24742]: E0113 00:53:44.093571   24742 kuberuntime_manager.go:755] createPodSandbox for pod "calico-kube-controllers-7c87576bb9-98f6j_calico-system(da5b0079-c147-4a84-9996-91d54057fc73)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "03b90028f962124d904585f1aebdf59104c6f658b0d621f6521e202e56d43326": cannot find a qualified ippool
Jan 13 00:53:44 endor k3s[24742]: E0113 00:53:44.093636   24742 pod_workers.go:191] Error syncing pod da5b0079-c147-4a84-9996-91d54057fc73 ("calico-kube-controllers-7c87576bb9-98f6j_calico-system(da5b0079-c147-4a84-9996-91d54057fc73)"), skipping: failed to "CreatePodSandbox" for "calico-kube-controllers-7c87576bb9-98f6j_calico-system(da5b0079-c147-4a84-9996-91d54057fc73)" with CreatePodSandboxError: "CreatePodSandbox for pod \"calico-kube-controllers-7c87576bb9-98f6j_calico-system(da5b0079-c147-4a84-9996-91d54057fc73)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"03b90028f962124d904585f1aebdf59104c6f658b0d621f6521e202e56d43326\": cannot find a qualified ippool"
Jan 13 00:53:44 endor k3s[24742]: E0113 00:53:44.099612   24742 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "2dd8382f2faecfee6c3d1c37eaeeceab492a630454d1cc08c90cbd8b525c2a2d": cannot find a qualified ippool
Jan 13 00:53:44 endor k3s[24742]: E0113 00:53:44.099663   24742 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-854c77959c-58hnv_kube-system(45d3b9c8-868c-4061-830d-de7c0342ac55)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "2dd8382f2faecfee6c3d1c37eaeeceab492a630454d1cc08c90cbd8b525c2a2d": cannot find a qualified ippool
Jan 13 00:53:44 endor k3s[24742]: E0113 00:53:44.099680   24742 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-854c77959c-58hnv_kube-system(45d3b9c8-868c-4061-830d-de7c0342ac55)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "2dd8382f2faecfee6c3d1c37eaeeceab492a630454d1cc08c90cbd8b525c2a2d": cannot find a qualified ippool
Jan 13 00:53:44 endor k3s[24742]: E0113 00:53:44.099731   24742 pod_workers.go:191] Error syncing pod 45d3b9c8-868c-4061-830d-de7c0342ac55 ("coredns-854c77959c-58hnv_kube-system(45d3b9c8-868c-4061-830d-de7c0342ac55)"), skipping: failed to "CreatePodSandbox" for "coredns-854c77959c-58hnv_kube-system(45d3b9c8-868c-4061-830d-de7c0342ac55)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-854c77959c-58hnv_kube-system(45d3b9c8-868c-4061-830d-de7c0342ac55)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"2dd8382f2faecfee6c3d1c37eaeeceab492a630454d1cc08c90cbd8b525c2a2d\": cannot find a qualified ippool"
Jan 13 00:53:44 endor k3s[24742]: W0113 00:53:44.295994   24742 handler_proxy.go:102] no RequestInfo found in the context
Jan 13 00:53:44 endor k3s[24742]: E0113 00:53:44.296118   24742 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Jan 13 00:53:44 endor k3s[24742]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Jan 13 00:53:44 endor k3s[24742]: I0113 00:53:44.296138   24742 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Jan 13 00:53:44 endor k3s[24742]: time="2021-01-13T00:53:44.380089998-08:00" level=info msg="Cluster-Http-Server 2021/01/13 00:53:44 http: TLS handshake error from 127.0.0.1:54880: remote error: tls: bad certificate"
Jan 13 00:53:46 endor k3s[24742]: E0113 00:53:46.780265   24742 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "29596ae0dc0d5e90b5e22146c280555787e0575dfcbc872daa6727a510604a67": cannot find a qualified ippool
Jan 13 00:53:46 endor k3s[24742]: E0113 00:53:46.780926   24742 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "local-path-provisioner-7c458769fb-zhxvk_kube-system(f0ca22be-1edc-4df9-b577-d2d5adde7b67)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "29596ae0dc0d5e90b5e22146c280555787e0575dfcbc872daa6727a510604a67": cannot find a qualified ippool
Jan 13 00:53:46 endor k3s[24742]: E0113 00:53:46.780965   24742 kuberuntime_manager.go:755] createPodSandbox for pod "local-path-provisioner-7c458769fb-zhxvk_kube-system(f0ca22be-1edc-4df9-b577-d2d5adde7b67)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "29596ae0dc0d5e90b5e22146c280555787e0575dfcbc872daa6727a510604a67": cannot find a qualified ippool
Jan 13 00:53:46 endor k3s[24742]: E0113 00:53:46.781038   24742 pod_workers.go:191] Error syncing pod f0ca22be-1edc-4df9-b577-d2d5adde7b67 ("local-path-provisioner-7c458769fb-zhxvk_kube-system(f0ca22be-1edc-4df9-b577-d2d5adde7b67)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-7c458769fb-zhxvk_kube-system(f0ca22be-1edc-4df9-b577-d2d5adde7b67)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-7c458769fb-zhxvk_kube-system(f0ca22be-1edc-4df9-b577-d2d5adde7b67)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"29596ae0dc0d5e90b5e22146c280555787e0575dfcbc872daa6727a510604a67\": cannot find a qualified ippool"
Jan 13 00:53:47 endor k3s[24742]: E0113 00:53:47.112336   24742 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "d54549c86ce48b594c777c1f0114a9faa47a29792186ca3645a420d21378b892": cannot find a qualified ippool
Jan 13 00:53:47 endor k3s[24742]: E0113 00:53:47.112406   24742 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "local-path-provisioner-7c458769fb-zhxvk_kube-system(f0ca22be-1edc-4df9-b577-d2d5adde7b67)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "d54549c86ce48b594c777c1f0114a9faa47a29792186ca3645a420d21378b892": cannot find a qualified ippool
Jan 13 00:53:47 endor k3s[24742]: E0113 00:53:47.112438   24742 kuberuntime_manager.go:755] createPodSandbox for pod "local-path-provisioner-7c458769fb-zhxvk_kube-system(f0ca22be-1edc-4df9-b577-d2d5adde7b67)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "d54549c86ce48b594c777c1f0114a9faa47a29792186ca3645a420d21378b892": cannot find a qualified ippool
Jan 13 00:53:47 endor k3s[24742]: E0113 00:53:47.112512   24742 pod_workers.go:191] Error syncing pod f0ca22be-1edc-4df9-b577-d2d5adde7b67 ("local-path-provisioner-7c458769fb-zhxvk_kube-system(f0ca22be-1edc-4df9-b577-d2d5adde7b67)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-7c458769fb-zhxvk_kube-system(f0ca22be-1edc-4df9-b577-d2d5adde7b67)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-7c458769fb-zhxvk_kube-system(f0ca22be-1edc-4df9-b577-d2d5adde7b67)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"d54549c86ce48b594c777c1f0114a9faa47a29792186ca3645a420d21378b892\": cannot find a qualified ippool"
Jan 13 00:53:49 endor k3s[24742]: I0113 00:53:49.398021   24742 image_gc_manager.go:304] [imageGCManager]: Disk usage on image filesystem is at 93% which is over the high threshold (85%). Trying to free 6666727424 bytes down to the low threshold (80%).
Jan 13 00:53:49 endor k3s[24742]: E0113 00:53:49.400480   24742 kubelet.go:1267] Image garbage collection failed multiple times in a row: failed to garbage collect required amount of images. Wanted to free 6666727424 bytes, but freed 0 bytes
Jan 13 00:53:55 endor k3s[24742]: time="2021-01-13T00:53:55.560399241-08:00" level=info msg="Cluster-Http-Server 2021/01/13 00:53:55 http: TLS handshake error from 127.0.0.1:54982: remote error: tls: bad certificate"
Jan 13 00:53:59 endor k3s[24742]: I0113 00:53:59.593728   24742 request.go:655] Throttling request took 1.047931117s, request: GET:https://127.0.0.1:6444/apis/autoscaling/v1?timeout=32s
Jan 13 00:54:00 endor k3s[24742]: W0113 00:54:00.645313   24742 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
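The failures above come in two waves: first the x509 "certificate signed by unknown authority" errors from the CNI plugin, then "cannot find a qualified ippool" once the TLS errors stop. For the latter, a quick sanity check (a sketch; whether a pool exists yet depends on how far the tigera-operator got) is to list the IPPool CRs directly:

# list Calico IPPools via the CRD API; an empty list would explain "cannot find a qualified ippool"
k3s kubectl get ippools.crd.projectcalico.org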
frozenprocess commented 3 years ago

@hickersonj out of curiosity, what is the output of k3s crictl ps -a | grep flexvol-driver, and of:

k3s crictl logs $(k3s crictl ps -a | grep flexvol-driver | awk '{print $1}')

hickersonj commented 3 years ago

Here is the output of both; the last command didn't produce any log output:

k3s crictl ps -a | grep flexvol-driver
ff8521c3a503c       819d15844f0c9       2 minutes ago        Exited              flexvol-driver            0                   e4f59ad60a75c

k3s crictl logs $(k3s crictl ps -a | grep flexvol-driver | awk '{print $1}')

Some more logs, without the awk parsing:

k3s crictl logs ff8521c3a503c

k3s crictl ps -a
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
68872d2d0c084       278f40d9f3b82       4 minutes ago       Running             calico-kube-controllers   0                   87d51843f04ad
fa578ad18fa5d       9d12f9848b99f       4 minutes ago       Running             local-path-provisioner    0                   386305ecaa5d4
5b322f4abc454       9dd718864ce61       5 minutes ago       Running             metrics-server            0                   1c1eda77cfa7d
d5ed01477aea6       c4d3d16fe508b       5 minutes ago       Running             coredns                   0                   2d4d5fe1eb475
2349fa0222c26       183b53858d7da       5 minutes ago       Running             calico-node               0                   e4f59ad60a75c
b10d09ea55698       64e5dfd8d597a       5 minutes ago       Exited              install-cni               0                   e4f59ad60a75c
ac2622b58fb4a       919a16510f41b       5 minutes ago       Running             calico-typha              0                   931b58170d8ec
ff8521c3a503c       819d15844f0c9       5 minutes ago       Exited              flexvol-driver            0                   e4f59ad60a75c
2d4e3bd8ee81f       607a9f677b8fa       7 minutes ago       Running             tigera-operator           0                   880d187d37302
frozenprocess commented 3 years ago

On the host, run:

ls /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/

By the way, my guess is as good as yours :P

hickersonj commented 3 years ago

Your help is very much appreciated; I've been stuck on this for a few days!

bash-5.0# ls /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/
uds
frozenprocess commented 3 years ago

What is your CPU arch? uname -a

hickersonj commented 3 years ago

It is a custom-built kernel running on top of Fedora 30:

Linux nc1 4.19.78 #79 SMP Tue Jan 12 22:56:09 PST 2021 x86_64 x86_64 x86_64 GNU/Linux
frozenprocess commented 3 years ago

k3s crictl exec -it $(k3s crictl ps -a | grep calico-node | awk '{print $1}') ls /bin/

hickersonj commented 3 years ago

Those are the odd errors that I don't see on the working system:

file /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds
/usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, Go BuildID=0uJXqxbPHOtaXzcZ2iBw/hd47qJlN-5w-Lc5nymi8/A68auSkTf7srYUfBQ9Lm/zVr0VumEu1ufcXRY1qir, not stripped

bash-5.0# /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds init
{"status":"Success","message":"Init ok.","capabilities":{"attach":false}}

Also what's odd is that uds is there...

frozenprocess commented 3 years ago

Seems like it takes a bit to create the uds binary; that is why those issues show up at the start and get fixed later on.

frozenprocess commented 3 years ago

Mind running this? k3s crictl exec -it $(k3s crictl ps -a | grep calico-node | awk '{print $1}') ls /bin/

hickersonj commented 3 years ago
bash-5.0# k3s crictl exec -it $(k3s crictl ps -a | grep calico-node | awk '{print $1}') /bin/sh
sh-5.0# ls /bin/
atop      bc     chmod  cmp   cut       dd    dircolors   dos2unix  egrep      false  free        getfacl  gzip  iostat  iperf       killall  link    ls      md5sum    mktemp      mpstat    ncftpput  nsenter  ping   pstree    renice   rsync   sensors    sh     stat     su     tac      tee     top              tpm2_load     tpm2_startup      traceroute6  udevinfo  unlink  vi      watch  whoami
awk   bmon   chown  cp    date      df    dirname     du    env    fgrep  ftp         gettext  head  ipcalc  iperf3      kmod     ln      lsof    mkcramfs  more        mv        netstat   od   pkill  pwd   rm       scp     setfacl    sleep  strace   sudo   tail     telnet  touch            tpm2_nvlist   tpm2_verifysignature  tree     udevtest  unzip   view    wc     xargs
basename  cat    cksum  cpio  dbus-daemon   diff  dmesg       echo  expr       file   fusermount  grep     hostname  ipcrm   journalctl  last     logger  lsscsi  mkdir mount       ncftpget  nice  openssl  pmap   quota     rmdir    script  setfont    sort   strings  sync   tar      test    tpm2_getcap          tpm2_nvread   tr            true     umount    uptime  vmstat  wget   yes
bash      chgrp  clear  curl  dbus-uuidgen  dig   domainname  ed    fallocate  find   gawk        gunzip   id    ipcs    kill        ldd      login   lsyncd  mknod mountpoint  ncftpls   nohup pidof    ps readlink  rpcinfo  sed     setserial  ssh    stty     systemctl  taskset  tftp    tpm2_listpersistent  tpm2_pcrlist  traceroute        udevadm  uname     usleep  w   which  zip
frozenprocess commented 3 years ago

That doesn't look like the calico-node /bin directory!

hickersonj commented 3 years ago

Hmm, I am currently going through the logs of each node. Not sure if you can see the logs here; the ones labeled "bad" are from the non-working k3s install: https://drive.google.com/drive/folders/12azv6N8WuJwWo3OGbiby1Zzfb5sLyW1Q?usp=sharing

hickersonj commented 3 years ago

In calico-node-bad.log and calico-node.log there are some alarming messages, though they are only logged at [INFO]: specifically, the failed BPF programs and the missing vxlan tunnel below:

2021-01-13 23:32:05.721 [INFO][59] felix/route_table.go 241: Calculated interface name regexp regex="^vxlan.calico$"
2021-01-13 23:32:05.721 [INFO][59] felix/vxlan_mgr.go 340: VXLAN tunnel device thread started. mtu=1450
2021-01-13 23:32:05.722 [WARNING][59] felix/int_dataplane.go 448: Can't enable XDP acceleration. error=/sys/fs/bpf is not mounted
2021-01-13 23:32:05.723 [INFO][59] felix/connecttime.go 46: Running bpftool to look up programs attached to cgroup args=[]string{"bpftool", "-j", "-p", "cgroup", "show", "/run/calico/cgroup"}
2021-01-13 23:32:05.724 [INFO][59] felix/connecttime.go 49: Failed to list BPF programs.  Assuming not supported/nothing to clean up. error=exit status 255 output="[]\n"
2021-01-13 23:32:05.724 [INFO][59] felix/int_dataplane.go 517: Failed to remove BPF connect-time load balancer, ignoring. error=exit status 255
2021-01-13 23:32:05.728 [INFO][59] felix/cleanup.go 37: Failed to list BPF maps, assuming there's nothing to clean up. error=exit status 255
2021-01-13 23:32:05.728 [INFO][59] felix/route_table.go 241: Calculated interface name regexp regex="^cali.*"
2021-01-13 23:32:05.728 [INFO][59] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="all-ipam-pools" setType="hash:net"
2021-01-13 23:32:05.728 [INFO][59] felix/ipsets.go 119: Queueing IP set for creation family="inet" setID="masq-ipam-pools" setType="hash:net"
2021-01-13 23:32:05.728 [INFO][59] felix/route_table.go 241: Calculated interface name regexp regex="^wireguard.cali$"
2021-01-13 23:32:05.728 [INFO][59] felix/int_dataplane.go 772: Registering to report health.
2021-01-13 23:32:05.730 [INFO][59] felix/int_dataplane.go 1500: attempted to modprobe nf_conntrack_proto_sctp error=exit status 1 output=""
2021-01-13 23:32:05.730 [INFO][59] felix/int_dataplane.go 1502: Making sure IPv4 forwarding is enabled.

2021-01-13 23:32:05.782 [INFO][59] felix/vxlan_resolver.go 247: Missing vxlan tunnel address for node, cannot send VTEP yet node="nc1"
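Side note: the XDP warning above just means the BPF filesystem isn't mounted, and XDP acceleration is optional, so that alone shouldn't keep the node from going ready. A quick check on the host, assuming the standard mount point:

# verify the BPF filesystem is mounted; mount it if it isn't
mountpoint -q /sys/fs/bpf || mount -t bpf bpf /sys/fs/bpf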
hickersonj commented 3 years ago

That doesn't look like the calico-node /bin directory!

That looks like the /bin directory of my host.

frozenprocess commented 3 years ago

What happens if you delete the calico-node pod? The DaemonSet should create a new one:

kubectl delete pod calico-node-xxxxx -n calico-system
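An equivalent that avoids looking up the generated pod name, assuming the standard k8s-app=calico-node label:

# delete all calico-node pods in the namespace; the DaemonSet recreates them
kubectl delete pod -n calico-system -l k8s-app=calico-node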

hickersonj commented 3 years ago

It does create a new one, but with the same host paths in /bin.

frozenprocess commented 3 years ago

My container returns this:

[root@localhost ~]# crictl ps -a | grep calico-node
6a958de6312f3       183b53858d7da       About a minute ago   Running             calico-node               1                   4b3eac279a308
[root@localhost ~]# crictl exec -it 6a958de6312f3 /bin/sh
sh-4.4# ls /bin
'['     bash          bird.conf    birdcl    calico-node   coreutils   echo    find  grep   kill   ls      more     read       sed     sort   tee       touch   ulimit    wait     yes     zmore
 alias      bird          bird6    birdcl6   cat       cp          env     getopt    gzip   kmod   mkdir   nice     readlink   sh      stat   test      true    unalias   which    zcat
 basename   bird-wrapper.sh   bird6.conf   bpftool   cd        date        false   getopts   join   ln     mknod   printf   rm     sleep   tail   timeout   tsort   uniq      whoami   zless
hickersonj commented 3 years ago

The binary is even on the system!

bash-5.0# /mnt/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/29/fs/usr/bin/calico-node -v
v3.17.1

Baffling... but I suppose it has to be on the system, since it is logging.

hickersonj commented 3 years ago

crictl ps -a | grep calico-node

Mine is similar:

bash-5.0# k3s crictl ps -a | grep calico-node
d2f8f651b20ea       183b53858d7da       14 minutes ago      Running             calico-node               0                   38285ec8d8336
bash-5.0# k3s crictl exec -it d2f8f651b20ea /bin/sh
sh-5.0#

When you perform an ls of /bin, do you see binaries like bird* and calico-node though?

Actually, I do see them on my working system; those are the calico-node binaries. So something on this system is blocking it from fully starting; at least it's narrowed down to that.

frozenprocess commented 3 years ago
sh-4.4# ls -alih /bin
26219320 lrwxrwxrwx. 1 root root 7 Apr 23  2020 /bin -> usr/bin

Check if the calico-node binary is in /usr/bin/; maybe the symlink failed somehow?
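For example, something along these lines (a sketch, run from the host):

# inspect the /bin symlink and the calico-node binary from inside the container
k3s crictl exec -it $(k3s crictl ps -a | grep calico-node | awk '{print $1}') sh -c 'ls -ld /bin; ls -l /usr/bin/calico-node'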

hickersonj commented 3 years ago

Yes, the link is there on the host filesystem, but I think something else is messed up with how k3s creates the pod environment. The pods don't get their own filesystem env; it's just the host's.

Here is what the filesystem for the local-path-provisioner looks like on the system with the issue (this is just k3s). This file listing is taken on the host, from within the container's rootfs folder:

bash-5.0# ls /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/5d9aef94dd2b83e025b595822e503e0ca43a90962e4abdd5a0c66d7def9ec4d3/rootfs/bin/
arch  base64    busybox  chgrp  chown   cp    dd  dmesg      dumpkmap  ed     false    fdflush  fsync   grep    gzip      ionice  ipcalc    kill  linux32  ln     ls    makemime  mknod   more   mountpoint  mv   nice   ping   pipe_progress  ps   reformime  rm     run-parts  setpriv    sh     stat  su    tar    true    uname   watch
ash   bbconfig  cat  chmod  conspy  date  df  dnsdomainname  echo      egrep  fatattr  fgrep    getopt  gunzip  hostname  iostat  kbd_mode  link  linux64  login  lzop  mkdir     mktemp  mount  mpstat  netstat  pidof  ping6  printenv       pwd  rev        rmdir  sed    setserial  sleep  stty  sync  touch  umount  usleep  zcat

However, within the pod, /bin is the host's filesystem:

bash-5.0# k3s crictl exec -it 5d9aef94dd2b8 ls /bin/
atop          expr    ls          rmdir        tpm2_load
awk       fallocate   lsof        rpcinfo          tpm2_nvlist
basename      false   lsscsi      rsync        tpm2_nvread
bash          fgrep   lsyncd      scp          tpm2_pcrlist
bc        file    md5sum      script           tpm2_startup
bmon          find    mkcramfs    sed          tpm2_verifysignature
cat       free    mkdir       sensors          tr
chgrp         ftp     mknod       setfacl          traceroute
chmod         fusermount  mktemp      setfont          traceroute6
chown         gawk    more        setserial        tree
cksum         getfacl     mount       sh           true
clear         gettext     mountpoint  sleep        udevadm
cmp       grep    mpstat      sort         udevinfo
cp        gunzip      mv          ssh          udevtest
cpio          gzip    ncftpget    stat         umount
curl          head    ncftpls     strace           uname
cut       hostname    ncftpput    strings          unlink
date          id      netstat     stty         unzip
dbus-daemon   iostat      nice        su           uptime
dbus-uuidgen  ipcalc      nohup       sudo         usleep
dd        ipcrm   nsenter     sync         vi
df        ipcs    od          systemctl        view
diff          iperf   openssl     tac          vmstat
dig       iperf3      pidof       tail         w
dircolors     journalctl  ping        tar          watch
dirname       kill    pkill       taskset          wc
dmesg         killall     pmap        tee          wget
domainname    kmod    ps          telnet           which
dos2unix      last    pstree      test         whoami
du        ldd     pwd         tftp         xargs
echo          link    quota       top          yes
ed        ln      readlink    touch        zip
egrep         logger      renice      tpm2_getcap
env       login   rm          tpm2_listpersistent

I would expect them to be the same.
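One way to confirm this kind of mount-namespace leak (a sketch, assuming crictl's inspect output includes the shim-reported pid, which it normally does) is to compare namespace inodes:

# compare the container process's mount namespace with the host's;
# identical inode numbers would mean the container shares the host's mounts
PID=$(k3s crictl inspect 5d9aef94dd2b8 | grep -m1 '"pid"' | tr -dc '0-9')
readlink /proc/$PID/ns/mnt /proc/1/ns/mnt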

frozenprocess commented 3 years ago

Run k3s check-config; at the end it should look like:

- Storage Drivers:
  - "overlay":
    - CONFIG_OVERLAY_FS: enabled (as module)
hickersonj commented 3 years ago

Ok, I don't even know how this setup is even partially working with this many items missing...

This is a huge help, major thanks!! I'll update the kernel configs and try again:

bash-5.0# k3s check-config /root/config

Verifying binaries in /var/lib/rancher/k3s/data/3a8d3d90c0ac3531edbdbde77ce4a85062f4af8865b98cedc30ea730715d9d48/bin:
- sha256sum: good
- links: good

System:
- /usr/sbin iptables v1.6.2: older than v1.8
- swap: disabled
- routes: ok

Limits:
- /proc/sys/kernel/keys/root_maxkeys: 1000000

modprobe: module configs not found in modules.dep
info: reading kernel config from /root/config ...

Generally Necessary:
- cgroup hierarchy: properly mounted [/sys/fs/cgroup]
- CONFIG_NAMESPACES: enabled
- CONFIG_NET_NS: enabled
- CONFIG_PID_NS: missing (fail)
- CONFIG_IPC_NS: missing (fail)
- CONFIG_UTS_NS: missing (fail)
- CONFIG_CGROUPS: enabled
- CONFIG_CGROUP_CPUACCT: missing (fail)
- CONFIG_CGROUP_DEVICE: missing (fail)
- CONFIG_CGROUP_FREEZER: missing (fail)
- CONFIG_CGROUP_SCHED: missing (fail)
- CONFIG_CPUSETS: enabled
- CONFIG_MEMCG: missing (fail)
- CONFIG_KEYS: enabled
- CONFIG_VETH: enabled
- CONFIG_BRIDGE: enabled (as module)
- CONFIG_BRIDGE_NETFILTER: enabled (as module)
- CONFIG_NF_NAT_IPV4: enabled (as module)
- CONFIG_IP_NF_FILTER: enabled (as module)
- CONFIG_IP_NF_TARGET_MASQUERADE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: missing (fail)
- CONFIG_NETFILTER_XT_MATCH_CONNTRACK: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_IPVS: missing (fail)
- CONFIG_IP_NF_NAT: enabled (as module)
- CONFIG_NF_NAT: enabled (as module)
- CONFIG_NF_NAT_NEEDED: enabled
- CONFIG_POSIX_MQUEUE: enabled

Optional Features:
- CONFIG_USER_NS: missing
- CONFIG_SECCOMP: missing
- CONFIG_CGROUP_PIDS: missing
- CONFIG_BLK_CGROUP: enabled
- CONFIG_BLK_DEV_THROTTLING: enabled
- CONFIG_CGROUP_PERF: missing
- CONFIG_CGROUP_HUGETLB: missing
- CONFIG_NET_CLS_CGROUP: missing
- CONFIG_CGROUP_NET_PRIO: missing
- CONFIG_CFS_BANDWIDTH: missing
- CONFIG_FAIR_GROUP_SCHED: missing
- CONFIG_RT_GROUP_SCHED: missing
- CONFIG_IP_NF_TARGET_REDIRECT: enabled (as module)
- CONFIG_IP_SET: enabled (as module)
- CONFIG_IP_VS: enabled
- CONFIG_IP_VS_NFCT: missing
- CONFIG_IP_VS_PROTO_TCP: enabled
- CONFIG_IP_VS_PROTO_UDP: enabled
- CONFIG_IP_VS_RR: enabled
- CONFIG_EXT4_FS: enabled
- CONFIG_EXT4_FS_POSIX_ACL: enabled
- CONFIG_EXT4_FS_SECURITY: enabled
- Network Drivers:
  - "overlay":
    - CONFIG_VXLAN: missing
      Optional (for encrypted networks):
      - CONFIG_CRYPTO: enabled
      - CONFIG_CRYPTO_AEAD: missing
      - CONFIG_CRYPTO_GCM: missing
      - CONFIG_CRYPTO_SEQIV: missing
      - CONFIG_CRYPTO_GHASH: missing
      - CONFIG_XFRM: enabled
      - CONFIG_XFRM_USER: missing
      - CONFIG_XFRM_ALGO: enabled
      - CONFIG_INET_ESP: missing
      - CONFIG_INET_XFRM_MODE_TRANSPORT: missing
- Storage Drivers:
  - "overlay":
    - CONFIG_OVERLAY_FS: missing
frozenprocess commented 3 years ago

Well! At least we found the problem! 👍

hickersonj commented 3 years ago

Must be a long day for me; I was showing the wrong kernel config. Looks like everything is correct there:

bash-5.0# k3s check-config /root/config64

Verifying binaries in /var/lib/rancher/k3s/data/3a8d3d90c0ac3531edbdbde77ce4a85062f4af8865b98cedc30ea730715d9d48/bin:
- sha256sum: good
- links: good

System:
- /usr/sbin iptables v1.6.2: older than v1.8
- swap: disabled
- routes: ok

Limits:
- /proc/sys/kernel/keys/root_maxkeys: 1000000

modprobe: module configs not found in modules.dep
info: reading kernel config from /root/config64 ...

Generally Necessary:
- cgroup hierarchy: properly mounted [/sys/fs/cgroup]
- CONFIG_NAMESPACES: enabled
- CONFIG_NET_NS: enabled
- CONFIG_PID_NS: enabled
- CONFIG_IPC_NS: enabled
- CONFIG_UTS_NS: enabled
- CONFIG_CGROUPS: enabled
- CONFIG_CGROUP_CPUACCT: enabled
- CONFIG_CGROUP_DEVICE: enabled
- CONFIG_CGROUP_FREEZER: enabled
- CONFIG_CGROUP_SCHED: enabled
- CONFIG_CPUSETS: enabled
- CONFIG_MEMCG: enabled
- CONFIG_KEYS: enabled
- CONFIG_VETH: enabled
- CONFIG_BRIDGE: enabled (as module)
- CONFIG_BRIDGE_NETFILTER: enabled (as module)
- CONFIG_NF_NAT_IPV4: enabled (as module)
- CONFIG_IP_NF_FILTER: enabled (as module)
- CONFIG_IP_NF_TARGET_MASQUERADE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: enabled
- CONFIG_NETFILTER_XT_MATCH_CONNTRACK: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_IPVS: enabled
- CONFIG_IP_NF_NAT: enabled (as module)
- CONFIG_NF_NAT: enabled (as module)
- CONFIG_NF_NAT_NEEDED: enabled
- CONFIG_POSIX_MQUEUE: enabled

Optional Features:
- CONFIG_USER_NS: enabled
- CONFIG_SECCOMP: enabled
- CONFIG_CGROUP_PIDS: enabled
- CONFIG_BLK_CGROUP: enabled
- CONFIG_BLK_DEV_THROTTLING: enabled
- CONFIG_CGROUP_PERF: enabled
- CONFIG_CGROUP_HUGETLB: enabled
- CONFIG_NET_CLS_CGROUP: enabled
- CONFIG_CGROUP_NET_PRIO: enabled
- CONFIG_CFS_BANDWIDTH: enabled
- CONFIG_FAIR_GROUP_SCHED: enabled
- CONFIG_RT_GROUP_SCHED: enabled
- CONFIG_IP_NF_TARGET_REDIRECT: enabled (as module)
- CONFIG_IP_SET: enabled (as module)
- CONFIG_IP_VS: enabled
- CONFIG_IP_VS_NFCT: enabled
- CONFIG_IP_VS_PROTO_TCP: enabled
- CONFIG_IP_VS_PROTO_UDP: enabled
- CONFIG_IP_VS_RR: enabled
- CONFIG_EXT4_FS: enabled
- CONFIG_EXT4_FS_POSIX_ACL: enabled
- CONFIG_EXT4_FS_SECURITY: enabled
- Network Drivers:
  - "overlay":
    - CONFIG_VXLAN: enabled
      Optional (for encrypted networks):
      - CONFIG_CRYPTO: enabled
      - CONFIG_CRYPTO_AEAD: enabled
      - CONFIG_CRYPTO_GCM: enabled
      - CONFIG_CRYPTO_SEQIV: enabled
      - CONFIG_CRYPTO_GHASH: enabled
      - CONFIG_XFRM: enabled
      - CONFIG_XFRM_USER: enabled
      - CONFIG_XFRM_ALGO: enabled
      - CONFIG_INET_ESP: enabled
      - CONFIG_INET_XFRM_MODE_TRANSPORT: enabled
- Storage Drivers:
  - "overlay":
    - CONFIG_OVERLAY_FS: enabled

STATUS: pass
frozenprocess commented 3 years ago

@hickersonj I did an air-gapped installation on Fedora 30.

Installer arguments:

K3S_KUBECONFIG_MODE="644" INSTALL_K3S_SKIP_DOWNLOAD=true  INSTALL_K3S_EXEC="--flannel-backend=none --cluster-cidr=172.16.0.0/16 --disable-network-policy --disable=traefik" ./install.sh 

Using this tutorial, I created my private registry and pushed the necessary images into it.

ice@Rezas-MacBook-Pro ~ % docker image ls | grep localhost
localhost:5000/calico/pod2daemon-flexvol   v3.17.1   819d15844f0c   4 weeks ago    21.7MB
localhost:5000/calico/cni                  v3.17.1   64e5dfd8d597   4 weeks ago    128MB
localhost:5000/tigera/operator             v1.13.2   607a9f677b8f   4 weeks ago    47MB
localhost:5000/calico/node                 v3.17.1   183b53858d7d   4 weeks ago    165MB
localhost:5000/calico/kube-controllers     v3.17.1   278f40d9f3b8   4 weeks ago    52.1MB
localhost:5000/calico/typha                v3.17.1   919a16510f41   4 weeks ago    51.5MB

[root@localhost ~]# cat /etc/rancher/k3s/registries.yaml 
mirrors:
  quay.io:
    endpoint:
      - "http://192.168.209.1:5000"
  docker.io:
    endpoint:
      - "http://192.168.209.1:5000"

Everything works as expected:

[root@localhost ~]# kubectl get pods -A
NAMESPACE         NAME                                       READY   STATUS             RESTARTS   AGE
tigera-operator   tigera-operator-657cc89589-ktftm           1/1     Running            0          6h46m
calico-system     calico-typha-7bb97bbd5b-5pp5n              1/1     Running            0          6h46m
calico-system     calico-node-xpjch                          1/1     Running            0          6h46m
kube-system       metrics-server-86cbb8457f-67cfp            1/1     Running            0          6h46m
calico-system     calico-kube-controllers-7c87576bb9-wxnzv   1/1     Running            0          6h46m
kube-system       local-path-provisioner-7c458769fb-lx66v    1/1     Running            0          6h46m
kube-system       coredns-854c77959c-f6j48                   0/1     CrashLoopBackOff   42         6h41m
hickersonj commented 3 years ago

Hi @frozenprocess, I spent quite some time last night figuring it out and got it to work. It turns out the setup I was using was not working because it runs in an initramfs. Initially, when setting up k3s, I had been using a chroot to work around pivot_root's inability to switch root when running on an initramfs. In fact, I've been doing it this way for some time and thought pods such as Alpine were working fine, but calico-node and your suggestions proved otherwise.

The odd thing is that the calico container and other containers could break out of that root and see the host's "/" directory; how that's possible, I'm not sure.

I changed the containerd config.toml to use NoPivotRoot, apparently a newish option for the io.containerd.runc.v2 runtime (sketched below). Not only does Calico now work, but I no longer need the chroot, though this has other security concerns that I'll need to work through.
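Roughly, the change looks like this (a sketch, assuming containerd's v2 CRI config schema; since k3s regenerates its config.toml on startup, a persistent change normally goes in /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl):

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    # fall back to MS_MOVE + chroot instead of pivot_root (needed on initramfs roots)
    NoPivotRoot = true

With that in place, the pods come up: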

NAMESPACE         NAME                                       READY   STATUS    RESTARTS   AGE
tigera-operator   tigera-operator-657cc89589-2sbrq           1/1     Running   2          16m
calico-system     calico-typha-f9bd5fdff-5fml8               1/1     Running   1          15m
kube-system       coredns-854c77959c-xtvtn                   1/1     Running   0          16m
kube-system       metrics-server-86cbb8457f-wt58q            1/1     Running   0          16m
kube-system       local-path-provisioner-7c458769fb-vsr5p    1/1     Running   0          16m
calico-system     calico-kube-controllers-7c87576bb9-wclgp   1/1     Running   0          15m
calico-system     calico-node-tsrgg                          1/1     Running   1          15m

Really appreciate your help in debugging this!