Closed — cui3093 closed this issue 5 months ago
The full logs exceeded GitHub's 65535-byte comment limit, so I'm adding the rest separately:
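For context, the logs below can be captured and split into comment-sized chunks with something like the following (the namespace and resource names are assumptions based on a standard Calico install, not taken from this cluster):

```shell
# Capture logs from the Calico controllers deployment and the node daemonset.
kubectl logs -n kube-system deploy/calico-kube-controllers > controllers.log
kubectl logs -n kube-system ds/calico-node -c calico-node > node.log

# split -C caps each output file at ~64000 bytes without breaking lines,
# keeping every chunk safely under the 65535-byte comment limit.
split -C 64000 -d controllers.log controllers.part.
```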
2023-05-12 02:01:06.951 [INFO][1] main.go 103: Loaded configuration from environment config=&config.Config{LogLevel:"info", WorkloadEndpointWorkers:1, ProfileWorkers:1, PolicyWorkers:1, NodeWorkers:1, Kubeconfig:"", DatastoreType:"etcdv3"}
W0512 02:01:06.958451 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2023-05-12 02:01:06.959 [INFO][1] main.go 127: Ensuring Calico datastore is initialized
2023-05-12 02:01:06.985 [INFO][1] main.go 153: Calico datastore is initialized
2023-05-12 02:01:06.986 [INFO][1] main.go 190: Getting initial config snapshot from datastore
2023-05-12 02:01:07.397 [INFO][1] resources.go 350: Main client watcher loop
2023-05-12 02:01:07.397 [INFO][1] main.go 193: Got initial config snapshot
2023-05-12 02:01:07.398 [INFO][1] watchersyncer.go 89: Start called
2023-05-12 02:01:07.398 [INFO][1] main.go 207: Starting status report routine
2023-05-12 02:01:07.398 [INFO][1] main.go 216: Starting Prometheus metrics server on port 9094
2023-05-12 02:01:07.398 [INFO][1] main.go 495: Starting informer informer=&cache.sharedIndexInformer{indexer:(*cache.cache)(0xc0001266a8), controller:cache.Controller(nil), processor:(*cache.sharedProcessor)(0xc00052cb60), cacheMutationDetector:cache.dummyMutationDetector{}, listerWatcher:(*cache.ListWatch)(0xc000126690), objectType:(*v1.Pod)(0xc0003ca400), resyncCheckPeriod:0, defaultEventHandlerResyncPeriod:0, clock:(*clock.RealClock)(0x3013200), started:false, stopped:false, startedLock:sync.Mutex{state:0, sema:0x0}, blockDeltas:sync.Mutex{state:0, sema:0x0}, watchErrorHandler:(cache.WatchErrorHandler)(nil), transform:(cache.TransformFunc)(nil)}
2023-05-12 02:01:07.398 [INFO][1] main.go 495: Starting informer informer=&cache.sharedIndexInformer{indexer:(*cache.cache)(0xc0001266f0), controller:cache.Controller(nil), processor:(*cache.sharedProcessor)(0xc00052cbd0), cacheMutationDetector:cache.dummyMutationDetector{}, listerWatcher:(*cache.ListWatch)(0xc0001266d8), objectType:(*v1.Node)(0xc0003cc300), resyncCheckPeriod:0, defaultEventHandlerResyncPeriod:0, clock:(*clock.RealClock)(0x3013200), started:false, stopped:false, startedLock:sync.Mutex{state:0, sema:0x0}, blockDeltas:sync.Mutex{state:0, sema:0x0}, watchErrorHandler:(cache.WatchErrorHandler)(nil), transform:(cache.TransformFunc)(nil)}
2023-05-12 02:01:07.398 [INFO][1] main.go 501: Starting controller ControllerType="Pod"
2023-05-12 02:01:07.398 [INFO][1] main.go 501: Starting controller ControllerType="Namespace"
2023-05-12 02:01:07.398 [INFO][1] main.go 501: Starting controller ControllerType="NetworkPolicy"
2023-05-12 02:01:07.398 [INFO][1] main.go 501: Starting controller ControllerType="Node"
2023-05-12 02:01:07.399 [INFO][1] main.go 501: Starting controller ControllerType="ServiceAccount"
2023-05-12 02:01:07.399 [INFO][1] serviceaccount_controller.go 152: Starting ServiceAccount/Profile controller
I0512 02:01:07.399289 1 shared_informer.go:255] Waiting for caches to sync for service-accounts
2023-05-12 02:01:07.399 [INFO][1] watchersyncer.go 130: Sending status update Status=wait-for-ready
2023-05-12 02:01:07.399 [INFO][1] syncer.go 86: Node controller syncer status updated: wait-for-ready
2023-05-12 02:01:07.399 [INFO][1] watchersyncer.go 149: Starting main event processing loop
2023-05-12 02:01:07.399 [INFO][1] watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/ippools"
2023-05-12 02:01:07.400 [INFO][1] main.go 342: Starting periodic etcdv3 compaction period=10m0s
2023-05-12 02:01:07.403 [INFO][1] watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/nodes"
2023-05-12 02:01:07.404 [INFO][1] watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/clusterinformations"
2023-05-12 02:01:07.405 [INFO][1] watchercache.go 181: Full resync is required ListRoot="/calico/ipam/v2/assignment/"
2023-05-12 02:01:07.405 [INFO][1] pod_controller.go 226: Starting Pod/WorkloadEndpoint controller
2023-05-12 02:01:07.405 [INFO][1] namespace_controller.go 158: Starting Namespace/Profile controller
I0512 02:01:07.405621 1 shared_informer.go:255] Waiting for caches to sync for namespaces
2023-05-12 02:01:07.405 [INFO][1] policy_controller.go 149: Starting NetworkPolicy controller
I0512 02:01:07.406070 1 shared_informer.go:255] Waiting for caches to sync for network-policies
2023-05-12 02:01:07.406 [INFO][1] controller.go 193: Starting Node controller
I0512 02:01:07.406131 1 shared_informer.go:255] Waiting for caches to sync for nodes
I0512 02:01:07.415883 1 shared_informer.go:255] Waiting for caches to sync for pods
2023-05-12 02:01:07.416 [INFO][1] watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/ippools"
2023-05-12 02:01:07.417 [INFO][1] watchercache.go 294: Sending synced update ListRoot="/calico/ipam/v2/assignment/"
2023-05-12 02:01:07.417 [INFO][1] watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/clusterinformations"
2023-05-12 02:01:07.418 [INFO][1] watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/nodes"
2023-05-12 02:01:07.418 [INFO][1] watchersyncer.go 130: Sending status update Status=resync
2023-05-12 02:01:07.418 [INFO][1] syncer.go 86: Node controller syncer status updated: resync
2023-05-12 02:01:07.418 [INFO][1] watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:07.418 [INFO][1] watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:07.418 [INFO][1] watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:07.418 [INFO][1] watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:07.418 [INFO][1] watchersyncer.go 221: All watchers have sync'd data - sending data and final sync
2023-05-12 02:01:07.418 [WARNING][1] labels.go 85: Unexpected kind received over syncer: IPPool(default-ipv4-ippool)
2023-05-12 02:01:07.418 [WARNING][1] labels.go 85: Unexpected kind received over syncer: ClusterInformation(default)
2023-05-12 02:01:07.418 [INFO][1] watchersyncer.go 130: Sending status update Status=in-sync
2023-05-12 02:01:07.418 [INFO][1] syncer.go 86: Node controller syncer status updated: in-sync
2023-05-12 02:01:07.428 [INFO][1] hostendpoints.go 173: successfully synced all hostendpoints
I0512 02:01:07.502774 1 shared_informer.go:262] Caches are synced for service-accounts
2023-05-12 02:01:07.502 [INFO][1] serviceaccount_controller.go 170: ServiceAccount/Profile controller is now running
I0512 02:01:07.505930 1 shared_informer.go:262] Caches are synced for namespaces
2023-05-12 02:01:07.505 [INFO][1] namespace_controller.go 176: Namespace/Profile controller is now running
2023-05-12 02:01:07.506 [WARNING][1] cache.go 278: Value for key is missing in datastore, queueing update to reprogram key="ksa.kube-system.calico-node" type="ServiceAccount"
2023-05-12 02:01:07.507 [WARNING][1] cache.go 278: Value for key is missing in datastore, queueing update to reprogram key="ksa.kube-system.calico-kube-controllers" type="ServiceAccount"
I0512 02:01:07.507339 1 shared_informer.go:262] Caches are synced for nodes
I0512 02:01:07.507355 1 shared_informer.go:255] Waiting for caches to sync for pods
I0512 02:01:07.507408 1 shared_informer.go:262] Caches are synced for pods
2023-05-12 02:01:07.507 [INFO][1] ipam.go 253: Will run periodic IPAM sync every 7m30s
2023-05-12 02:01:07.507 [INFO][1] ipam.go 331: Syncer is InSync, kicking sync channel status=in-sync
2023-05-12 02:01:07.508 [INFO][1] serviceaccount_controller.go 222: Create/Update ServiceAccount Profile in Calico datastore key="ksa.kube-system.calico-node"
I0512 02:01:07.508403 1 shared_informer.go:262] Caches are synced for network-policies
2023-05-12 02:01:07.508 [INFO][1] policy_controller.go 171: NetworkPolicy controller is now running
2023-05-12 02:01:07.516 [INFO][1] serviceaccount_controller.go 239: Successfully created ServiceAccount profile key="ksa.kube-system.calico-node"
2023-05-12 02:01:07.516 [INFO][1] serviceaccount_controller.go 222: Create/Update ServiceAccount Profile in Calico datastore key="ksa.kube-system.calico-kube-controllers"
I0512 02:01:07.517666 1 shared_informer.go:262] Caches are synced for pods
2023-05-12 02:01:07.517 [INFO][1] pod_controller.go 250: Pod/WorkloadEndpoint controller is now running
2023-05-12 02:01:07.570 [INFO][1] serviceaccount_controller.go 239: Successfully created ServiceAccount profile key="ksa.kube-system.calico-kube-controllers"
2023-05-12 02:04:35.680 [INFO][1] serviceaccount_controller.go 222: Create/Update ServiceAccount Profile in Calico datastore key="ksa.kube-system.coredns"
2023-05-12 02:04:35.691 [INFO][1] serviceaccount_controller.go 239: Successfully created ServiceAccount profile key="ksa.kube-system.coredns"
Defaulted container "calico-node" out of: calico-node, install-cni (init), mount-bpffs (init)
2023-05-12 02:01:15.341 [INFO][9] startup/startup.go 427: Early log level set to info
2023-05-12 02:01:15.342 [INFO][9] startup/utils.go 129: Using stored node name k8s-master01 from /var/lib/calico/nodename
2023-05-12 02:01:15.342 [INFO][9] startup/utils.go 139: Determined node name: k8s-master01
2023-05-12 02:01:15.342 [INFO][9] startup/startup.go 94: Starting node k8s-master01 with version v3.24.5
2023-05-12 02:01:15.344 [INFO][9] startup/startup.go 106: Skipping datastore connection test
2023-05-12 02:01:15.387 [INFO][9] startup/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:01:15.388 [INFO][9] startup/startup.go 701: No AS number configured on node resource, using global value
2023-05-12 02:01:15.413 [INFO][9] startup/startup.go 676: FELIX_IPV6SUPPORT is false through environment variable
2023-05-12 02:01:15.423 [INFO][9] startup/startup.go 218: Using node name: k8s-master01
2023-05-12 02:01:15.423 [INFO][9] startup/utils.go 191: Setting NetworkUnavailable to false
2023-05-12 02:01:15.573 [INFO][19] tunnel-ip-allocator/env_var_loader.go 40: Found felix environment variable: "wireguardmtu"="0"
2023-05-12 02:01:15.574 [INFO][19] tunnel-ip-allocator/env_var_loader.go 40: Found felix environment variable: "vxlanmtu"="0"
2023-05-12 02:01:15.574 [INFO][19] tunnel-ip-allocator/env_var_loader.go 40: Found felix environment variable: "healthenabled"="true"
2023-05-12 02:01:15.574 [INFO][19] tunnel-ip-allocator/env_var_loader.go 40: Found felix environment variable: "usagereportingenabled"="false"
2023-05-12 02:01:15.574 [INFO][19] tunnel-ip-allocator/env_var_loader.go 40: Found felix environment variable: "defaultendpointtohostaction"="ACCEPT"
2023-05-12 02:01:15.574 [INFO][19] tunnel-ip-allocator/env_var_loader.go 40: Found felix environment variable: "ipinipmtu"="0"
2023-05-12 02:01:15.575 [INFO][19] tunnel-ip-allocator/env_var_loader.go 40: Found felix environment variable: "ipv6support"="false"
2023-05-12 02:01:15.575 [INFO][19] tunnel-ip-allocator/config_params.go 435: Merging in config from environment variable: map[defaultendpointtohostaction:ACCEPT healthenabled:true ipinipmtu:0 ipv6support:false usagereportingenabled:false vxlanmtu:0 wireguardmtu:0]
2023-05-12 02:01:15.575 [INFO][19] tunnel-ip-allocator/config_params.go 542: Parsing value for IpInIpMtu: 0 (from environment variable)
2023-05-12 02:01:15.575 [INFO][19] tunnel-ip-allocator/config_params.go 578: Parsed value for IpInIpMtu: 0 (from environment variable)
2023-05-12 02:01:15.575 [INFO][19] tunnel-ip-allocator/config_params.go 542: Parsing value for Ipv6Support: false (from environment variable)
2023-05-12 02:01:15.576 [INFO][19] tunnel-ip-allocator/config_params.go 578: Parsed value for Ipv6Support: false (from environment variable)
2023-05-12 02:01:15.576 [INFO][19] tunnel-ip-allocator/config_params.go 542: Parsing value for WireguardMTU: 0 (from environment variable)
2023-05-12 02:01:15.576 [INFO][19] tunnel-ip-allocator/config_params.go 578: Parsed value for WireguardMTU: 0 (from environment variable)
2023-05-12 02:01:15.576 [INFO][19] tunnel-ip-allocator/config_params.go 542: Parsing value for VXLANMTU: 0 (from environment variable)
2023-05-12 02:01:15.577 [INFO][19] tunnel-ip-allocator/config_params.go 578: Parsed value for VXLANMTU: 0 (from environment variable)
2023-05-12 02:01:15.577 [INFO][19] tunnel-ip-allocator/config_params.go 542: Parsing value for HealthEnabled: true (from environment variable)
2023-05-12 02:01:15.577 [INFO][19] tunnel-ip-allocator/config_params.go 578: Parsed value for HealthEnabled: true (from environment variable)
2023-05-12 02:01:15.578 [INFO][19] tunnel-ip-allocator/config_params.go 542: Parsing value for UsageReportingEnabled: false (from environment variable)
2023-05-12 02:01:15.578 [INFO][19] tunnel-ip-allocator/config_params.go 578: Parsed value for UsageReportingEnabled: false (from environment variable)
2023-05-12 02:01:15.578 [INFO][19] tunnel-ip-allocator/config_params.go 542: Parsing value for DefaultEndpointToHostAction: ACCEPT (from environment variable)
2023-05-12 02:01:15.578 [INFO][19] tunnel-ip-allocator/config_params.go 578: Parsed value for DefaultEndpointToHostAction: ACCEPT (from environment variable)
2023-05-12 02:01:15.647 [INFO][19] tunnel-ip-allocator/allocateip.go 340: Current address is still valid, do nothing currentAddr="172.25.244.192" type="ipipTunnelAddress"
Calico node started successfully
bird: Unable to open configuration file /etc/calico/confd/config/bird6.cfg: No such file or directory
bird: Unable to open configuration file /etc/calico/confd/config/bird.cfg: No such file or directory
2023-05-12 02:01:17.089 [INFO][82] tunnel-ip-allocator/env_var_loader.go 40: Found felix environment variable: "wireguardmtu"="0"
2023-05-12 02:01:17.089 [INFO][82] tunnel-ip-allocator/env_var_loader.go 40: Found felix environment variable: "vxlanmtu"="0"
2023-05-12 02:01:17.089 [INFO][82] tunnel-ip-allocator/env_var_loader.go 40: Found felix environment variable: "healthenabled"="true"
2023-05-12 02:01:17.090 [INFO][82] tunnel-ip-allocator/env_var_loader.go 40: Found felix environment variable: "usagereportingenabled"="false"
2023-05-12 02:01:17.090 [INFO][82] tunnel-ip-allocator/env_var_loader.go 40: Found felix environment variable: "defaultendpointtohostaction"="ACCEPT"
2023-05-12 02:01:17.090 [INFO][82] tunnel-ip-allocator/env_var_loader.go 40: Found felix environment variable: "ipinipmtu"="0"
2023-05-12 02:01:17.090 [INFO][82] tunnel-ip-allocator/env_var_loader.go 40: Found felix environment variable: "ipv6support"="false"
2023-05-12 02:01:17.091 [INFO][82] tunnel-ip-allocator/config_params.go 435: Merging in config from environment variable: map[defaultendpointtohostaction:ACCEPT healthenabled:true ipinipmtu:0 ipv6support:false usagereportingenabled:false vxlanmtu:0 wireguardmtu:0]
2023-05-12 02:01:17.091 [INFO][82] tunnel-ip-allocator/config_params.go 542: Parsing value for DefaultEndpointToHostAction: ACCEPT (from environment variable)
2023-05-12 02:01:17.091 [INFO][82] tunnel-ip-allocator/config_params.go 578: Parsed value for DefaultEndpointToHostAction: ACCEPT (from environment variable)
2023-05-12 02:01:17.091 [INFO][82] tunnel-ip-allocator/config_params.go 542: Parsing value for IpInIpMtu: 0 (from environment variable)
2023-05-12 02:01:17.091 [INFO][82] tunnel-ip-allocator/config_params.go 578: Parsed value for IpInIpMtu: 0 (from environment variable)
2023-05-12 02:01:17.091 [INFO][82] tunnel-ip-allocator/config_params.go 542: Parsing value for Ipv6Support: false (from environment variable)
2023-05-12 02:01:17.091 [INFO][82] tunnel-ip-allocator/config_params.go 578: Parsed value for Ipv6Support: false (from environment variable)
2023-05-12 02:01:17.092 [INFO][82] tunnel-ip-allocator/config_params.go 542: Parsing value for WireguardMTU: 0 (from environment variable)
2023-05-12 02:01:17.092 [INFO][82] tunnel-ip-allocator/config_params.go 578: Parsed value for WireguardMTU: 0 (from environment variable)
2023-05-12 02:01:17.093 [INFO][82] tunnel-ip-allocator/config_params.go 542: Parsing value for VXLANMTU: 0 (from environment variable)
2023-05-12 02:01:17.094 [INFO][82] tunnel-ip-allocator/config_params.go 578: Parsed value for VXLANMTU: 0 (from environment variable)
2023-05-12 02:01:17.094 [INFO][82] tunnel-ip-allocator/config_params.go 542: Parsing value for HealthEnabled: true (from environment variable)
2023-05-12 02:01:17.094 [INFO][82] tunnel-ip-allocator/config_params.go 578: Parsed value for HealthEnabled: true (from environment variable)
2023-05-12 02:01:17.095 [INFO][82] tunnel-ip-allocator/config_params.go 542: Parsing value for UsageReportingEnabled: false (from environment variable)
2023-05-12 02:01:17.096 [INFO][82] tunnel-ip-allocator/config_params.go 578: Parsed value for UsageReportingEnabled: false (from environment variable)
2023-05-12 02:01:17.097 [INFO][82] tunnel-ip-allocator/watchersyncer.go 89: Start called
2023-05-12 02:01:17.097 [INFO][82] tunnel-ip-allocator/watchersyncer.go 130: Sending status update Status=wait-for-ready
2023-05-12 02:01:17.098 [INFO][82] tunnel-ip-allocator/watchersyncer.go 149: Starting main event processing loop
2023-05-12 02:01:17.098 [INFO][82] tunnel-ip-allocator/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/nodes/k8s-master01"
2023-05-12 02:01:17.098 [INFO][82] tunnel-ip-allocator/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/ippools"
2023-05-12 02:01:17.111 [INFO][82] tunnel-ip-allocator/watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/nodes/k8s-master01"
2023-05-12 02:01:17.112 [INFO][82] tunnel-ip-allocator/watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/ippools"
2023-05-12 02:01:17.112 [INFO][82] tunnel-ip-allocator/watchersyncer.go 130: Sending status update Status=resync
2023-05-12 02:01:17.112 [INFO][82] tunnel-ip-allocator/watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:17.112 [INFO][82] tunnel-ip-allocator/watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:17.113 [INFO][82] tunnel-ip-allocator/watchersyncer.go 221: All watchers have sync'd data - sending data and final sync
2023-05-12 02:01:17.113 [INFO][82] tunnel-ip-allocator/watchersyncer.go 130: Sending status update Status=in-sync
2023-05-12 02:01:17.116 [INFO][79] confd/config.go 82: Skipping confd config file.
2023-05-12 02:01:17.119 [INFO][79] confd/run.go 18: Starting calico-confd
W0512 02:01:17.141897 78 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2023-05-12 02:01:17.150 [INFO][82] tunnel-ip-allocator/allocateip.go 340: Current address is still valid, do nothing currentAddr="172.25.244.192" type="ipipTunnelAddress"
2023-05-12 02:01:17.169 [INFO][78] cni-config-monitor/token_watch.go 225: Update of CNI kubeconfig triggered based on elapsed time.
2023-05-12 02:01:17.171 [INFO][78] cni-config-monitor/token_watch.go 279: Wrote updated CNI kubeconfig file. path="/host/etc/cni/net.d/calico-kubeconfig"
2023-05-12 02:01:17.174 [INFO][77] status-reporter/startup.go 427: Early log level set to info
2023-05-12 02:01:17.176 [INFO][77] status-reporter/watchersyncer.go 89: Start called
2023-05-12 02:01:17.176 [INFO][77] status-reporter/watchersyncer.go 130: Sending status update Status=wait-for-ready
2023-05-12 02:01:17.177 [INFO][77] status-reporter/watchersyncer.go 149: Starting main event processing loop
2023-05-12 02:01:17.177 [INFO][77] status-reporter/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/caliconodestatuses"
W0512 02:01:17.187221 79 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2023-05-12 02:01:17.187 [INFO][79] confd/client.go 1419: Advertise global service ranges from this node
2023-05-12 02:01:17.188 [INFO][79] confd/client.go 1364: Updated with new cluster IP CIDRs: []
2023-05-12 02:01:17.188 [INFO][79] confd/client.go 1419: Advertise global service ranges from this node
2023-05-12 02:01:17.188 [INFO][79] confd/client.go 1355: Updated with new external IP CIDRs: []
2023-05-12 02:01:17.188 [INFO][79] confd/client.go 1419: Advertise global service ranges from this node
2023-05-12 02:01:17.188 [INFO][79] confd/client.go 1374: Updated with new Loadbalancer IP CIDRs: []
2023-05-12 02:01:17.188 [INFO][79] confd/watchersyncer.go 89: Start called
2023-05-12 02:01:17.188 [INFO][79] confd/watchersyncer.go 130: Sending status update Status=wait-for-ready
2023-05-12 02:01:17.188 [INFO][79] confd/client.go 422: Source SourceRouteGenerator readiness changed, ready=true
2023-05-12 02:01:17.189 [INFO][79] confd/watchersyncer.go 149: Starting main event processing loop
2023-05-12 02:01:17.189 [INFO][79] confd/watchercache.go 181: Full resync is required ListRoot="/calico/ipam/v2/host/k8s-master01"
2023-05-12 02:01:17.189 [INFO][79] confd/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/ippools"
2023-05-12 02:01:17.190 [INFO][79] confd/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/bgpconfigurations"
2023-05-12 02:01:17.190 [INFO][79] confd/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/nodes"
2023-05-12 02:01:17.190 [INFO][79] confd/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/bgppeers"
2023-05-12 02:01:17.192 [INFO][77] status-reporter/watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/caliconodestatuses"
2023-05-12 02:01:17.193 [INFO][77] status-reporter/watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:17.193 [INFO][77] status-reporter/watchersyncer.go 130: Sending status update Status=resync
2023-05-12 02:01:17.193 [INFO][77] status-reporter/watchersyncer.go 221: All watchers have sync'd data - sending data and final sync
2023-05-12 02:01:17.193 [INFO][77] status-reporter/watchersyncer.go 130: Sending status update Status=in-sync
2023-05-12 02:01:17.196 [INFO][83] monitor-addresses/startup.go 427: Early log level set to info
2023-05-12 02:01:17.205 [INFO][79] confd/watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/nodes"
2023-05-12 02:01:17.205 [INFO][79] confd/watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/bgppeers"
2023-05-12 02:01:17.205 [INFO][79] confd/watchercache.go 294: Sending synced update ListRoot="/calico/ipam/v2/host/k8s-master01"
2023-05-12 02:01:17.206 [INFO][79] confd/watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/bgpconfigurations"
2023-05-12 02:01:17.206 [INFO][79] confd/watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/ippools"
2023-05-12 02:01:17.208 [INFO][79] confd/watchersyncer.go 130: Sending status update Status=resync
2023-05-12 02:01:17.208 [INFO][79] confd/watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:17.208 [INFO][79] confd/watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:17.209 [INFO][79] confd/watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:17.209 [INFO][79] confd/watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:17.209 [INFO][83] monitor-addresses/utils.go 127: Using NODENAME environment for node name k8s-master01
2023-05-12 02:01:17.212 [INFO][79] confd/watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:17.209 [INFO][83] monitor-addresses/utils.go 139: Determined node name: k8s-master01
2023-05-12 02:01:17.212 [INFO][79] confd/watchersyncer.go 221: All watchers have sync'd data - sending data and final sync
2023-05-12 02:01:17.212 [INFO][79] confd/watchersyncer.go 130: Sending status update Status=in-sync
2023-05-12 02:01:17.218 [INFO][79] confd/client.go 995: Recompute BGP peerings: HostBGPConfig(node=k8s-master01; name=ip_addr_v4) updated; HostBGPConfig(node=k8s-master01; name=ip_addr_v6) updated; HostBGPConfig(node=k8s-master01; name=network_v4) updated; HostBGPConfig(node=k8s-master01; name=rr_cluster_id) updated; k8s-master01 updated; HostBGPConfig(node=k8s-node01; name=ip_addr_v4) updated; HostBGPConfig(node=k8s-node01; name=ip_addr_v6) updated; HostBGPConfig(node=k8s-node01; name=network_v4) updated; HostBGPConfig(node=k8s-node01; name=rr_cluster_id) updated; k8s-node01 updated; HostBGPConfig(node=k8s-node02; name=ip_addr_v4) updated; HostBGPConfig(node=k8s-node02; name=ip_addr_v6) updated; HostBGPConfig(node=k8s-node02; name=network_v4) updated; HostBGPConfig(node=k8s-node02; name=rr_cluster_id) updated; k8s-node02 updated
2023-05-12 02:01:17.218 [INFO][79] confd/client.go 422: Source SourceSyncer readiness changed, ready=true
2023-05-12 02:01:17.218 [INFO][79] confd/client.go 442: Data is now syncd, can start rendering templates
2023-05-12 02:01:17.250 [ERROR][79] confd/resource.go 306: Error from checkcmd "bird -p -c /etc/calico/confd/config/.bird.cfg4157422690": "bird: /etc/calico/confd/config/.bird.cfg4157422690:6:1 Unable to open included file /etc/calico/confd/config/bird_aggr.cfg: No such file or directory\n"
2023-05-12 02:01:17.250 [INFO][79] confd/resource.go 240: Check failed, but file does not yet exist - create anyway
2023-05-12 02:01:17.252 [INFO][79] confd/resource.go 278: Target config /etc/calico/confd/config/bird_ipam.cfg has been updated
2023-05-12 02:01:17.253 [INFO][79] confd/resource.go 278: Target config /etc/calico/confd/config/bird_aggr.cfg has been updated
2023-05-12 02:01:17.258 [INFO][79] confd/resource.go 278: Target config /etc/calico/confd/config/bird6_aggr.cfg has been updated
2023-05-12 02:01:17.262 [INFO][79] confd/resource.go 278: Target config /etc/calico/confd/config/bird6_ipam.cfg has been updated
2023-05-12 02:01:17.263 [INFO][79] confd/resource.go 278: Target config /etc/calico/confd/config/bird.cfg has been updated
2023-05-12 02:01:17.266 [INFO][79] confd/resource.go 278: Target config /etc/calico/confd/config/bird6.cfg has been updated
2023-05-12 02:01:17.328 [INFO][76] felix/daemon.go 373: Successfully loaded configuration. GOMAXPROCS=2 builddate="2022-11-08T00:15:45+0000" config=&config.Config{UseInternalDataplaneDriver:true, DataplaneDriver:"calico-iptables-plugin", DataplaneWatchdogTimeout:90000000000, WireguardEnabled:false, WireguardEnabledV6:false, WireguardListeningPort:51820, WireguardListeningPortV6:51821, WireguardRoutingRulePriority:99, WireguardInterfaceName:"wireguard.cali", WireguardInterfaceNameV6:"wg-v6.cali", WireguardMTU:0, WireguardMTUV6:0, WireguardHostEncryptionEnabled:false, WireguardPersistentKeepAlive:0, BPFEnabled:false, BPFDisableUnprivileged:true, BPFLogLevel:"off", BPFDataIfacePattern:(*regexp.Regexp)(0xc0005ca1e0), BPFConnectTimeLoadBalancingEnabled:true, BPFExternalServiceMode:"tunnel", BPFKubeProxyIptablesCleanupEnabled:true, BPFKubeProxyMinSyncPeriod:1000000000, BPFKubeProxyEndpointSlicesEnabled:true, BPFExtToServiceConnmark:0, BPFPSNATPorts:numorstring.Port{MinPort:0x4e20, MaxPort:0x752f, PortName:""}, BPFMapSizeNATFrontend:65536, BPFMapSizeNATBackend:262144, BPFMapSizeNATAffinity:65536, BPFMapSizeRoute:262144, BPFMapSizeConntrack:512000, BPFMapSizeIPSets:1048576, BPFMapSizeIfState:1000, BPFHostConntrackBypass:true, BPFEnforceRPF:"Strict", BPFPolicyDebugEnabled:true, DebugBPFCgroupV2:"", DebugBPFMapRepinEnabled:false, DatastoreType:"etcdv3", FelixHostname:"k8s-master01", EtcdAddr:"127.0.0.1:2379", EtcdScheme:"http", EtcdKeyFile:"/calico-secrets/etcd-key", EtcdCertFile:"/calico-secrets/etcd-cert", EtcdCaFile:"/calico-secrets/etcd-ca", EtcdEndpoints:[]string{"https://192.168.56.61:2379/"}, TyphaAddr:"", TyphaK8sServiceName:"", TyphaK8sNamespace:"kube-system", TyphaReadTimeout:30000000000, TyphaWriteTimeout:10000000000, TyphaKeyFile:"", TyphaCertFile:"", TyphaCAFile:"", TyphaCN:"", TyphaURISAN:"", Ipv6Support:false, BpfIpv6Support:false, IptablesBackend:"auto", RouteRefreshInterval:90000000000, InterfaceRefreshInterval:90000000000, 
DeviceRouteSourceAddress:net.IP(nil), DeviceRouteSourceAddressIPv6:net.IP(nil), DeviceRouteProtocol:3, RemoveExternalRoutes:true, IptablesRefreshInterval:90000000000, IptablesPostWriteCheckIntervalSecs:1000000000, IptablesLockFilePath:"/run/xtables.lock", IptablesLockTimeoutSecs:0, IptablesLockProbeIntervalMillis:50000000, FeatureDetectOverride:map[string]string(nil), IpsetsRefreshInterval:10000000000, MaxIpsetSize:1048576, XDPRefreshInterval:90000000000, PolicySyncPathPrefix:"", NetlinkTimeoutSecs:10000000000, MetadataAddr:"", MetadataPort:8775, OpenstackRegion:"", InterfacePrefix:"cali", InterfaceExclude:[]*regexp.Regexp{(*regexp.Regexp)(0xc0005ca320)}, ChainInsertMode:"insert", DefaultEndpointToHostAction:"ACCEPT", IptablesFilterAllowAction:"ACCEPT", IptablesMangleAllowAction:"ACCEPT", LogPrefix:"calico-packet", LogFilePath:"", LogSeverityFile:"", LogSeverityScreen:"INFO", LogSeveritySys:"", LogDebugFilenameRegex:(*regexp.Regexp)(nil), VXLANEnabled:(*bool)(nil), VXLANPort:4789, VXLANVNI:4096, VXLANMTU:0, VXLANMTUV6:0, IPv4VXLANTunnelAddr:net.IP(nil), IPv6VXLANTunnelAddr:net.IP(nil), VXLANTunnelMACAddr:"", VXLANTunnelMACAddrV6:"", IpInIpEnabled:(*bool)(nil), IpInIpMtu:0, IpInIpTunnelAddr:net.IP{0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xff, 0xff, 0xac, 0x19, 0xf4, 0xc0}, FloatingIPs:"Disabled", AllowVXLANPacketsFromWorkloads:false, AllowIPIPPacketsFromWorkloads:false, AWSSrcDstCheck:"DoNothing", ServiceLoopPrevention:"Drop", WorkloadSourceSpoofing:"Disabled", ReportingIntervalSecs:0, ReportingTTLSecs:90000000000, EndpointReportingEnabled:false, EndpointReportingDelaySecs:1000000000, IptablesMarkMask:0xffff0000, DisableConntrackInvalidCheck:false, HealthEnabled:true, HealthPort:9099, HealthHost:"localhost", PrometheusMetricsEnabled:false, PrometheusMetricsHost:"", PrometheusMetricsPort:9091, PrometheusGoMetricsEnabled:true, PrometheusProcessMetricsEnabled:true, PrometheusWireGuardMetricsEnabled:true, 
FailsafeInboundHostPorts:[]config.ProtoPort{config.ProtoPort{Net:"", Protocol:"tcp", Port:0x16}, config.ProtoPort{Net:"", Protocol:"udp", Port:0x44}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0xb3}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x94b}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x94c}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x1561}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x192b}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x1a0a}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x1a0b}}, FailsafeOutboundHostPorts:[]config.ProtoPort{config.ProtoPort{Net:"", Protocol:"udp", Port:0x35}, config.ProtoPort{Net:"", Protocol:"udp", Port:0x43}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0xb3}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x94b}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x94c}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x1561}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x192b}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x1a0a}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x1a0b}}, KubeNodePortRanges:[]numorstring.Port{numorstring.Port{MinPort:0x7530, MaxPort:0x7fff, PortName:""}}, NATPortRange:numorstring.Port{MinPort:0x0, MaxPort:0x0, PortName:""}, NATOutgoingAddress:net.IP(nil), UsageReportingEnabled:false, UsageReportingInitialDelaySecs:300000000000, UsageReportingIntervalSecs:86400000000000, ClusterGUID:"ec9173da4f474fd597fefc04d70fe35c", ClusterType:"k8s,bgp", CalicoVersion:"v3.24.5", ExternalNodesCIDRList:[]string(nil), DebugMemoryProfilePath:"", DebugCPUProfilePath:"/tmp/felix-cpu-<timestamp>.pprof", DebugDisableLogDropping:false, DebugSimulateCalcGraphHangAfter:0, DebugSimulateDataplaneHangAfter:0, DebugPanicAfter:0, DebugSimulateDataRace:false, RouteSource:"CalicoIPAM", RouteTableRange:idalloc.IndexRange{Min:0, Max:0}, RouteTableRanges:[]idalloc.IndexRange(nil), RouteSyncDisabled:false, IptablesNATOutgoingInterfaceFilter:"", SidecarAccelerationEnabled:false, XDPEnabled:true, 
GenericXDPEnabled:false, Variant:"Calico", MTUIfacePattern:(*regexp.Regexp)(0xc0003841e0), Encapsulation:config.Encapsulation{IPIPEnabled:true, VXLANEnabled:false, VXLANEnabledV6:false}, internalOverrides:map[string]string{}, sourceToRawConfig:map[config.Source]map[string]string{0x1:map[string]string{"CalicoVersion":"v3.24.5", "ClusterGUID":"ec9173da4f474fd597fefc04d70fe35c", "ClusterType":"k8s,bgp", "FloatingIPs":"Disabled", "LogSeverityScreen":"Info", "ReportingIntervalSecs":"0"}, 0x2:map[string]string{"DefaultEndpointToHostAction":"Return", "FloatingIPs":"Disabled", "IpInIpTunnelAddr":"172.25.244.192"}, 0x3:map[string]string{"LogFilePath":"None", "LogSeverityFile":"None", "LogSeveritySys":"None", "MetadataAddr":"None"}, 0x4:map[string]string{"defaultendpointtohostaction":"ACCEPT", "etcdcafile":"/calico-secrets/etcd-ca", "etcdcertfile":"/calico-secrets/etcd-cert", "etcdendpoints":"https://192.168.56.61:2379", "etcdkeyfile":"/calico-secrets/etcd-key", "felixhostname":"k8s-master01", "healthenabled":"true", "ipinipmtu":"0", "ipv6support":"false", "usagereportingenabled":"false", "vxlanmtu":"0", "wireguardmtu":"0"}}, rawValues:map[string]string{"CalicoVersion":"v3.24.5", "ClusterGUID":"ec9173da4f474fd597fefc04d70fe35c", "ClusterType":"k8s,bgp", "DefaultEndpointToHostAction":"ACCEPT", "EtcdCaFile":"/calico-secrets/etcd-ca", "EtcdCertFile":"/calico-secrets/etcd-cert", "EtcdEndpoints":"https://192.168.56.61:2379", "EtcdKeyFile":"/calico-secrets/etcd-key", "FelixHostname":"k8s-master01", "FloatingIPs":"Disabled", "HealthEnabled":"true", "IpInIpMtu":"0", "IpInIpTunnelAddr":"172.25.244.192", "Ipv6Support":"false", "LogFilePath":"None", "LogSeverityFile":"None", "LogSeverityScreen":"Info", "LogSeveritySys":"None", "MetadataAddr":"None", "ReportingIntervalSecs":"0", "UsageReportingEnabled":"false", "VXLANMTU":"0", "WireguardMTU":"0"}, Err:error(nil), loadClientConfigFromEnvironment:(func() (*apiconfig.CalicoAPIConfig, error))(0x144f0c0), useNodeResourceUpdates:false} 
gitcommit="f1a1611acb98d9187f48bbbe2227301aa69f0499" version="v3.24.5"
2023-05-12 02:01:17.329 [INFO][76] felix/bootstrap.go 209: Wireguard is not enabled - ensure no wireguard config iface="wireguard.cali" ipVersion=0x4 nodeName="k8s-master01"
2023-05-12 02:01:17.334 [INFO][76] felix/bootstrap.go 624: Wireguard public key not set in datastore ipVersion=0x4 nodeName="k8s-master01"
2023-05-12 02:01:17.334 [INFO][76] felix/bootstrap.go 209: Wireguard is not enabled - ensure no wireguard config iface="wg-v6.cali" ipVersion=0x6 nodeName="k8s-master01"
2023-05-12 02:01:17.337 [INFO][76] felix/bootstrap.go 624: Wireguard public key not set in datastore ipVersion=0x6 nodeName="k8s-master01"
2023-05-12 02:01:17.337 [INFO][76] felix/driver.go 72: Using internal (linux) dataplane driver.
2023-05-12 02:01:17.338 [INFO][76] felix/driver.go 81: Kube-proxy in ipvs mode, enabling felix kube-proxy ipvs support.
2023-05-12 02:01:17.338 [INFO][76] felix/driver.go 157: Calculated iptables mark bits acceptMark=0x10000 endpointMark=0xfff00000 endpointMarkNonCali=0x100000 passMark=0x20000 scratch0Mark=0x40000 scratch1Mark=0x80000
2023-05-12 02:01:17.338 [INFO][76] felix/int_dataplane.go 336: Creating internal dataplane driver. config=intdataplane.Config{Hostname:"k8s-master01", IPv6Enabled:false, RuleRendererOverride:rules.RuleRenderer(nil), IPIPMTU:0, VXLANMTU:0, VXLANMTUV6:0, VXLANPort:4789, MaxIPSetSize:1048576, RouteSyncDisabled:false, IptablesBackend:"auto", IPSetsRefreshInterval:10000000000, RouteRefreshInterval:90000000000, DeviceRouteSourceAddress:net.IP(nil), DeviceRouteSourceAddressIPv6:net.IP(nil), DeviceRouteProtocol:3, RemoveExternalRoutes:true, IptablesRefreshInterval:90000000000, IptablesPostWriteCheckInterval:1000000000, IptablesInsertMode:"insert", IptablesLockFilePath:"/run/xtables.lock", IptablesLockTimeout:0, IptablesLockProbeInterval:50000000, XDPRefreshInterval:90000000000, FloatingIPsEnabled:false, Wireguard:wireguard.Config{Enabled:false, EnabledV6:false, ListeningPort:51820, ListeningPortV6:51821, FirewallMark:0, RoutingRulePriority:99, RoutingTableIndex:1, RoutingTableIndexV6:2, InterfaceName:"wireguard.cali", InterfaceNameV6:"wg-v6.cali", MTU:0, MTUV6:0, RouteSource:"CalicoIPAM", EncryptHostTraffic:false, PersistentKeepAlive:0, RouteSyncDisabled:false}, NetlinkTimeout:10000000000, RulesConfig:rules.Config{IPSetConfigV4:(*ipsets.IPVersionConfig)(0xc0000a4280), IPSetConfigV6:(*ipsets.IPVersionConfig)(0xc0000a4370), WorkloadIfacePrefixes:[]string{"cali"}, IptablesMarkAccept:0x10000, IptablesMarkPass:0x20000, IptablesMarkScratch0:0x40000, IptablesMarkScratch1:0x80000, IptablesMarkEndpoint:0xfff00000, IptablesMarkNonCaliEndpoint:0x100000, KubeNodePortRanges:[]numorstring.Port{numorstring.Port{MinPort:0x7530, MaxPort:0x7fff, PortName:""}}, KubeIPVSSupportEnabled:true, OpenStackMetadataIP:net.IP(nil), OpenStackMetadataPort:0x2247, OpenStackSpecialCasesEnabled:false, VXLANEnabled:false, VXLANEnabledV6:false, VXLANPort:4789, VXLANVNI:4096, IPIPEnabled:true, FelixConfigIPIPEnabled:(*bool)(nil), IPIPTunnelAddress:net.IP{0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xff, 
0xff, 0xac, 0x19, 0xf4, 0xc0}, VXLANTunnelAddress:net.IP(nil), VXLANTunnelAddressV6:net.IP(nil), AllowVXLANPacketsFromWorkloads:false, AllowIPIPPacketsFromWorkloads:false, WireguardEnabled:false, WireguardEnabledV6:false, WireguardInterfaceName:"wireguard.cali", WireguardInterfaceNameV6:"wg-v6.cali", WireguardIptablesMark:0x0, WireguardListeningPort:51820, WireguardListeningPortV6:51821, WireguardEncryptHostTraffic:false, RouteSource:"CalicoIPAM", IptablesLogPrefix:"calico-packet", EndpointToHostAction:"ACCEPT", IptablesFilterAllowAction:"ACCEPT", IptablesMangleAllowAction:"ACCEPT", FailsafeInboundHostPorts:[]config.ProtoPort{config.ProtoPort{Net:"", Protocol:"tcp", Port:0x16}, config.ProtoPort{Net:"", Protocol:"udp", Port:0x44}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0xb3}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x94b}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x94c}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x1561}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x192b}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x1a0a}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x1a0b}}, FailsafeOutboundHostPorts:[]config.ProtoPort{config.ProtoPort{Net:"", Protocol:"udp", Port:0x35}, config.ProtoPort{Net:"", Protocol:"udp", Port:0x43}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0xb3}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x94b}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x94c}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x1561}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x192b}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x1a0a}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x1a0b}}, DisableConntrackInvalid:false, NATPortRange:numorstring.Port{MinPort:0x0, MaxPort:0x0, PortName:""}, IptablesNATOutgoingInterfaceFilter:"", NATOutgoingAddress:net.IP(nil), BPFEnabled:false, ServiceLoopPrevention:"Drop"}, IfaceMonitorConfig:ifacemonitor.Config{InterfaceExcludes:[]*regexp.Regexp{(*regexp.Regexp)(0xc0005ca320)}, 
ResyncInterval:90000000000}, StatusReportingInterval:0, ConfigChangedRestartCallback:(func())(0x259e3a0), FatalErrorRestartCallback:(func(error))(0x259e280), PostInSyncCallback:(func())(0x258bf40), HealthAggregator:(*health.HealthAggregator)(0xc0001bed20), WatchdogTimeout:90000000000, RouteTableManager:(*idalloc.IndexAllocator)(0xc0002669e0), DebugSimulateDataplaneHangAfter:0, ExternalNodesCidrs:[]string(nil), BPFEnabled:false, BPFPolicyDebugEnabled:true, BPFDisableUnprivileged:true, BPFKubeProxyIptablesCleanupEnabled:true, BPFLogLevel:"off", BPFExtToServiceConnmark:0, BPFDataIfacePattern:(*regexp.Regexp)(0xc0005ca1e0), XDPEnabled:true, XDPAllowGeneric:false, BPFConntrackTimeouts:conntrack.Timeouts{CreationGracePeriod:10000000000, TCPPreEstablished:20000000000, TCPEstablished:3600000000000, TCPFinsSeen:30000000000, TCPResetSeen:40000000000, UDPLastSeen:60000000000, GenericIPLastSeen:600000000000, ICMPLastSeen:5000000000}, BPFCgroupV2:"", BPFConnTimeLBEnabled:true, BPFMapRepin:false, BPFNodePortDSREnabled:false, BPFPSNATPorts:numorstring.Port{MinPort:0x4e20, MaxPort:0x752f, PortName:""}, BPFMapSizeRoute:262144, BPFMapSizeConntrack:512000, BPFMapSizeNATFrontend:65536, BPFMapSizeNATBackend:262144, BPFMapSizeNATAffinity:65536, BPFMapSizeIPSets:1048576, BPFMapSizeIfState:1000, BPFIpv6Enabled:false, BPFHostConntrackBypass:true, BPFEnforceRPF:"Strict", KubeProxyMinSyncPeriod:1000000000, SidecarAccelerationEnabled:false, LookPathOverride:(func(string) (string, error))(nil), KubeClientSet:(*kubernetes.Clientset)(0xc0000ef680), FeatureDetectOverrides:map[string]string(nil), hostMTU:0, MTUIfacePattern:(*regexp.Regexp)(0xc0003841e0), RouteSource:"CalicoIPAM", KubernetesProvider:0x0}
2023-05-12 02:01:17.339 [INFO][76] felix/rule_defs.go 373: Creating rule renderer. config=rules.Config{IPSetConfigV4:(*ipsets.IPVersionConfig)(0xc0000a4280), IPSetConfigV6:(*ipsets.IPVersionConfig)(0xc0000a4370), WorkloadIfacePrefixes:[]string{"cali"}, IptablesMarkAccept:0x10000, IptablesMarkPass:0x20000, IptablesMarkScratch0:0x40000, IptablesMarkScratch1:0x80000, IptablesMarkEndpoint:0xfff00000, IptablesMarkNonCaliEndpoint:0x100000, KubeNodePortRanges:[]numorstring.Port{numorstring.Port{MinPort:0x7530, MaxPort:0x7fff, PortName:""}}, KubeIPVSSupportEnabled:true, OpenStackMetadataIP:net.IP(nil), OpenStackMetadataPort:0x2247, OpenStackSpecialCasesEnabled:false, VXLANEnabled:false, VXLANEnabledV6:false, VXLANPort:4789, VXLANVNI:4096, IPIPEnabled:true, FelixConfigIPIPEnabled:(*bool)(nil), IPIPTunnelAddress:net.IP{0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xff, 0xff, 0xac, 0x19, 0xf4, 0xc0}, VXLANTunnelAddress:net.IP(nil), VXLANTunnelAddressV6:net.IP(nil), AllowVXLANPacketsFromWorkloads:false, AllowIPIPPacketsFromWorkloads:false, WireguardEnabled:false, WireguardEnabledV6:false, WireguardInterfaceName:"wireguard.cali", WireguardInterfaceNameV6:"wg-v6.cali", WireguardIptablesMark:0x0, WireguardListeningPort:51820, WireguardListeningPortV6:51821, WireguardEncryptHostTraffic:false, RouteSource:"CalicoIPAM", IptablesLogPrefix:"calico-packet", EndpointToHostAction:"ACCEPT", IptablesFilterAllowAction:"ACCEPT", IptablesMangleAllowAction:"ACCEPT", FailsafeInboundHostPorts:[]config.ProtoPort{config.ProtoPort{Net:"", Protocol:"tcp", Port:0x16}, config.ProtoPort{Net:"", Protocol:"udp", Port:0x44}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0xb3}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x94b}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x94c}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x1561}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x192b}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x1a0a}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x1a0b}}, 
FailsafeOutboundHostPorts:[]config.ProtoPort{config.ProtoPort{Net:"", Protocol:"udp", Port:0x35}, config.ProtoPort{Net:"", Protocol:"udp", Port:0x43}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0xb3}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x94b}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x94c}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x1561}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x192b}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x1a0a}, config.ProtoPort{Net:"", Protocol:"tcp", Port:0x1a0b}}, DisableConntrackInvalid:false, NATPortRange:numorstring.Port{MinPort:0x0, MaxPort:0x0, PortName:""}, IptablesNATOutgoingInterfaceFilter:"", NATOutgoingAddress:net.IP(nil), BPFEnabled:false, ServiceLoopPrevention:"Drop"}
2023-05-12 02:01:17.339 [INFO][76] felix/rule_defs.go 383: Workload to host packets will be accepted.
2023-05-12 02:01:17.339 [INFO][76] felix/rule_defs.go 397: filter table allowed packets will be accepted immediately.
2023-05-12 02:01:17.339 [INFO][76] felix/rule_defs.go 405: mangle table allowed packets will be accepted immediately.
2023-05-12 02:01:17.339 [INFO][76] felix/rule_defs.go 413: Packets to unknown service IPs will be dropped
2023-05-12 02:01:17.341 [INFO][76] felix/int_dataplane.go 1020: Determined pod MTU mtu=1480
2023-05-12 02:01:17.342 [INFO][76] felix/iface_monitor.go 84: configured to periodically rescan interfaces. interval=1m30s
2023-05-12 02:01:17.342 [INFO][76] felix/feature_detect.go 354: Looked up iptables command backendMode="legacy" candidates=[]string{"ip6tables-legacy-save", "ip6tables-save"} command="ip6tables-legacy-save" ipVersion=0x6 saveOrRestore="save"
2023-05-12 02:01:17.342 [INFO][76] felix/feature_detect.go 354: Looked up iptables command backendMode="legacy" candidates=[]string{"iptables-legacy-save", "iptables-save"} command="iptables-legacy-save" ipVersion=0x4 saveOrRestore="save"
2023-05-12 02:01:17.348 [INFO][76] felix/feature_detect.go 354: Looked up iptables command backendMode="nft" candidates=[]string{"ip6tables-nft-save", "ip6tables-save"} command="ip6tables-nft-save" ipVersion=0x6 saveOrRestore="save"
2023-05-12 02:01:17.348 [INFO][76] felix/feature_detect.go 354: Looked up iptables command backendMode="nft" candidates=[]string{"iptables-nft-save", "iptables-save"} command="iptables-nft-save" ipVersion=0x4 saveOrRestore="save"
2023-05-12 02:01:17.372 [INFO][76] felix/feature_detect.go 163: Updating detected iptables features features=environment.Features{SNATFullyRandom:true, MASQFullyRandom:true, RestoreSupportsLock:true, ChecksumOffloadBroken:false, IPIPDeviceIsL3:true} iptablesVersion=1.8.4 kernelVersion=5.14.0-162
2023-05-12 02:01:17.372 [INFO][76] felix/table.go 336: Calculated old-insert detection regex. pattern="(?:-j|--jump) cali-|(?:-j|--jump) califw-|(?:-j|--jump) calitw-|(?:-j|--jump) califh-|(?:-j|--jump) calith-|(?:-j|--jump) calipi-|(?:-j|--jump) calipo-|(?:-j|--jump) felix-"
2023-05-12 02:01:17.372 [INFO][76] felix/table.go 449: Enabling iptables-in-nftables-mode workarounds.
2023-05-12 02:01:17.373 [INFO][76] felix/feature_detect.go 354: Looked up iptables command backendMode="nft" candidates=[]string{"iptables-nft-restore", "iptables-restore"} command="iptables-nft-restore" ipVersion=0x4 saveOrRestore="restore"
2023-05-12 02:01:17.374 [INFO][76] felix/feature_detect.go 354: Looked up iptables command backendMode="nft" candidates=[]string{"iptables-nft-save", "iptables-save"} command="iptables-nft-save" ipVersion=0x4 saveOrRestore="save"
2023-05-12 02:01:17.376 [INFO][76] felix/table.go 336: Calculated old-insert detection regex. pattern="(?:-j|--jump) cali-|(?:-j|--jump) califw-|(?:-j|--jump) calitw-|(?:-j|--jump) califh-|(?:-j|--jump) calith-|(?:-j|--jump) calipi-|(?:-j|--jump) calipo-|(?:-j|--jump) felix-|-A POSTROUTING .* felix-masq-ipam-pools .*|-A POSTROUTING -o tunl0 -m addrtype ! --src-type LOCAL --limit-iface-out -m addrtype --src-type LOCAL -j MASQUERADE"
2023-05-12 02:01:17.377 [INFO][76] felix/table.go 449: Enabling iptables-in-nftables-mode workarounds.
2023-05-12 02:01:17.377 [INFO][76] felix/feature_detect.go 354: Looked up iptables command backendMode="nft" candidates=[]string{"iptables-nft-restore", "iptables-restore"} command="iptables-nft-restore" ipVersion=0x4 saveOrRestore="restore"
2023-05-12 02:01:17.377 [INFO][76] felix/feature_detect.go 354: Looked up iptables command backendMode="nft" candidates=[]string{"iptables-nft-save", "iptables-save"} command="iptables-nft-save" ipVersion=0x4 saveOrRestore="save"
2023-05-12 02:01:17.377 [INFO][76] felix/table.go 336: Calculated old-insert detection regex. pattern="(?:-j|--jump) cali-|(?:-j|--jump) califw-|(?:-j|--jump) calitw-|(?:-j|--jump) califh-|(?:-j|--jump) calith-|(?:-j|--jump) calipi-|(?:-j|--jump) calipo-|(?:-j|--jump) felix-"
2023-05-12 02:01:17.378 [INFO][76] felix/table.go 449: Enabling iptables-in-nftables-mode workarounds.
2023-05-12 02:01:17.378 [INFO][76] felix/feature_detect.go 354: Looked up iptables command backendMode="nft" candidates=[]string{"iptables-nft-restore", "iptables-restore"} command="iptables-nft-restore" ipVersion=0x4 saveOrRestore="restore"
2023-05-12 02:01:17.378 [INFO][76] felix/feature_detect.go 354: Looked up iptables command backendMode="nft" candidates=[]string{"iptables-nft-save", "iptables-save"} command="iptables-nft-save" ipVersion=0x4 saveOrRestore="save"
2023-05-12 02:01:17.378 [INFO][76] felix/table.go 336: Calculated old-insert detection regex. pattern="(?:-j|--jump) cali-|(?:-j|--jump) califw-|(?:-j|--jump) calitw-|(?:-j|--jump) califh-|(?:-j|--jump) calith-|(?:-j|--jump) calipi-|(?:-j|--jump) calipo-|(?:-j|--jump) felix-"
2023-05-12 02:01:17.379 [INFO][76] felix/table.go 449: Enabling iptables-in-nftables-mode workarounds.
2023-05-12 02:01:17.379 [INFO][76] felix/feature_detect.go 354: Looked up iptables command backendMode="nft" candidates=[]string{"iptables-nft-restore", "iptables-restore"} command="iptables-nft-restore" ipVersion=0x4 saveOrRestore="restore"
2023-05-12 02:01:17.379 [INFO][76] felix/feature_detect.go 354: Looked up iptables command backendMode="nft" candidates=[]string{"iptables-nft-save", "iptables-save"} command="iptables-nft-save" ipVersion=0x4 saveOrRestore="save"
2023-05-12 02:01:17.380 [INFO][76] felix/int_dataplane.go 514: XDP acceleration enabled.
2023-05-12 02:01:17.387 [INFO][76] felix/connecttime.go 54: Running bpftool to look up programs attached to cgroup args=[]string{"bpftool", "-j", "-p", "cgroup", "show", "/run/calico/cgroup"}
2023-05-12 02:01:17.398 [INFO][76] felix/route_table.go 317: Calculated interface name regexp ifaceRegex="^cali.*" ipVersion=0x4 tableIndex=0
2023-05-12 02:01:17.398 [INFO][76] felix/ipsets.go 130: Queueing IP set for creation family="inet" setID="all-ipam-pools" setType="hash:net"
2023-05-12 02:01:17.398 [INFO][76] felix/ipsets.go 130: Queueing IP set for creation family="inet" setID="masq-ipam-pools" setType="hash:net"
2023-05-12 02:01:17.398 [INFO][76] felix/route_table.go 317: Calculated interface name regexp ifaceRegex="^wireguard.cali$" ipVersion=0x4 tableIndex=1
2023-05-12 02:01:17.399 [INFO][76] felix/int_dataplane.go 918: Registering to report health.
2023-05-12 02:01:17.401 [INFO][76] felix/int_dataplane.go 1856: attempted to modprobe nf_conntrack_proto_sctp error=exit status 1 output=""
2023-05-12 02:01:17.402 [INFO][76] felix/int_dataplane.go 1858: Making sure IPv4 forwarding is enabled.
2023-05-12 02:01:17.402 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-failsafe-in" ipVersion=0x4 table="raw"
2023-05-12 02:01:17.402 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-failsafe-out" ipVersion=0x4 table="raw"
2023-05-12 02:01:17.402 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-PREROUTING" ipVersion=0x4 table="raw"
2023-05-12 02:01:17.402 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-rpf-skip"
2023-05-12 02:01:17.402 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-from-host-endpoint"
2023-05-12 02:01:17.403 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-wireguard-incoming-mark" ipVersion=0x4 table="raw"
2023-05-12 02:01:17.403 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-OUTPUT" ipVersion=0x4 table="raw"
2023-05-12 02:01:17.403 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-to-host-endpoint"
2023-05-12 02:01:17.403 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-PREROUTING"
2023-05-12 02:01:17.403 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-OUTPUT"
2023-05-12 02:01:17.403 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-FORWARD" ipVersion=0x4 table="filter"
2023-05-12 02:01:17.403 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-from-hep-forward"
2023-05-12 02:01:17.403 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-from-wl-dispatch"
2023-05-12 02:01:17.403 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-to-wl-dispatch"
2023-05-12 02:01:17.404 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-to-hep-forward"
2023-05-12 02:01:17.404 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-cidr-block"
2023-05-12 02:01:17.404 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-INPUT" ipVersion=0x4 table="filter"
2023-05-12 02:01:17.405 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-forward-check"
2023-05-12 02:01:17.405 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-wl-to-host"
2023-05-12 02:01:17.405 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-from-host-endpoint"
2023-05-12 02:01:17.405 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-wl-to-host" ipVersion=0x4 table="filter"
2023-05-12 02:01:17.405 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-failsafe-in" ipVersion=0x4 table="filter"
2023-05-12 02:01:17.405 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-forward-check" ipVersion=0x4 table="filter"
2023-05-12 02:01:17.405 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-set-endpoint-mark"
2023-05-12 02:01:17.405 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-OUTPUT" ipVersion=0x4 table="filter"
2023-05-12 02:01:17.406 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-forward-endpoint-mark"
2023-05-12 02:01:17.406 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-to-host-endpoint"
2023-05-12 02:01:17.406 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-failsafe-out" ipVersion=0x4 table="filter"
2023-05-12 02:01:17.406 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-forward-endpoint-mark" ipVersion=0x4 table="filter"
2023-05-12 02:01:17.406 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-from-endpoint-mark"
2023-05-12 02:01:17.406 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-FORWARD"
2023-05-12 02:01:17.406 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-INPUT"
2023-05-12 02:01:17.406 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-OUTPUT"
2023-05-12 02:01:17.406 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-PREROUTING" ipVersion=0x4 table="nat"
2023-05-12 02:01:17.406 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-fip-dnat"
2023-05-12 02:01:17.406 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-POSTROUTING" ipVersion=0x4 table="nat"
2023-05-12 02:01:17.407 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-fip-snat"
2023-05-12 02:01:17.407 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-nat-outgoing"
2023-05-12 02:01:17.407 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-OUTPUT" ipVersion=0x4 table="nat"
2023-05-12 02:01:17.408 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-PREROUTING"
2023-05-12 02:01:17.408 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-POSTROUTING"
2023-05-12 02:01:17.408 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-OUTPUT"
2023-05-12 02:01:17.408 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-failsafe-in" ipVersion=0x4 table="mangle"
2023-05-12 02:01:17.408 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-failsafe-out" ipVersion=0x4 table="mangle"
2023-05-12 02:01:17.408 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-PREROUTING" ipVersion=0x4 table="mangle"
2023-05-12 02:01:17.408 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-from-host-endpoint"
2023-05-12 02:01:17.408 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-POSTROUTING" ipVersion=0x4 table="mangle"
2023-05-12 02:01:17.408 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-to-host-endpoint"
2023-05-12 02:01:17.408 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-PREROUTING"
2023-05-12 02:01:17.408 [INFO][76] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-POSTROUTING"
2023-05-12 02:01:17.422 [INFO][76] felix/int_dataplane.go 1605: Set XDP failsafe ports: [{Net: Protocol:tcp Port:22} {Net: Protocol:udp Port:68} {Net: Protocol:tcp Port:179} {Net: Protocol:tcp Port:2379} {Net: Protocol:tcp Port:2380} {Net: Protocol:tcp Port:5473} {Net: Protocol:tcp Port:6443} {Net: Protocol:tcp Port:6666} {Net: Protocol:tcp Port:6667}]
2023-05-12 02:01:17.422 [INFO][76] felix/int_dataplane.go 1327: IPIP enabled, starting thread to keep tunnel configuration in sync.
2023-05-12 02:01:17.423 [INFO][76] felix/daemon.go 422: Connect to the dataplane driver.
2023-05-12 02:01:17.423 [INFO][76] felix/daemon.go 501: using resource updates where applicable
2023-05-12 02:01:17.423 [INFO][76] felix/int_dataplane.go 1634: Started internal iptables dataplane driver loop
2023-05-12 02:01:17.424 [INFO][76] felix/int_dataplane.go 1644: Will refresh IP sets on timer interval=1m30s
2023-05-12 02:01:17.424 [INFO][76] felix/int_dataplane.go 1654: Will refresh routes on timer interval=1m30s
2023-05-12 02:01:17.424 [INFO][76] felix/int_dataplane.go 1664: Will refresh XDP on timer interval=1m30s
2023-05-12 02:01:17.424 [INFO][76] felix/ipip_mgr.go 84: IPIP thread started.
2023-05-12 02:01:17.425 [INFO][76] felix/int_dataplane.go 2110: Started internal status report thread
2023-05-12 02:01:17.425 [INFO][76] felix/int_dataplane.go 2112: Process status reports disabled
2023-05-12 02:01:17.425 [INFO][76] felix/iface_monitor.go 109: Interface monitoring thread started.
2023-05-12 02:01:17.425 [INFO][76] felix/iface_monitor.go 127: Subscribed to netlink updates.
2023-05-12 02:01:17.425 [INFO][76] felix/int_dataplane.go 1241: Linux interface state changed. ifIndex=1 ifaceName="lo" state="up"
2023-05-12 02:01:17.426 [INFO][76] felix/int_dataplane.go 1277: Linux interface addrs changed. addrs=set.Set{127.0.0.0,127.0.0.1,::1} ifaceName="lo"
2023-05-12 02:01:17.426 [INFO][76] felix/int_dataplane.go 1241: Linux interface state changed. ifIndex=2 ifaceName="tunl0" state="up"
2023-05-12 02:01:17.426 [INFO][76] felix/int_dataplane.go 1277: Linux interface addrs changed. addrs=set.Set{172.25.244.192} ifaceName="tunl0"
2023-05-12 02:01:17.426 [INFO][76] felix/int_dataplane.go 1241: Linux interface state changed. ifIndex=3 ifaceName="enp0s8" state="up"
2023-05-12 02:01:17.426 [INFO][76] felix/int_dataplane.go 1277: Linux interface addrs changed. addrs=set.Set{10.0.3.15,fe80::a00:27ff:fefe:c60e} ifaceName="enp0s8"
2023-05-12 02:01:17.426 [INFO][76] felix/int_dataplane.go 1241: Linux interface state changed. ifIndex=4 ifaceName="enp0s17" state="up"
2023-05-12 02:01:17.427 [INFO][76] felix/int_dataplane.go 1277: Linux interface addrs changed. addrs=set.Set{192.168.56.160,192.168.56.61} ifaceName="enp0s17"
2023-05-12 02:01:17.427 [INFO][76] felix/int_dataplane.go 1241: Linux interface state changed. ifIndex=5 ifaceName="kube-ipvs0" state="down"
2023-05-12 02:01:17.428 [INFO][76] felix/int_dataplane.go 1695: Received interface update msg=&intdataplane.ifaceUpdate{Name:"lo", State:"up", Index:1}
2023-05-12 02:01:17.428 [INFO][76] felix/int_dataplane.go 1695: Received interface update msg=&intdataplane.ifaceUpdate{Name:"tunl0", State:"up", Index:2}
2023-05-12 02:01:17.428 [INFO][76] felix/int_dataplane.go 1695: Received interface update msg=&intdataplane.ifaceUpdate{Name:"enp0s8", State:"up", Index:3}
2023-05-12 02:01:17.428 [INFO][76] felix/int_dataplane.go 1695: Received interface update msg=&intdataplane.ifaceUpdate{Name:"enp0s17", State:"up", Index:4}
2023-05-12 02:01:17.428 [INFO][76] felix/int_dataplane.go 1695: Received interface update msg=&intdataplane.ifaceUpdate{Name:"kube-ipvs0", State:"down", Index:5}
2023-05-12 02:01:17.428 [INFO][76] felix/int_dataplane.go 1713: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"lo", Addrs:set.Typed[string]{"127.0.0.0":set.v{}, "127.0.0.1":set.v{}, "::1":set.v{}}}
2023-05-12 02:01:17.423 [INFO][76] felix/daemon.go 504: Created Syncer syncer=&watchersyncer.watcherSyncer{status:0x0, watcherCaches:[]*watchersyncer.watcherCache{(*watchersyncer.watcherCache)(0xc00051cb40), (*watchersyncer.watcherCache)(0xc00051cbd0), (*watchersyncer.watcherCache)(0xc00051cc60), (*watchersyncer.watcherCache)(0xc00051ccf0), (*watchersyncer.watcherCache)(0xc00051cd80), (*watchersyncer.watcherCache)(0xc00051ce10), (*watchersyncer.watcherCache)(0xc00051cea0), (*watchersyncer.watcherCache)(0xc00051cf30), (*watchersyncer.watcherCache)(0xc00051cfc0), (*watchersyncer.watcherCache)(0xc00051d050), (*watchersyncer.watcherCache)(0xc00051d0e0), (*watchersyncer.watcherCache)(0xc00051d170), (*watchersyncer.watcherCache)(0xc00051d200)}, results:(chan interface {})(0xc0003b6d80), numSynced:0, callbacks:(*calc.SyncerCallbacksDecoupler)(0xc0003ae670), wgwc:(*sync.WaitGroup)(nil), wgws:(*sync.WaitGroup)(nil), cancel:(context.CancelFunc)(nil)}
2023-05-12 02:01:17.434 [INFO][76] felix/daemon.go 508: Starting the datastore Syncer
2023-05-12 02:01:17.434 [INFO][76] felix/watchersyncer.go 89: Start called
2023-05-12 02:01:17.435 [INFO][76] felix/calc_graph.go 118: Creating calculation graph, filtered to hostname k8s-master01
2023-05-12 02:01:17.435 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.WorkloadEndpointKey: (dispatcher.UpdateHandler)(0x198b760)
2023-05-12 02:01:17.435 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.HostEndpointKey: (dispatcher.UpdateHandler)(0x198b760)
2023-05-12 02:01:17.435 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.WorkloadEndpointKey: (dispatcher.UpdateHandler)(0x198b900)
2023-05-12 02:01:17.435 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.HostEndpointKey: (dispatcher.UpdateHandler)(0x198b900)
2023-05-12 02:01:17.435 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.WorkloadEndpointKey: (dispatcher.UpdateHandler)(0x198af20)
2023-05-12 02:01:17.435 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.HostEndpointKey: (dispatcher.UpdateHandler)(0x198af20)
2023-05-12 02:01:17.435 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.PolicyKey: (dispatcher.UpdateHandler)(0x198af20)
2023-05-12 02:01:17.435 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.ProfileRulesKey: (dispatcher.UpdateHandler)(0x198af20)
2023-05-12 02:01:17.435 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.ResourceKey: (dispatcher.UpdateHandler)(0x198af20)
2023-05-12 02:01:17.435 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.ResourceKey: (dispatcher.UpdateHandler)(0x19436a0)
2023-05-12 02:01:17.435 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.ResourceKey: (dispatcher.UpdateHandler)(0x17911a0)
2023-05-12 02:01:17.435 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.WorkloadEndpointKey: (dispatcher.UpdateHandler)(0x17911a0)
2023-05-12 02:01:17.435 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.HostEndpointKey: (dispatcher.UpdateHandler)(0x17911a0)
2023-05-12 02:01:17.435 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.NetworkSetKey: (dispatcher.UpdateHandler)(0x17911a0)
2023-05-12 02:01:17.436 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.PolicyKey: (dispatcher.UpdateHandler)(0x198c3e0)
2023-05-12 02:01:17.436 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.WorkloadEndpointKey: (dispatcher.UpdateHandler)(0x198c3e0)
2023-05-12 02:01:17.436 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.HostEndpointKey: (dispatcher.UpdateHandler)(0x198c3e0)
2023-05-12 02:01:17.436 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.HostIPKey: (dispatcher.UpdateHandler)(0x198bbe0)
2023-05-12 02:01:17.436 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.IPPoolKey: (dispatcher.UpdateHandler)(0x198bbe0)
2023-05-12 02:01:17.436 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.WireguardKey: (dispatcher.UpdateHandler)(0x198bbe0)
2023-05-12 02:01:17.436 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.ResourceKey: (dispatcher.UpdateHandler)(0x198bbe0)
2023-05-12 02:01:17.436 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.GlobalConfigKey: (dispatcher.UpdateHandler)(0x198ba40)
2023-05-12 02:01:17.436 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.HostConfigKey: (dispatcher.UpdateHandler)(0x198ba40)
2023-05-12 02:01:17.436 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.ReadyFlagKey: (dispatcher.UpdateHandler)(0x198ba40)
2023-05-12 02:01:17.436 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.ResourceKey: (dispatcher.UpdateHandler)(0x198b480)
2023-05-12 02:01:17.436 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.IPPoolKey: (dispatcher.UpdateHandler)(0x198b5c0)
2023-05-12 02:01:17.436 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.HostIPKey: (dispatcher.UpdateHandler)(0x198c880)
2023-05-12 02:01:17.436 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.WorkloadEndpointKey: (dispatcher.UpdateHandler)(0x198c880)
2023-05-12 02:01:17.437 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.HostEndpointKey: (dispatcher.UpdateHandler)(0x198c880)
2023-05-12 02:01:17.437 [INFO][76] felix/dispatcher.go 68: Registering listener for type model.HostConfigKey: (dispatcher.UpdateHandler)(0x198c880)
2023-05-12 02:01:17.437 [INFO][76] felix/async_calc_graph.go 255: Starting AsyncCalcGraph
2023-05-12 02:01:17.437 [INFO][76] felix/daemon.go 619: Started the processing graph
2023-05-12 02:01:17.429 [INFO][76] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"lo", Addrs:set.Typed[string]{"127.0.0.0":set.v{}, "127.0.0.1":set.v{}, "::1":set.v{}}}
2023-05-12 02:01:17.437 [INFO][76] felix/ipsets.go 130: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2023-05-12 02:01:17.437 [INFO][76] felix/int_dataplane.go 1713: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"tunl0", Addrs:set.Typed[string]{"172.25.244.192":set.v{}}}
2023-05-12 02:01:17.437 [INFO][76] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"tunl0", Addrs:set.Typed[string]{"172.25.244.192":set.v{}}}
2023-05-12 02:01:17.437 [INFO][76] felix/ipsets.go 130: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2023-05-12 02:01:17.437 [INFO][76] felix/int_dataplane.go 1713: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"enp0s8", Addrs:set.Typed[string]{"10.0.3.15":set.v{}, "fe80::a00:27ff:fefe:c60e":set.v{}}}
2023-05-12 02:01:17.438 [INFO][76] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"enp0s8", Addrs:set.Typed[string]{"10.0.3.15":set.v{}, "fe80::a00:27ff:fefe:c60e":set.v{}}}
2023-05-12 02:01:17.438 [INFO][76] felix/ipsets.go 130: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2023-05-12 02:01:17.438 [INFO][76] felix/int_dataplane.go 1713: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"enp0s17", Addrs:set.Typed[string]{"192.168.56.160":set.v{}, "192.168.56.61":set.v{}}}
2023-05-12 02:01:17.438 [INFO][76] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"enp0s17", Addrs:set.Typed[string]{"192.168.56.160":set.v{}, "192.168.56.61":set.v{}}}
2023-05-12 02:01:17.438 [INFO][76] felix/ipsets.go 130: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2023-05-12 02:01:17.438 [INFO][76] felix/watchersyncer.go 130: Sending status update Status=wait-for-ready
2023-05-12 02:01:17.438 [INFO][76] felix/watchersyncer.go 149: Starting main event processing loop
2023-05-12 02:01:17.438 [INFO][76] felix/watchercache.go 181: Full resync is required ListRoot="/calico/ipam/v2/assignment/"
2023-05-12 02:01:17.439 [INFO][76] felix/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/ippools"
2023-05-12 02:01:17.439 [INFO][76] felix/async_calc_graph.go 137: AsyncCalcGraph running
2023-05-12 02:01:17.439 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ConfigUpdate update from calculation graph msg=config:<key:"CalicoVersion" value:"v3.24.5" > config:<key:"ClusterGUID" value:"ec9173da4f474fd597fefc04d70fe35c" > config:<key:"ClusterType" value:"k8s,bgp" > config:<key:"DefaultEndpointToHostAction" value:"ACCEPT" > config:<key:"EtcdCaFile" value:"/calico-secrets/etcd-ca" > config:<key:"EtcdCertFile" value:"/calico-secrets/etcd-cert" > config:<key:"EtcdEndpoints" value:"https://192.168.56.61:2379" > config:<key:"EtcdKeyFile" value:"/calico-secrets/etcd-key" > config:<key:"FelixHostname" value:"k8s-master01" > config:<key:"FloatingIPs" value:"Disabled" > config:<key:"HealthEnabled" value:"true" > config:<key:"IpInIpMtu" value:"0" > config:<key:"IpInIpTunnelAddr" value:"172.25.244.192" > config:<key:"Ipv6Support" value:"false" > config:<key:"LogFilePath" value:"None" > config:<key:"LogSeverityFile" value:"None" > config:<key:"LogSeverityScreen" value:"Info" > config:<key:"LogSeveritySys" value:"None" > config:<key:"MetadataAddr" value:"None" > config:<key:"ReportingIntervalSecs" value:"0" > config:<key:"UsageReportingEnabled" value:"false" > config:<key:"VXLANMTU" value:"0" > config:<key:"WireguardMTU" value:"0" >
2023-05-12 02:01:17.439 [INFO][76] felix/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/nodes"
2023-05-12 02:01:17.439 [INFO][76] felix/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/profiles"
2023-05-12 02:01:17.439 [INFO][76] felix/daemon.go 979: Reading from dataplane driver pipe...
2023-05-12 02:01:17.439 [INFO][76] felix/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/clusterinformations"
2023-05-12 02:01:17.439 [INFO][76] felix/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/workloadendpoints"
2023-05-12 02:01:17.440 [INFO][76] felix/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/felixconfigurations"
2023-05-12 02:01:17.440 [INFO][76] felix/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/networkpolicies"
2023-05-12 02:01:17.440 [INFO][76] felix/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/globalnetworkpolicies"
2023-05-12 02:01:17.440 [INFO][76] felix/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/networksets"
2023-05-12 02:01:17.440 [INFO][76] felix/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/globalnetworksets"
2023-05-12 02:01:17.440 [INFO][76] felix/daemon.go 689: No driver process to monitor
2023-05-12 02:01:17.440 [INFO][76] felix/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/hostendpoints"
2023-05-12 02:01:17.440 [INFO][76] felix/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/bgpconfigurations"
2023-05-12 02:01:17.451 [INFO][76] felix/watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/bgpconfigurations"
2023-05-12 02:01:17.452 [INFO][76] felix/watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/nodes"
2023-05-12 02:01:17.453 [INFO][76] felix/watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/ippools"
2023-05-12 02:01:17.454 [INFO][76] felix/watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/hostendpoints"
2023-05-12 02:01:17.454 [INFO][76] felix/watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/workloadendpoints"
2023-05-12 02:01:17.454 [INFO][76] felix/watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/clusterinformations"
2023-05-12 02:01:17.455 [INFO][76] felix/watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/networksets"
2023-05-12 02:01:17.463 [INFO][76] felix/watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/globalnetworksets"
2023-05-12 02:01:17.463 [INFO][76] felix/watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/globalnetworkpolicies"
2023-05-12 02:01:17.464 [INFO][76] felix/watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/networkpolicies"
2023-05-12 02:01:17.466 [INFO][76] felix/watchercache.go 294: Sending synced update ListRoot="/calico/ipam/v2/assignment/"
2023-05-12 02:01:17.466 [INFO][76] felix/watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:17.466 [INFO][76] felix/watchersyncer.go 130: Sending status update Status=resync
2023-05-12 02:01:17.466 [INFO][76] felix/watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:17.467 [INFO][76] felix/watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:17.467 [INFO][76] felix/watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:17.467 [INFO][76] felix/watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:17.467 [INFO][76] felix/watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:17.467 [INFO][76] felix/watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:17.467 [INFO][76] felix/watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:17.467 [INFO][76] felix/watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:17.467 [INFO][76] felix/watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:17.467 [INFO][76] felix/watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:17.470 [INFO][76] felix/watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/felixconfigurations"
2023-05-12 02:01:17.471 [INFO][76] felix/config_batcher.go 61: Host config update for this host: {{HostConfig(node=k8s-master01,name=IpInIpTunnelAddr) 172.25.244.192 483271 <nil> 0s} 1}
2023-05-12 02:01:17.471 [INFO][76] felix/config_batcher.go 74: Global config update: {{GlobalFelixConfig(name=ClusterGUID) ec9173da4f474fd597fefc04d70fe35c 189145 <nil> 0s} 1}
2023-05-12 02:01:17.471 [INFO][76] felix/config_batcher.go 74: Global config update: {{GlobalFelixConfig(name=ClusterType) k8s,bgp 189145 <nil> 0s} 1}
2023-05-12 02:01:17.471 [INFO][76] felix/config_batcher.go 74: Global config update: {{GlobalFelixConfig(name=CalicoVersion) v3.24.5 189145 <nil> 0s} 1}
2023-05-12 02:01:17.471 [INFO][76] felix/watchercache.go 294: Sending synced update ListRoot="/calico/resources/v3/projectcalico.org/profiles"
2023-05-12 02:01:17.471 [INFO][76] felix/config_batcher.go 74: Global config update: {{GlobalFelixConfig(name=LogSeverityScreen) Info 30902 <nil> 0s} 1}
2023-05-12 02:01:17.471 [INFO][76] felix/watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:17.471 [INFO][76] felix/config_batcher.go 74: Global config update: {{GlobalFelixConfig(name=ReportingIntervalSecs) 0 30902 <nil> 0s} 1}
2023-05-12 02:01:17.471 [INFO][76] felix/watchersyncer.go 209: Received InSync event from one of the watcher caches
2023-05-12 02:01:17.471 [INFO][76] felix/config_batcher.go 74: Global config update: {{GlobalFelixConfig(name=FloatingIPs) Disabled 30902 <nil> 0s} 1}
2023-05-12 02:01:17.471 [INFO][76] felix/watchersyncer.go 221: All watchers have sync'd data - sending data and final sync
2023-05-12 02:01:17.471 [INFO][76] felix/watchersyncer.go 130: Sending status update Status=in-sync
2023-05-12 02:01:17.472 [INFO][76] felix/config_batcher.go 61: Host config update for this host: {{HostConfig(node=k8s-master01,name=DefaultEndpointToHostAction) Return 30945 <nil> 0s} 1}
2023-05-12 02:01:17.472 [INFO][76] felix/int_dataplane.go 1680: Received *proto.HostMetadataUpdate update from calculation graph msg=hostname:"k8s-master01" ipv4_addr:"192.168.56.61"
2023-05-12 02:01:17.472 [INFO][76] felix/config_batcher.go 61: Host config update for this host: {{HostConfig(node=k8s-master01,name=FloatingIPs) Disabled 30945 <nil> 0s} 1}
2023-05-12 02:01:17.472 [INFO][76] felix/int_dataplane.go 1680: Received *proto.HostMetadataUpdate update from calculation graph msg=hostname:"k8s-node01" ipv4_addr:"192.168.56.71"
2023-05-12 02:01:17.472 [INFO][76] felix/int_dataplane.go 1680: Received *proto.HostMetadataUpdate update from calculation graph msg=hostname:"k8s-node02" ipv4_addr:"192.168.56.72"
2023-05-12 02:01:17.472 [INFO][76] felix/int_dataplane.go 1680: Received *proto.IPAMPoolUpdate update from calculation graph msg=id:"172.16.0.0-12" pool:<cidr:"172.16.0.0/12" masquerade:true >
2023-05-12 02:01:17.475 [INFO][76] felix/config_batcher.go 102: Datamodel in sync, flushing config update
2023-05-12 02:01:17.475 [INFO][76] felix/config_batcher.go 112: Sending config update global: map[CalicoVersion:v3.24.5 ClusterGUID:ec9173da4f474fd597fefc04d70fe35c ClusterType:k8s,bgp FloatingIPs:Disabled LogSeverityScreen:Info ReportingIntervalSecs:0], host: map[DefaultEndpointToHostAction:Return FloatingIPs:Disabled IpInIpTunnelAddr:172.25.244.192].
2023-05-12 02:01:17.475 [INFO][76] felix/async_calc_graph.go 166: First time we've been in sync
2023-05-12 02:01:17.476 [INFO][76] felix/health.go 137: Health of component changed lastReport=health.HealthReport{Live:true, Ready:false, Detail:""} name="async_calc_graph" newReport=&health.HealthReport{Live:true, Ready:true, Detail:""}
2023-05-12 02:01:17.476 [INFO][76] felix/event_sequencer.go 259: Possible config update. global=map[string]string{"CalicoVersion":"v3.24.5", "ClusterGUID":"ec9173da4f474fd597fefc04d70fe35c", "ClusterType":"k8s,bgp", "FloatingIPs":"Disabled", "LogSeverityScreen":"Info", "ReportingIntervalSecs":"0"} host=map[string]string{"DefaultEndpointToHostAction":"Return", "FloatingIPs":"Disabled", "IpInIpTunnelAddr":"172.25.244.192"}
2023-05-12 02:01:17.476 [INFO][76] felix/config_params.go 435: Merging in config from datastore (global): map[CalicoVersion:v3.24.5 ClusterGUID:ec9173da4f474fd597fefc04d70fe35c ClusterType:k8s,bgp FloatingIPs:Disabled LogSeverityScreen:Info ReportingIntervalSecs:0]
2023-05-12 02:01:17.476 [INFO][76] felix/config_params.go 542: Parsing value for UsageReportingEnabled: false (from environment variable)
2023-05-12 02:01:17.477 [INFO][76] felix/config_params.go 578: Parsed value for UsageReportingEnabled: false (from environment variable)
2023-05-12 02:01:17.477 [INFO][76] felix/config_params.go 542: Parsing value for Ipv6Support: false (from environment variable)
2023-05-12 02:01:17.477 [INFO][76] felix/config_params.go 578: Parsed value for Ipv6Support: false (from environment variable)
2023-05-12 02:01:17.477 [INFO][76] felix/config_params.go 542: Parsing value for EtcdEndpoints: https://192.168.56.61:2379 (from environment variable)
2023-05-12 02:01:17.477 [INFO][76] felix/config_params.go 578: Parsed value for EtcdEndpoints: [https://192.168.56.61:2379/] (from environment variable)
2023-05-12 02:01:17.478 [INFO][76] felix/config_params.go 542: Parsing value for HealthEnabled: true (from environment variable)
2023-05-12 02:01:17.478 [INFO][76] felix/config_params.go 578: Parsed value for HealthEnabled: true (from environment variable)
2023-05-12 02:01:17.478 [INFO][76] felix/config_params.go 542: Parsing value for FelixHostname: k8s-master01 (from environment variable)
2023-05-12 02:01:17.478 [INFO][76] felix/config_params.go 578: Parsed value for FelixHostname: k8s-master01 (from environment variable)
2023-05-12 02:01:17.478 [INFO][76] felix/config_params.go 542: Parsing value for DefaultEndpointToHostAction: ACCEPT (from environment variable)
2023-05-12 02:01:17.479 [INFO][76] felix/config_params.go 578: Parsed value for DefaultEndpointToHostAction: ACCEPT (from environment variable)
2023-05-12 02:01:17.479 [INFO][76] felix/config_params.go 542: Parsing value for EtcdCertFile: /calico-secrets/etcd-cert (from environment variable)
2023-05-12 02:01:17.479 [INFO][76] felix/param_types.go 305: Looking for required file path="/calico-secrets/etcd-cert"
2023-05-12 02:01:17.479 [INFO][76] felix/config_params.go 578: Parsed value for EtcdCertFile: /calico-secrets/etcd-cert (from environment variable)
2023-05-12 02:01:17.479 [INFO][76] felix/config_params.go 542: Parsing value for EtcdCaFile: /calico-secrets/etcd-ca (from environment variable)
2023-05-12 02:01:17.479 [INFO][76] felix/param_types.go 305: Looking for required file path="/calico-secrets/etcd-ca"
2023-05-12 02:01:17.479 [INFO][76] felix/config_params.go 578: Parsed value for EtcdCaFile: /calico-secrets/etcd-ca (from environment variable)
2023-05-12 02:01:17.479 [INFO][76] felix/config_params.go 542: Parsing value for WireguardMTU: 0 (from environment variable)
2023-05-12 02:01:17.480 [INFO][76] felix/config_params.go 578: Parsed value for WireguardMTU: 0 (from environment variable)
2023-05-12 02:01:17.480 [INFO][76] felix/config_params.go 542: Parsing value for VXLANMTU: 0 (from environment variable)
2023-05-12 02:01:17.481 [INFO][76] felix/config_params.go 578: Parsed value for VXLANMTU: 0 (from environment variable)
2023-05-12 02:01:17.481 [INFO][76] felix/config_params.go 542: Parsing value for IpInIpMtu: 0 (from environment variable)
2023-05-12 02:01:17.481 [INFO][76] felix/config_params.go 578: Parsed value for IpInIpMtu: 0 (from environment variable)
2023-05-12 02:01:17.481 [INFO][76] felix/config_params.go 542: Parsing value for EtcdKeyFile: /calico-secrets/etcd-key (from environment variable)
2023-05-12 02:01:17.481 [INFO][76] felix/param_types.go 305: Looking for required file path="/calico-secrets/etcd-key"
2023-05-12 02:01:17.481 [INFO][76] felix/config_params.go 578: Parsed value for EtcdKeyFile: /calico-secrets/etcd-key (from environment variable)
2023-05-12 02:01:17.481 [INFO][76] felix/config_params.go 542: Parsing value for MetadataAddr: None (from config file)
2023-05-12 02:01:17.482 [INFO][76] felix/config_params.go 559: Value set to 'none', replacing with zero-value: "".
2023-05-12 02:01:17.482 [INFO][76] felix/config_params.go 578: Parsed value for MetadataAddr: (from config file)
2023-05-12 02:01:17.482 [INFO][76] felix/config_params.go 542: Parsing value for LogFilePath: None (from config file)
2023-05-12 02:01:17.482 [INFO][76] felix/config_params.go 559: Value set to 'none', replacing with zero-value: "".
2023-05-12 02:01:17.482 [INFO][76] felix/config_params.go 578: Parsed value for LogFilePath: (from config file)
2023-05-12 02:01:17.482 [INFO][76] felix/config_params.go 542: Parsing value for LogSeverityFile: None (from config file)
2023-05-12 02:01:17.482 [INFO][76] felix/config_params.go 559: Value set to 'none', replacing with zero-value: "".
2023-05-12 02:01:17.483 [INFO][76] felix/config_params.go 578: Parsed value for LogSeverityFile: (from config file)
2023-05-12 02:01:17.483 [INFO][76] felix/config_params.go 542: Parsing value for LogSeveritySys: None (from config file)
2023-05-12 02:01:17.483 [INFO][76] felix/config_params.go 559: Value set to 'none', replacing with zero-value: "".
2023-05-12 02:01:17.483 [INFO][76] felix/config_params.go 578: Parsed value for LogSeveritySys: (from config file)
2023-05-12 02:01:17.483 [INFO][76] felix/config_params.go 542: Parsing value for FloatingIPs: Disabled (from datastore (per-host))
2023-05-12 02:01:17.483 [INFO][76] felix/config_params.go 578: Parsed value for FloatingIPs: Disabled (from datastore (per-host))
2023-05-12 02:01:17.483 [INFO][76] felix/config_params.go 542: Parsing value for IpInIpTunnelAddr: 172.25.244.192 (from datastore (per-host))
2023-05-12 02:01:17.483 [INFO][76] felix/config_params.go 578: Parsed value for IpInIpTunnelAddr: 172.25.244.192 (from datastore (per-host))
2023-05-12 02:01:17.483 [INFO][76] felix/config_params.go 542: Parsing value for DefaultEndpointToHostAction: Return (from datastore (per-host))
2023-05-12 02:01:17.484 [INFO][76] felix/config_params.go 578: Parsed value for DefaultEndpointToHostAction: RETURN (from datastore (per-host))
2023-05-12 02:01:17.484 [INFO][76] felix/config_params.go 581: Skipping config value for DefaultEndpointToHostAction from datastore (per-host); already have a value from environment variable
2023-05-12 02:01:17.484 [INFO][76] felix/config_params.go 542: Parsing value for ClusterGUID: ec9173da4f474fd597fefc04d70fe35c (from datastore (global))
2023-05-12 02:01:17.484 [INFO][76] felix/config_params.go 578: Parsed value for ClusterGUID: ec9173da4f474fd597fefc04d70fe35c (from datastore (global))
2023-05-12 02:01:17.485 [INFO][76] felix/config_params.go 542: Parsing value for ClusterType: k8s,bgp (from datastore (global))
2023-05-12 02:01:17.485 [INFO][76] felix/config_params.go 578: Parsed value for ClusterType: k8s,bgp (from datastore (global))
2023-05-12 02:01:17.485 [INFO][76] felix/config_params.go 542: Parsing value for CalicoVersion: v3.24.5 (from datastore (global))
2023-05-12 02:01:17.485 [INFO][76] felix/config_params.go 578: Parsed value for CalicoVersion: v3.24.5 (from datastore (global))
2023-05-12 02:01:17.485 [INFO][76] felix/config_params.go 542: Parsing value for LogSeverityScreen: Info (from datastore (global))
2023-05-12 02:01:17.485 [INFO][76] felix/config_params.go 578: Parsed value for LogSeverityScreen: INFO (from datastore (global))
2023-05-12 02:01:17.485 [INFO][76] felix/config_params.go 542: Parsing value for ReportingIntervalSecs: 0 (from datastore (global))
2023-05-12 02:01:17.486 [INFO][76] felix/config_params.go 578: Parsed value for ReportingIntervalSecs: 0s (from datastore (global))
2023-05-12 02:01:17.486 [INFO][76] felix/config_params.go 542: Parsing value for FloatingIPs: Disabled (from datastore (global))
2023-05-12 02:01:17.486 [INFO][76] felix/config_params.go 578: Parsed value for FloatingIPs: Disabled (from datastore (global))
2023-05-12 02:01:17.486 [INFO][76] felix/config_params.go 581: Skipping config value for FloatingIPs from datastore (global); already have a value from datastore (per-host)
2023-05-12 02:01:17.487 [INFO][76] felix/config_params.go 435: Merging in config from datastore (per-host): map[DefaultEndpointToHostAction:Return FloatingIPs:Disabled IpInIpTunnelAddr:172.25.244.192]
2023-05-12 02:01:17.487 [INFO][76] felix/config_params.go 542: Parsing value for WireguardMTU: 0 (from environment variable)
2023-05-12 02:01:17.487 [INFO][76] felix/config_params.go 578: Parsed value for WireguardMTU: 0 (from environment variable)
2023-05-12 02:01:17.487 [INFO][76] felix/config_params.go 542: Parsing value for VXLANMTU: 0 (from environment variable)
2023-05-12 02:01:17.487 [INFO][76] felix/config_params.go 578: Parsed value for VXLANMTU: 0 (from environment variable)
2023-05-12 02:01:17.487 [INFO][76] felix/config_params.go 542: Parsing value for IpInIpMtu: 0 (from environment variable)
2023-05-12 02:01:17.487 [INFO][76] felix/config_params.go 578: Parsed value for IpInIpMtu: 0 (from environment variable)
2023-05-12 02:01:17.488 [INFO][76] felix/config_params.go 542: Parsing value for EtcdKeyFile: /calico-secrets/etcd-key (from environment variable)
2023-05-12 02:01:17.488 [INFO][76] felix/param_types.go 305: Looking for required file path="/calico-secrets/etcd-key"
2023-05-12 02:01:17.488 [INFO][76] felix/config_params.go 578: Parsed value for EtcdKeyFile: /calico-secrets/etcd-key (from environment variable)
2023-05-12 02:01:17.488 [INFO][76] felix/config_params.go 542: Parsing value for EtcdCaFile: /calico-secrets/etcd-ca (from environment variable)
2023-05-12 02:01:17.488 [INFO][76] felix/param_types.go 305: Looking for required file path="/calico-secrets/etcd-ca"
2023-05-12 02:01:17.488 [INFO][76] felix/config_params.go 578: Parsed value for EtcdCaFile: /calico-secrets/etcd-ca (from environment variable)
2023-05-12 02:01:17.488 [INFO][76] felix/config_params.go 542: Parsing value for HealthEnabled: true (from environment variable)
2023-05-12 02:01:17.488 [INFO][76] felix/config_params.go 578: Parsed value for HealthEnabled: true (from environment variable)
2023-05-12 02:01:17.489 [INFO][76] felix/config_params.go 542: Parsing value for FelixHostname: k8s-master01 (from environment variable)
2023-05-12 02:01:17.489 [INFO][76] felix/config_params.go 578: Parsed value for FelixHostname: k8s-master01 (from environment variable)
2023-05-12 02:01:17.489 [INFO][76] felix/config_params.go 542: Parsing value for DefaultEndpointToHostAction: ACCEPT (from environment variable)
2023-05-12 02:01:17.489 [INFO][76] felix/config_params.go 578: Parsed value for DefaultEndpointToHostAction: ACCEPT (from environment variable)
2023-05-12 02:01:17.489 [INFO][76] felix/config_params.go 542: Parsing value for EtcdCertFile: /calico-secrets/etcd-cert (from environment variable)
2023-05-12 02:01:17.489 [INFO][76] felix/param_types.go 305: Looking for required file path="/calico-secrets/etcd-cert"
2023-05-12 02:01:17.489 [INFO][76] felix/config_params.go 578: Parsed value for EtcdCertFile: /calico-secrets/etcd-cert (from environment variable)
2023-05-12 02:01:17.489 [INFO][76] felix/config_params.go 542: Parsing value for UsageReportingEnabled: false (from environment variable)
2023-05-12 02:01:17.489 [INFO][76] felix/config_params.go 578: Parsed value for UsageReportingEnabled: false (from environment variable)
2023-05-12 02:01:17.490 [INFO][76] felix/config_params.go 542: Parsing value for Ipv6Support: false (from environment variable)
2023-05-12 02:01:17.490 [INFO][76] felix/config_params.go 578: Parsed value for Ipv6Support: false (from environment variable)
2023-05-12 02:01:17.490 [INFO][76] felix/config_params.go 542: Parsing value for EtcdEndpoints: https://192.168.56.61:2379 (from environment variable)
2023-05-12 02:01:17.490 [INFO][76] felix/config_params.go 578: Parsed value for EtcdEndpoints: [https://192.168.56.61:2379/] (from environment variable)
2023-05-12 02:01:17.490 [INFO][76] felix/config_params.go 542: Parsing value for LogFilePath: None (from config file)
2023-05-12 02:01:17.491 [INFO][76] felix/config_params.go 559: Value set to 'none', replacing with zero-value: "".
2023-05-12 02:01:17.491 [INFO][76] felix/config_params.go 578: Parsed value for LogFilePath: (from config file)
2023-05-12 02:01:17.491 [INFO][76] felix/config_params.go 542: Parsing value for LogSeverityFile: None (from config file)
2023-05-12 02:01:17.491 [INFO][76] felix/config_params.go 559: Value set to 'none', replacing with zero-value: "".
2023-05-12 02:01:17.491 [INFO][76] felix/config_params.go 578: Parsed value for LogSeverityFile: (from config file)
2023-05-12 02:01:17.491 [INFO][76] felix/config_params.go 542: Parsing value for LogSeveritySys: None (from config file)
2023-05-12 02:01:17.491 [INFO][76] felix/config_params.go 559: Value set to 'none', replacing with zero-value: "".
2023-05-12 02:01:17.491 [INFO][76] felix/config_params.go 578: Parsed value for LogSeveritySys: (from config file)
2023-05-12 02:01:17.491 [INFO][76] felix/config_params.go 542: Parsing value for MetadataAddr: None (from config file)
2023-05-12 02:01:17.491 [INFO][76] felix/config_params.go 559: Value set to 'none', replacing with zero-value: "".
2023-05-12 02:01:17.491 [INFO][76] felix/config_params.go 578: Parsed value for MetadataAddr: (from config file)
2023-05-12 02:01:17.492 [INFO][76] felix/config_params.go 542: Parsing value for DefaultEndpointToHostAction: Return (from datastore (per-host))
2023-05-12 02:01:17.492 [INFO][76] felix/config_params.go 578: Parsed value for DefaultEndpointToHostAction: RETURN (from datastore (per-host))
2023-05-12 02:01:17.492 [INFO][76] felix/config_params.go 581: Skipping config value for DefaultEndpointToHostAction from datastore (per-host); already have a value from environment variable
2023-05-12 02:01:17.492 [INFO][76] felix/config_params.go 542: Parsing value for FloatingIPs: Disabled (from datastore (per-host))
2023-05-12 02:01:17.492 [INFO][76] felix/config_params.go 578: Parsed value for FloatingIPs: Disabled (from datastore (per-host))
2023-05-12 02:01:17.492 [INFO][76] felix/config_params.go 542: Parsing value for IpInIpTunnelAddr: 172.25.244.192 (from datastore (per-host))
2023-05-12 02:01:17.492 [INFO][76] felix/config_params.go 578: Parsed value for IpInIpTunnelAddr: 172.25.244.192 (from datastore (per-host))
2023-05-12 02:01:17.493 [INFO][76] felix/config_params.go 542: Parsing value for ClusterGUID: ec9173da4f474fd597fefc04d70fe35c (from datastore (global))
2023-05-12 02:01:17.493 [INFO][76] felix/config_params.go 578: Parsed value for ClusterGUID: ec9173da4f474fd597fefc04d70fe35c (from datastore (global))
2023-05-12 02:01:17.493 [INFO][76] felix/config_params.go 542: Parsing value for ClusterType: k8s,bgp (from datastore (global))
2023-05-12 02:01:17.493 [INFO][76] felix/config_params.go 578: Parsed value for ClusterType: k8s,bgp (from datastore (global))
2023-05-12 02:01:17.493 [INFO][76] felix/config_params.go 542: Parsing value for CalicoVersion: v3.24.5 (from datastore (global))
2023-05-12 02:01:17.493 [INFO][76] felix/config_params.go 578: Parsed value for CalicoVersion: v3.24.5 (from datastore (global))
2023-05-12 02:01:17.494 [INFO][76] felix/config_params.go 542: Parsing value for LogSeverityScreen: Info (from datastore (global))
2023-05-12 02:01:17.494 [INFO][76] felix/config_params.go 578: Parsed value for LogSeverityScreen: INFO (from datastore (global))
2023-05-12 02:01:17.494 [INFO][76] felix/config_params.go 542: Parsing value for ReportingIntervalSecs: 0 (from datastore (global))
2023-05-12 02:01:17.494 [INFO][76] felix/config_params.go 578: Parsed value for ReportingIntervalSecs: 0s (from datastore (global))
2023-05-12 02:01:17.494 [INFO][76] felix/config_params.go 542: Parsing value for FloatingIPs: Disabled (from datastore (global))
2023-05-12 02:01:17.494 [INFO][76] felix/config_params.go 578: Parsed value for FloatingIPs: Disabled (from datastore (global))
2023-05-12 02:01:17.494 [INFO][76] felix/config_params.go 581: Skipping config value for FloatingIPs from datastore (global); already have a value from datastore (per-host)
2023-05-12 02:01:17.494 [INFO][76] felix/async_calc_graph.go 220: First flush after becoming in sync, sending InSync message.
2023-05-12 02:01:17.494 [INFO][76] felix/daemon.go 1153: Datastore now in sync.
2023-05-12 02:01:17.495 [INFO][76] felix/daemon.go 1155: Datastore in sync for first time, sending message to status reporter.
2023-05-12 02:01:17.495 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"pv-protection-controller" > labels:<key:"projectcalico.org/name" value:"pv-protection-controller" >
2023-05-12 02:01:17.495 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-public" name:"default" > labels:<key:"projectcalico.org/name" value:"default" >
2023-05-12 02:01:17.496 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"generic-garbage-collector" > labels:<key:"projectcalico.org/name" value:"generic-garbage-collector" >
2023-05-12 02:01:17.496 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"clusterrole-aggregation-controller" > labels:<key:"projectcalico.org/name" value:"clusterrole-aggregation-controller" >
2023-05-12 02:01:17.496 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"deployment-controller" > labels:<key:"projectcalico.org/name" value:"deployment-controller" >
2023-05-12 02:01:17.496 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"token-cleaner" > labels:<key:"projectcalico.org/name" value:"token-cleaner" >
2023-05-12 02:01:17.496 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"default" name:"default" > labels:<key:"projectcalico.org/name" value:"default" >
2023-05-12 02:01:17.496 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"certificate-controller" > labels:<key:"projectcalico.org/name" value:"certificate-controller" >
2023-05-12 02:01:17.496 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"endpointslice-controller" > labels:<key:"projectcalico.org/name" value:"endpointslice-controller" >
2023-05-12 02:01:17.496 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"horizontal-pod-autoscaler" > labels:<key:"projectcalico.org/name" value:"horizontal-pod-autoscaler" >
2023-05-12 02:01:17.497 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"replication-controller" > labels:<key:"projectcalico.org/name" value:"replication-controller" >
2023-05-12 02:01:17.497 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"calico-kube-controllers" > labels:<key:"projectcalico.org/name" value:"calico-kube-controllers" >
2023-05-12 02:01:17.497 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"calico-node" > labels:<key:"projectcalico.org/name" value:"calico-node" >
2023-05-12 02:01:17.497 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"endpoint-controller" > labels:<key:"projectcalico.org/name" value:"endpoint-controller" >
2023-05-12 02:01:17.497 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"ttl-controller" > labels:<key:"projectcalico.org/name" value:"ttl-controller" >
2023-05-12 02:01:17.497 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"bootstrap-signer" > labels:<key:"projectcalico.org/name" value:"bootstrap-signer" >
2023-05-12 02:01:17.498 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"cronjob-controller" > labels:<key:"projectcalico.org/name" value:"cronjob-controller" >
2023-05-12 02:01:17.498 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"job-controller" > labels:<key:"projectcalico.org/name" value:"job-controller" >
2023-05-12 02:01:17.498 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"root-ca-cert-publisher" > labels:<key:"projectcalico.org/name" value:"root-ca-cert-publisher" >
2023-05-12 02:01:17.498 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"disruption-controller" > labels:<key:"projectcalico.org/name" value:"disruption-controller" >
2023-05-12 02:01:17.498 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"endpointslicemirroring-controller" > labels:<key:"projectcalico.org/name" value:"endpointslicemirroring-controller" >
2023-05-12 02:01:17.498 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"ephemeral-volume-controller" > labels:<key:"projectcalico.org/name" value:"ephemeral-volume-controller" >
2023-05-12 02:01:17.499 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"expand-controller" > labels:<key:"projectcalico.org/name" value:"expand-controller" >
2023-05-12 02:01:17.499 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"namespace-controller" > labels:<key:"projectcalico.org/name" value:"namespace-controller" >
2023-05-12 02:01:17.499 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"node-controller" > labels:<key:"projectcalico.org/name" value:"node-controller" >
2023-05-12 02:01:17.499 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-node-lease" name:"default" > labels:<key:"projectcalico.org/name" value:"default" >
2023-05-12 02:01:17.499 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"default" > labels:<key:"projectcalico.org/name" value:"default" >
2023-05-12 02:01:17.499 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"service-controller" > labels:<key:"projectcalico.org/name" value:"service-controller" >
2023-05-12 02:01:17.499 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"resourcequota-controller" > labels:<key:"projectcalico.org/name" value:"resourcequota-controller" >
2023-05-12 02:01:17.499 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"service-account-controller" > labels:<key:"projectcalico.org/name" value:"service-account-controller" >
2023-05-12 02:01:17.500 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"pod-garbage-collector" > labels:<key:"projectcalico.org/name" value:"pod-garbage-collector" >
2023-05-12 02:01:17.500 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"pvc-protection-controller" > labels:<key:"projectcalico.org/name" value:"pvc-protection-controller" >
2023-05-12 02:01:17.500 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"replicaset-controller" > labels:<key:"projectcalico.org/name" value:"replicaset-controller" >
2023-05-12 02:01:17.500 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"attachdetach-controller" > labels:<key:"projectcalico.org/name" value:"attachdetach-controller" >
2023-05-12 02:01:17.500 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"daemon-set-controller" > labels:<key:"projectcalico.org/name" value:"daemon-set-controller" >
2023-05-12 02:01:17.500 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"ttl-after-finished-controller" > labels:<key:"projectcalico.org/name" value:"ttl-after-finished-controller" >
2023-05-12 02:01:17.501 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"persistent-volume-binder" > labels:<key:"projectcalico.org/name" value:"persistent-volume-binder" >
2023-05-12 02:01:17.501 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"statefulset-controller" > labels:<key:"projectcalico.org/name" value:"statefulset-controller" >
2023-05-12 02:01:17.501 [INFO][76] felix/int_dataplane.go 1680: Received *proto.NamespaceUpdate update from calculation graph msg=id:<name:"default" > labels:<key:"kubernetes.io/metadata.name" value:"default" > labels:<key:"projectcalico.org/name" value:"default" >
2023-05-12 02:01:17.501 [INFO][76] felix/int_dataplane.go 1680: Received *proto.NamespaceUpdate update from calculation graph msg=id:<name:"kube-node-lease" > labels:<key:"kubernetes.io/metadata.name" value:"kube-node-lease" > labels:<key:"projectcalico.org/name" value:"kube-node-lease" >
2023-05-12 02:01:17.501 [INFO][76] felix/int_dataplane.go 1680: Received *proto.NamespaceUpdate update from calculation graph msg=id:<name:"kube-public" > labels:<key:"kubernetes.io/metadata.name" value:"kube-public" > labels:<key:"projectcalico.org/name" value:"kube-public" >
2023-05-12 02:01:17.501 [INFO][76] felix/int_dataplane.go 1680: Received *proto.NamespaceUpdate update from calculation graph msg=id:<name:"kube-system" > labels:<key:"kubernetes.io/metadata.name" value:"kube-system" > labels:<key:"projectcalico.org/name" value:"kube-system" >
2023-05-12 02:01:17.502 [INFO][76] felix/int_dataplane.go 1680: Received *proto.Encapsulation update from calculation graph msg=ipip_enabled:true
2023-05-12 02:01:17.502 [INFO][76] felix/int_dataplane.go 1680: Received *proto.InSync update from calculation graph msg=
2023-05-12 02:01:17.502 [INFO][76] felix/int_dataplane.go 1688: Datastore in sync, flushing the dataplane for the first time... timeSinceStart=278.408115ms
2023-05-12 02:01:17.502 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-from-wl-dispatch" ipVersion=0x4 table="filter"
2023-05-12 02:01:17.502 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-to-wl-dispatch" ipVersion=0x4 table="filter"
2023-05-12 02:01:17.502 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-from-host-endpoint" ipVersion=0x4 table="raw"
2023-05-12 02:01:17.502 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-to-host-endpoint" ipVersion=0x4 table="raw"
2023-05-12 02:01:17.502 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-from-host-endpoint" ipVersion=0x4 table="filter"
2023-05-12 02:01:17.503 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-to-host-endpoint" ipVersion=0x4 table="filter"
2023-05-12 02:01:17.503 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-from-hep-forward" ipVersion=0x4 table="filter"
2023-05-12 02:01:17.503 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-to-hep-forward" ipVersion=0x4 table="filter"
2023-05-12 02:01:17.503 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-from-host-endpoint" ipVersion=0x4 table="mangle"
2023-05-12 02:01:17.503 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-to-host-endpoint" ipVersion=0x4 table="mangle"
2023-05-12 02:01:17.503 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-rpf-skip" ipVersion=0x4 table="raw"
2023-05-12 02:01:17.503 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-set-endpoint-mark" ipVersion=0x4 table="filter"
2023-05-12 02:01:17.504 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-from-endpoint-mark" ipVersion=0x4 table="filter"
2023-05-12 02:01:17.504 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-fip-dnat" ipVersion=0x4 table="nat"
2023-05-12 02:01:17.504 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-fip-snat" ipVersion=0x4 table="nat"
2023-05-12 02:01:17.504 [INFO][76] felix/masq_mgr.go 145: IPAM pools updated, refreshing iptables rule ipVersion=0x4
2023-05-12 02:01:17.504 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-nat-outgoing" ipVersion=0x4 table="nat"
2023-05-12 02:01:17.504 [INFO][76] felix/ipip_mgr.go 221: All-hosts IP set out-of sync, refreshing it.
2023-05-12 02:01:17.505 [INFO][76] felix/ipsets.go 130: Queueing IP set for creation family="inet" setID="all-hosts-net" setType="hash:net"
2023-05-12 02:01:17.505 [INFO][76] felix/table.go 508: Queueing update of chain. chainName="cali-cidr-block" ipVersion=0x4 table="filter"
2023-05-12 02:01:17.593 [INFO][76] felix/wireguard.go 1701: Trying to connect to linkClient ipVersion=0x4
2023-05-12 02:01:17.594 [INFO][76] felix/route_rule.go 189: Trying to connect to netlink
2023-05-12 02:01:17.594 [INFO][76] felix/wireguard.go 632: Public key out of sync or updated ipVersion=0x4 ourPublicKey=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
2023-05-12 02:01:17.599 [INFO][76] felix/ipsets.go 779: Doing full IP set rewrite family="inet" numMembersInPendingReplace=1 setID="all-ipam-pools"
2023-05-12 02:01:17.599 [INFO][76] felix/ipsets.go 779: Doing full IP set rewrite family="inet" numMembersInPendingReplace=1 setID="masq-ipam-pools"
2023-05-12 02:01:17.599 [INFO][76] felix/ipsets.go 779: Doing full IP set rewrite family="inet" numMembersInPendingReplace=6 setID="this-host"
2023-05-12 02:01:17.599 [INFO][76] felix/ipsets.go 779: Doing full IP set rewrite family="inet" numMembersInPendingReplace=3 setID="all-hosts-net"
2023-05-12 02:01:17.640 [INFO][76] felix/int_dataplane.go 1241: Linux interface state changed. ifIndex=21 ifaceName="calico_tmp_A" state="down"
2023-05-12 02:01:17.640 [INFO][76] felix/int_dataplane.go 1277: Linux interface addrs changed. addrs=set.Set{} ifaceName="calico_tmp_A"
2023-05-12 02:01:17.640 [INFO][76] felix/int_dataplane.go 1241: Linux interface state changed. ifIndex=20 ifaceName="calico_tmp_B" state="down"
2023-05-12 02:01:17.641 [INFO][76] felix/int_dataplane.go 1277: Linux interface addrs changed. addrs=set.Set{} ifaceName="calico_tmp_B"
2023-05-12 02:01:17.645 [INFO][76] felix/int_dataplane.go 1828: Completed first update to dataplane. secsSinceStart=0.421887356
2023-05-12 02:01:17.652 [INFO][76] felix/int_dataplane.go 1241: Linux interface state changed. ifIndex=21 ifaceName="calico_tmp_A" state=""
2023-05-12 02:01:17.652 [INFO][76] felix/int_dataplane.go 1277: Linux interface addrs changed. addrs=<nil> ifaceName="calico_tmp_A"
2023-05-12 02:01:17.652 [INFO][76] felix/int_dataplane.go 1241: Linux interface state changed. ifIndex=20 ifaceName="calico_tmp_B" state=""
2023-05-12 02:01:17.652 [INFO][76] felix/int_dataplane.go 1277: Linux interface addrs changed. addrs=<nil> ifaceName="calico_tmp_B"
2023-05-12 02:01:17.653 [INFO][76] felix/health.go 137: Health of component changed lastReport=health.HealthReport{Live:true, Ready:false, Detail:""} name="int_dataplane" newReport=&health.HealthReport{Live:true, Ready:true, Detail:""}
2023-05-12 02:01:17.653 [INFO][76] felix/int_dataplane.go 1695: Received interface update msg=&intdataplane.ifaceUpdate{Name:"calico_tmp_A", State:"down", Index:21}
2023-05-12 02:01:17.653 [INFO][76] felix/int_dataplane.go 1695: Received interface update msg=&intdataplane.ifaceUpdate{Name:"calico_tmp_B", State:"down", Index:20}
2023-05-12 02:01:17.653 [INFO][76] felix/int_dataplane.go 1695: Received interface update msg=&intdataplane.ifaceUpdate{Name:"calico_tmp_A", State:"", Index:21}
2023-05-12 02:01:17.653 [INFO][76] felix/int_dataplane.go 1695: Received interface update msg=&intdataplane.ifaceUpdate{Name:"calico_tmp_B", State:"", Index:20}
2023-05-12 02:01:17.653 [INFO][76] felix/int_dataplane.go 1838: Dataplane updates throttled
2023-05-12 02:01:17.654 [INFO][76] felix/int_dataplane.go 1713: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"calico_tmp_A", Addrs:set.Typed[string]{}}
2023-05-12 02:01:17.654 [INFO][76] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"calico_tmp_A", Addrs:set.Typed[string]{}}
2023-05-12 02:01:17.654 [INFO][76] felix/int_dataplane.go 1713: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"calico_tmp_B", Addrs:set.Typed[string]{}}
2023-05-12 02:01:17.654 [INFO][76] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"calico_tmp_B", Addrs:set.Typed[string]{}}
2023-05-12 02:01:17.654 [INFO][76] felix/int_dataplane.go 1713: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"calico_tmp_A", Addrs:set.Set[string](nil)}
2023-05-12 02:01:17.654 [INFO][76] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"calico_tmp_A", Addrs:set.Set[string](nil)}
2023-05-12 02:01:17.654 [INFO][76] felix/int_dataplane.go 1713: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"calico_tmp_B", Addrs:set.Set[string](nil)}
2023-05-12 02:01:17.654 [INFO][76] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"calico_tmp_B", Addrs:set.Set[string](nil)}
2023-05-12 02:01:18.641 [INFO][76] felix/int_dataplane.go 1805: Dataplane updates no longer throttled
bird: device1: Initializing
bird: direct1: Initializing
bird: device1: Starting
bird: device1: Connected to table master
bird: device1: State changed to feed
bird: direct1: Starting
bird: direct1: Connected to table master
bird: direct1: State changed to feed
bird: Graceful restart started
bird: Graceful restart done
bird: Started
bird: device1: State changed to up
bird: direct1: State changed to up
bird: device1: Initializing
bird: direct1: Initializing
bird: Mesh_192_168_56_71: Initializing
bird: Mesh_192_168_56_72: Initializing
bird: device1: Starting
bird: device1: Connected to table master
bird: device1: State changed to feed
bird: direct1: Starting
bird: direct1: Connected to table master
bird: direct1: State changed to feed
bird: Mesh_192_168_56_71: Starting
bird: Mesh_192_168_56_71: State changed to start
bird: Mesh_192_168_56_72: Starting
bird: Mesh_192_168_56_72: State changed to start
bird: Graceful restart started
bird: Started
bird: device1: State changed to up
bird: direct1: State changed to up
bird: Mesh_192_168_56_71: Connected to table master
bird: Mesh_192_168_56_71: State changed to feed
bird: Mesh_192_168_56_71: State changed to up
bird: Mesh_192_168_56_72: Connected to table master
bird: Mesh_192_168_56_72: State changed to feed
bird: Mesh_192_168_56_72: State changed to up
bird: Graceful restart done
2023-05-12 02:01:26.360 [INFO][76] felix/health.go 242: Overall health status changed newStatus=&health.HealthReport{Live:true, Ready:true, Detail:"+------------------+---------+----------------+-----------------+--------+\n| COMPONENT | TIMEOUT | LIVENESS | READINESS | DETAIL |\n+------------------+---------+----------------+-----------------+--------+\n| async_calc_graph | 20s | reporting live | reporting ready | |\n| felix-startup | 0s | reporting live | reporting ready | |\n| int_dataplane | 1m30s | reporting live | reporting ready | |\n+------------------+---------+----------------+-----------------+--------+"}
2023-05-12 02:02:17.269 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:02:20.015 [INFO][76] felix/summary.go 100: Summarising 18 dataplane reconciliation loops over 1m2.7s: avg=23ms longest=143ms (resync-filter-v4,resync-ipsets-v4,resync-mangle-v4,resync-nat-v4,resync-raw-v4,resync-routes-v4,resync-routes-v4,resync-rules-v4,update-filter-v4,update-ipsets-4,update-mangle-v4,update-nat-v4,update-raw-v4)
2023-05-12 02:03:17.272 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:03:23.899 [INFO][76] felix/summary.go 100: Summarising 10 dataplane reconciliation loops over 1m3.9s: avg=7ms longest=18ms (resync-filter-v4,resync-mangle-v4)
2023-05-12 02:04:17.273 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:04:25.502 [INFO][76] felix/summary.go 100: Summarising 9 dataplane reconciliation loops over 1m1.6s: avg=16ms longest=29ms (resync-nat-v4,resync-raw-v4)
2023-05-12 02:04:35.687 [INFO][76] felix/int_dataplane.go 1680: Received *proto.ServiceAccountUpdate update from calculation graph msg=id:<namespace:"kube-system" name:"coredns" > labels:<key:"addonmanager.kubernetes.io/mode" value:"Reconcile" > labels:<key:"kubernetes.io/cluster-service" value:"true" > labels:<key:"projectcalico.org/name" value:"coredns" >
2023-05-12 02:05:17.286 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:05:29.045 [INFO][76] felix/summary.go 100: Summarising 10 dataplane reconciliation loops over 1m3.5s: avg=15ms longest=48ms (resync-filter-v4,resync-mangle-v4,resync-nat-v4)
2023-05-12 02:06:17.288 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:06:32.482 [INFO][76] felix/summary.go 100: Summarising 10 dataplane reconciliation loops over 1m3.4s: avg=7ms longest=14ms (resync-filter-v4,resync-mangle-v4)
2023-05-12 02:07:17.289 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:07:36.115 [INFO][76] felix/summary.go 100: Summarising 10 dataplane reconciliation loops over 1m3.6s: avg=8ms longest=22ms ()
2023-05-12 02:08:17.290 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:08:38.800 [INFO][76] felix/summary.go 100: Summarising 10 dataplane reconciliation loops over 1m2.7s: avg=8ms longest=22ms (resync-ipsets-v4)
2023-05-12 02:09:17.292 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:09:42.027 [INFO][76] felix/summary.go 100: Summarising 8 dataplane reconciliation loops over 1m3.2s: avg=7ms longest=18ms (resync-ipsets-v4)
2023-05-12 02:10:17.293 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:10:45.277 [INFO][76] felix/summary.go 100: Summarising 9 dataplane reconciliation loops over 1m3.3s: avg=12ms longest=20ms (resync-ipsets-v4)
2023-05-12 02:11:17.307 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:11:47.904 [INFO][76] felix/summary.go 100: Summarising 10 dataplane reconciliation loops over 1m2.6s: avg=9ms longest=21ms (resync-ipsets-v4)
2023-05-12 02:12:17.309 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:12:49.610 [INFO][76] felix/summary.go 100: Summarising 8 dataplane reconciliation loops over 1m1.7s: avg=6ms longest=11ms ()
2023-05-12 02:13:17.312 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:13:49.641 [INFO][76] felix/summary.go 100: Summarising 9 dataplane reconciliation loops over 1m0s: avg=10ms longest=29ms (resync-ipsets-v4)
2023-05-12 02:14:17.312 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:14:53.580 [INFO][76] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m3.9s: avg=12ms longest=26ms (resync-ipsets-v4)
2023-05-12 02:15:17.324 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:15:57.258 [INFO][76] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m3.7s: avg=7ms longest=14ms (resync-ipsets-v4)
2023-05-12 02:16:17.325 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:16:59.774 [INFO][76] felix/summary.go 100: Summarising 7 dataplane reconciliation loops over 1m2.5s: avg=16ms longest=43ms ()
2023-05-12 02:17:17.339 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:18:01.445 [INFO][76] felix/summary.go 100: Summarising 9 dataplane reconciliation loops over 1m1.7s: avg=16ms longest=50ms (resync-ipsets-v4)
2023-05-12 02:18:17.339 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:19:04.154 [INFO][76] felix/summary.go 100: Summarising 10 dataplane reconciliation loops over 1m2.7s: avg=9ms longest=21ms (resync-ipsets-v4)
2023-05-12 02:19:17.351 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:20:05.778 [INFO][76] felix/summary.go 100: Summarising 9 dataplane reconciliation loops over 1m1.6s: avg=8ms longest=13ms (resync-filter-v4,resync-mangle-v4)
2023-05-12 02:20:17.352 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:21:07.528 [INFO][76] felix/summary.go 100: Summarising 8 dataplane reconciliation loops over 1m1.8s: avg=8ms longest=19ms (resync-ipsets-v4)
2023-05-12 02:21:17.366 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:22:09.025 [INFO][76] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m1.5s: avg=7ms longest=13ms (resync-filter-v4,resync-mangle-v4)
2023-05-12 02:22:17.366 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:23:12.387 [INFO][76] felix/summary.go 100: Summarising 9 dataplane reconciliation loops over 1m3.4s: avg=7ms longest=13ms (resync-filter-v4,resync-mangle-v4)
2023-05-12 02:23:17.368 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:24:15.963 [INFO][76] felix/summary.go 100: Summarising 8 dataplane reconciliation loops over 1m3.6s: avg=8ms longest=19ms (resync-ipsets-v4)
2023-05-12 02:24:17.372 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:25:17.375 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:25:18.785 [INFO][76] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m2.8s: avg=15ms longest=47ms ()
2023-05-12 02:26:17.376 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:26:22.322 [INFO][76] felix/summary.go 100: Summarising 8 dataplane reconciliation loops over 1m3.5s: avg=19ms longest=58ms (resync-filter-v4,resync-mangle-v4,resync-nat-v4)
2023-05-12 02:27:17.391 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:27:26.644 [INFO][76] felix/summary.go 100: Summarising 12 dataplane reconciliation loops over 1m4.3s: avg=10ms longest=24ms (resync-ipsets-v4)
2023-05-12 02:28:17.392 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:28:28.779 [INFO][76] felix/summary.go 100: Summarising 8 dataplane reconciliation loops over 1m2.1s: avg=6ms longest=11ms ()
2023-05-12 02:29:17.404 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:29:32.725 [INFO][76] felix/summary.go 100: Summarising 10 dataplane reconciliation loops over 1m3.9s: avg=10ms longest=23ms (resync-ipsets-v4)
2023-05-12 02:30:17.405 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:30:36.683 [INFO][76] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m4s: avg=11ms longest=25ms (resync-mangle-v4)
2023-05-12 02:31:17.407 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:31:38.300 [INFO][76] felix/summary.go 100: Summarising 8 dataplane reconciliation loops over 1m1.6s: avg=10ms longest=21ms (resync-ipsets-v4)
2023-05-12 02:31:55.609 [INFO][76] felix/int_dataplane.go 1241: Linux interface state changed. ifIndex=5 ifaceName="kube-ipvs0" state="up"
2023-05-12 02:31:55.610 [INFO][76] felix/int_dataplane.go 1695: Received interface update msg=&intdataplane.ifaceUpdate{Name:"kube-ipvs0", State:"up", Index:5}
2023-05-12 02:32:17.408 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:32:41.835 [INFO][76] felix/summary.go 100: Summarising 10 dataplane reconciliation loops over 1m3.5s: avg=8ms longest=19ms (resync-filter-v4,resync-nat-v4)
2023-05-12 02:33:17.420 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:33:46.353 [INFO][76] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m4.5s: avg=10ms longest=28ms (resync-filter-v4,resync-nat-v4)
2023-05-12 02:34:17.423 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:34:49.953 [INFO][76] felix/summary.go 100: Summarising 8 dataplane reconciliation loops over 1m3.6s: avg=7ms longest=17ms ()
2023-05-12 02:35:17.424 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:35:52.040 [INFO][76] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m2.1s: avg=12ms longest=29ms (resync-ipsets-v4)
2023-05-12 02:36:17.425 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:36:55.654 [INFO][76] felix/summary.go 100: Summarising 8 dataplane reconciliation loops over 1m3.6s: avg=12ms longest=25ms (resync-ipsets-v4)
2023-05-12 02:37:17.441 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:38:00.583 [INFO][76] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m4.9s: avg=8ms longest=19ms ()
2023-05-12 02:38:17.443 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:39:02.379 [INFO][76] felix/summary.go 100: Summarising 9 dataplane reconciliation loops over 1m1.8s: avg=8ms longest=11ms (resync-ipsets-v4)
2023-05-12 02:39:17.444 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:40:06.481 [INFO][76] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m4.1s: avg=11ms longest=37ms (resync-ipsets-v4)
2023-05-12 02:40:17.446 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:41:10.074 [INFO][76] felix/summary.go 100: Summarising 8 dataplane reconciliation loops over 1m3.6s: avg=7ms longest=17ms (resync-ipsets-v4)
2023-05-12 02:41:17.458 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:42:12.909 [INFO][76] felix/summary.go 100: Summarising 10 dataplane reconciliation loops over 1m2.8s: avg=10ms longest=42ms ()
2023-05-12 02:42:17.459 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:43:15.981 [INFO][76] felix/summary.go 100: Summarising 10 dataplane reconciliation loops over 1m3.1s: avg=9ms longest=20ms (resync-ipsets-v4)
2023-05-12 02:43:17.472 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:44:17.472 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:44:18.754 [INFO][76] felix/summary.go 100: Summarising 8 dataplane reconciliation loops over 1m2.8s: avg=9ms longest=22ms ()
2023-05-12 02:45:17.483 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:45:21.396 [INFO][76] felix/summary.go 100: Summarising 9 dataplane reconciliation loops over 1m2.6s: avg=7ms longest=15ms (resync-mangle-v4,resync-nat-v4)
2023-05-12 02:46:17.484 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:46:26.020 [INFO][76] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m4.6s: avg=8ms longest=13ms (resync-mangle-v4,resync-nat-v4)
2023-05-12 02:47:17.485 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:47:28.593 [INFO][76] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m2.6s: avg=10ms longest=31ms ()
2023-05-12 02:48:17.486 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:48:32.127 [INFO][76] felix/summary.go 100: Summarising 7 dataplane reconciliation loops over 1m3.5s: avg=5ms longest=6ms (resync-ipsets-v4)
2023-05-12 02:49:17.489 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:49:35.108 [INFO][76] felix/summary.go 100: Summarising 10 dataplane reconciliation loops over 1m3s: avg=9ms longest=20ms ()
2023-05-12 02:50:17.490 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:50:37.976 [INFO][76] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m2.9s: avg=8ms longest=13ms (resync-ipsets-v4)
2023-05-12 02:51:17.502 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:51:41.067 [INFO][76] felix/summary.go 100: Summarising 7 dataplane reconciliation loops over 1m3.1s: avg=5ms longest=6ms (resync-ipsets-v4)
2023-05-12 02:52:17.502 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:52:44.399 [INFO][76] felix/summary.go 100: Summarising 10 dataplane reconciliation loops over 1m3.3s: avg=9ms longest=23ms (resync-mangle-v4,resync-nat-v4)
2023-05-12 02:53:17.505 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:53:48.322 [INFO][76] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m3.9s: avg=8ms longest=14ms (resync-ipsets-v4)
2023-05-12 02:54:17.507 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:54:50.768 [INFO][76] felix/summary.go 100: Summarising 6 dataplane reconciliation loops over 1m2.4s: avg=6ms longest=9ms (resync-ipsets-v4)
2023-05-12 02:55:17.508 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:55:53.034 [INFO][76] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m2.3s: avg=7ms longest=11ms (resync-mangle-v4,resync-nat-v4)
2023-05-12 02:56:17.511 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:56:55.631 [INFO][76] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m2.6s: avg=7ms longest=12ms (resync-filter-v4)
2023-05-12 02:57:17.516 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:57:55.720 [INFO][76] felix/summary.go 100: Summarising 6 dataplane reconciliation loops over 1m0.1s: avg=12ms longest=26ms (resync-ipsets-v4)
2023-05-12 02:58:17.521 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 02:59:02.313 [INFO][76] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m6.6s: avg=8ms longest=21ms (resync-ipsets-v4)
2023-05-12 02:59:17.534 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 03:00:06.020 [INFO][76] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m3.7s: avg=9ms longest=19ms (resync-ipsets-v4)
2023-05-12 03:00:17.535 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 03:01:11.016 [INFO][76] felix/summary.go 100: Summarising 9 dataplane reconciliation loops over 1m5s: avg=12ms longest=17ms (resync-ipsets-v4)
2023-05-12 03:01:17.537 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 03:02:13.782 [INFO][76] felix/summary.go 100: Summarising 8 dataplane reconciliation loops over 1m2.8s: avg=14ms longest=34ms (resync-ipsets-v4)
2023-05-12 03:02:17.542 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 03:03:16.996 [INFO][76] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m3.2s: avg=22ms longest=52ms (resync-mangle-v4,resync-nat-v4)
2023-05-12 03:03:17.553 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 03:04:17.553 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 03:04:19.794 [INFO][76] felix/summary.go 100: Summarising 9 dataplane reconciliation loops over 1m2.8s: avg=11ms longest=23ms (resync-ipsets-v4)
2023-05-12 03:05:17.571 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 03:05:21.333 [INFO][76] felix/summary.go 100: Summarising 8 dataplane reconciliation loops over 1m1.5s: avg=11ms longest=30ms (resync-ipsets-v4)
2023-05-12 03:06:17.571 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 03:06:24.486 [INFO][76] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m3.2s: avg=14ms longest=44ms (resync-filter-v4)
2023-05-12 03:07:17.584 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 03:07:26.109 [INFO][76] felix/summary.go 100: Summarising 9 dataplane reconciliation loops over 1m1.6s: avg=9ms longest=24ms (resync-ipsets-v4)
2023-05-12 03:08:17.586 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 03:08:28.536 [INFO][76] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m2.4s: avg=7ms longest=13ms ()
2023-05-12 03:09:17.599 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 03:09:29.923 [INFO][76] felix/summary.go 100: Summarising 8 dataplane reconciliation loops over 1m1.4s: avg=8ms longest=16ms (resync-ipsets-v4)
2023-05-12 03:10:17.602 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 03:10:32.990 [INFO][76] felix/summary.go 100: Summarising 9 dataplane reconciliation loops over 1m3.1s: avg=8ms longest=12ms (resync-filter-v4)
2023-05-12 03:11:17.606 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 03:11:36.226 [INFO][76] felix/summary.go 100: Summarising 12 dataplane reconciliation loops over 1m3.2s: avg=9ms longest=21ms (resync-ipsets-v4)
2023-05-12 03:12:17.607 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 03:12:39.410 [INFO][76] felix/summary.go 100: Summarising 8 dataplane reconciliation loops over 1m3.2s: avg=13ms longest=43ms ()
2023-05-12 03:13:17.609 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 03:13:41.686 [INFO][76] felix/summary.go 100: Summarising 10 dataplane reconciliation loops over 1m2.3s: avg=7ms longest=10ms (resync-filter-v4)
2023-05-12 03:14:17.610 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 03:14:45.686 [INFO][76] felix/summary.go 100: Summarising 12 dataplane reconciliation loops over 1m4s: avg=10ms longest=18ms (resync-ipsets-v4)
2023-05-12 03:15:17.612 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 03:15:48.755 [INFO][76] felix/summary.go 100: Summarising 8 dataplane reconciliation loops over 1m3.1s: avg=13ms longest=21ms (resync-ipsets-v4)
2023-05-12 03:16:17.612 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 03:16:50.710 [INFO][76] felix/summary.go 100: Summarising 9 dataplane reconciliation loops over 1m2s: avg=9ms longest=19ms (resync-ipsets-v4)
2023-05-12 03:17:17.616 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 03:17:53.981 [INFO][76] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m3.3s: avg=17ms longest=43ms (resync-filter-v4)
2023-05-12 03:18:17.617 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 03:18:55.447 [INFO][76] felix/summary.go 100: Summarising 8 dataplane reconciliation loops over 1m1.5s: avg=10ms longest=27ms (resync-ipsets-v4)
2023-05-12 03:19:17.631 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 03:19:57.829 [INFO][76] felix/summary.go 100: Summarising 9 dataplane reconciliation loops over 1m2.4s: avg=8ms longest=24ms (resync-filter-v4)
2023-05-12 03:20:17.633 [INFO][83] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 192.168.56.61/24 on matching interface enp0s17
2023-05-12 03:21:00.527 [INFO][76] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m2.7s: avg=8ms longest=17ms (resync-ipsets-v4)
May 12 11:18:37 k8s-master01 kubelet[39085]: I0512 11:18:37.276291 39085 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/coredns-5db795bd57-z4257"
May 12 11:18:48 k8s-master01 kubelet[39085]: E0512 11:18:48.474993 39085 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="b25b1fe9f0eae615534102c4a2d370897be12abe05d26408e6baed4556c3d1f1"
May 12 11:18:48 k8s-master01 kubelet[39085]: E0512 11:18:48.475066 39085 kuberuntime_gc.go:177] "Failed to stop sandbox before removing" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" sandboxID="b25b1fe9f0eae615534102c4a2d370897be12abe05d26408e6baed4556c3d1f1"
May 12 11:19:08 k8s-master01 kubelet[39085]: I0512 11:19:08.296763 39085 logs.go:323] "Finished parsing log file" path="/var/log/pods/kube-system_calico-kube-controllers-6c7b6bf67-chfsq_13c76181-1240-441c-8389-cbaf380a2819/calico-kube-controllers/0.log"
May 12 11:20:37 k8s-master01 kubelet[39085]: E0512 11:20:37.277034 39085 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="83b3a3ab7adca0ae277c69ce510a56dad94c5681e0286ba3a1ad37311491cc48"
May 12 11:20:37 k8s-master01 kubelet[39085]: E0512 11:20:37.277145 39085 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:83b3a3ab7adca0ae277c69ce510a56dad94c5681e0286ba3a1ad37311491cc48}
May 12 11:20:37 k8s-master01 kubelet[39085]: E0512 11:20:37.277184 39085 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a9e2ddb-1aeb-48f2-8a9c-0cb5ade79301\" with KillPodSandboxError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\""
May 12 11:20:37 k8s-master01 kubelet[39085]: E0512 11:20:37.277215 39085 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a9e2ddb-1aeb-48f2-8a9c-0cb5ade79301\" with KillPodSandboxError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/coredns-5db795bd57-z4257" podUID=8a9e2ddb-1aeb-48f2-8a9c-0cb5ade79301
May 12 11:12:36 k8s-master01 containerd[765]: time="2023-05-12T11:12:36.118290033+08:00" level=info msg="StopPodSandbox for \"83b3a3ab7adca0ae277c69ce510a56dad94c5681e0286ba3a1ad37311491cc48\""
May 12 11:12:48 k8s-master01 containerd[765]: time="2023-05-12T11:12:48.470193895+08:00" level=info msg="StopPodSandbox for \"e5a57a198c8db6da14e9bd54743ba860a5cd8098f54b7f8ff3dea449dfd33656\""
May 12 11:12:48 k8s-master01 containerd[765]: time="2023-05-12T11:12:48.470596298+08:00" level=error msg="StopPodSandbox for \"71cc6ed31be3323b4bd9de431e8efbc121ed1bc30c4dae3d27e8cb99813815ed\" failed" error="failed to destroy network for sandbox \"71cc6ed31be3323b4bd9de431e8efbc121ed1bc30c4dae3d27e8cb99813815ed\": plugin type=\"calico\" failed (delete): netplugin failed with no error message: signal: killed"
May 12 11:14:36 k8s-master01 containerd[765]: time="2023-05-12T11:14:36.121028802+08:00" level=error msg="StopPodSandbox for \"83b3a3ab7adca0ae277c69ce510a56dad94c5681e0286ba3a1ad37311491cc48\" failed" error="failed to destroy network for sandbox \"83b3a3ab7adca0ae277c69ce510a56dad94c5681e0286ba3a1ad37311491cc48\": plugin type=\"calico\" failed (delete): netplugin failed with no error message: signal: killed"
May 12 11:14:36 k8s-master01 containerd[765]: time="2023-05-12T11:14:36.456484144+08:00" level=info msg="StopPodSandbox for \"83b3a3ab7adca0ae277c69ce510a56dad94c5681e0286ba3a1ad37311491cc48\""
May 12 11:14:48 k8s-master01 containerd[765]: time="2023-05-12T11:14:48.471362878+08:00" level=info msg="StopPodSandbox for \"1977065b951e106cb8f86a5a3f30d11dd2bd3993d8be4056ed80f7d480705db1\""
May 12 11:14:48 k8s-master01 containerd[765]: time="2023-05-12T11:14:48.471545534+08:00" level=error msg="StopPodSandbox for \"e5a57a198c8db6da14e9bd54743ba860a5cd8098f54b7f8ff3dea449dfd33656\" failed" error="failed to destroy network for sandbox \"e5a57a198c8db6da14e9bd54743ba860a5cd8098f54b7f8ff3dea449dfd33656\": plugin type=\"calico\" failed (delete): netplugin failed with no error message: signal: killed"
May 12 11:16:36 k8s-master01 containerd[765]: time="2023-05-12T11:16:36.458864058+08:00" level=error msg="StopPodSandbox for \"83b3a3ab7adca0ae277c69ce510a56dad94c5681e0286ba3a1ad37311491cc48\" failed" error="failed to destroy network for sandbox \"83b3a3ab7adca0ae277c69ce510a56dad94c5681e0286ba3a1ad37311491cc48\": plugin type=\"calico\" failed (delete): netplugin failed with no error message: signal: killed"
Maybe it's not a bug, but I don't know how to trace the root cause.
Can anyone take a look at this issue?
The logs we need for this will be from the CNI plugin, not from calico-node or kube-controllers. Those are generally found in /var/log/calico/cni on the host machine.
The /var/log/calico/cni directory on the host is empty.
Does your CNI configuration at /etc/cni/net.d/10-calico.conflist include a log directory setting?
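A quick way to answer that is to grep the conflist for the logging keys. A minimal, self-contained sketch (the embedded JSON is a sample; on a real node, point CONFLIST at /etc/cni/net.d/10-calico.conflist instead):

```shell
# Check a Calico conflist for file-logging settings.
CONFLIST=$(mktemp)
cat > "$CONFLIST" <<'EOF'
{
  "name": "k8s-pod-network",
  "plugins": [
    { "type": "calico", "log_level": "info",
      "log_file_path": "/var/log/calico/cni/cni.log" }
  ]
}
EOF
# Print the configured log file path, or report that none is set.
grep -o '"log_file_path": *"[^"]*"' "$CONFLIST" || echo "no log_file_path configured"
rm -f "$CONFLIST"
```

If `log_file_path` is absent from the conflist, the plugin will not write file logs at all, regardless of directory permissions.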
@caseydavenport The /var/log/calico/cni directory is empty because of the calico-node pod specification. This can be observed in the manifest for all recent versions of Calico:
- mountPath: /var/log/calico/cni
name: cni-log-dir
readOnly: true
The directory is mounted read-only inside the pod:
[calico-node-pod /]# grep /var/log/calico /proc/mounts
/dev/mapper/centos-root /var/log/calico/cni xfs ro,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
Below are the logs related to accessing the read-only (ro) directory "/var/log/calico/cni":
touch: cannot touch '/var/log/calico/cni/config': Read-only file system
./run: line 5: /var/log/calico/cni/config: Read-only file system
./run: line 6: /var/log/calico/cni/config: Read-only file system
svlogd: warning: unable to lock directory: /var/log/calico/cni: read-only file system
svlogd: fatal: no functional log directories.
> The /var/log/calico/cni directory is empty because of the calico-node pod specification. This can be observed in the manifest for all recent versions of Calico.
I'm not sure why that directory is mounted into calico/node at all to be honest.
The logs that I am referring to aren't written by calico/node, they are written by the Calico CNI plugin which is executed on the host and doesn't use a volume mount to access that directory.
In this scenario, there are two potential issues to address concerning logging. After removing the readOnly: true parameter from the manifest/pod specification, the following changes are observed on the host side:
host # ll /var/log/calico/cni/
total 28
-rw-r--r-- 1 root root 691 Jun 1 16:20 @400000006478c8672dd47d94.u
-rw-r--r-- 1 root root 693 Jun 1 16:33 @400000006478cac206ba4c34.u
-rw-r--r-- 1 root root 691 Jun 1 16:43 @400000006478cd143104f534.u
-rw-r--r-- 1 root root 693 Jun 1 16:53 @400000006478cf2a1745b9bc.u
-rw-r--r-- 1 root root 693 Jun 1 17:02 @400000006478d22434f47afc.u
-rw-r--r-- 1 root root 117 Jun 1 17:15 config
-rw-r--r-- 1 root root 693 Jun 1 17:15 current
-rw------- 1 root root 0 Jun 1 12:27 lock
I also encountered the same problem. Is there any solution?
We'll need logs from the CNI plugin (if it's emitting them) to find out why this is occurring
netplugin failed with no error message: signal: killed"
This suggests that something is killing the CNI plugin before it can return a response. Looking into kernel logs, or at any processes on the cluster that might be interfering with the execution of a privileged binary (e.g., seccomp), would be another avenue to explore.
Did you set up a proxy for containerd (or leave no_proxy unset on the containerd service)? When my company needed a proxy to reach the Internet via http_proxy, I hit this issue: containerd also used the https_proxy when creating new pods, and because it did not bypass the service-network-cidr / pod-network-cidr, it even reached the kube-apiserver through the proxy. That seemed ridiculous to me; anyway, I wasted two days resolving it.
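For reference, the bypass described above is usually configured as a systemd drop-in on the containerd service. This is only a sketch: the file path is the conventional one, and the proxy URL and CIDRs below are examples, not values from this cluster.

```ini
; Example path: /etc/systemd/system/containerd.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
; NO_PROXY must cover the service CIDR, pod CIDR, and node/apiserver addresses,
; otherwise containerd routes sandbox and kube-apiserver traffic through the proxy.
Environment="NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,10.244.0.0/16,192.168.56.0/24,.svc,.cluster.local"
```

After editing the drop-in, run `systemctl daemon-reload` and `systemctl restart containerd` for it to take effect.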
@j13tw what did you do to fix? Could you share with us?
When deploying CoreDNS, the pod hangs in ContainerCreating forever.
Expected Behavior
The CoreDNS pod should be in the Running state.
Current Behavior
Possible Solution
Steps to Reproduce (for bugs)
Context
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
Source: calico/templates/calico-kube-controllers.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system
Source: calico/templates/calico-node.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system
Source: calico/templates/calico-etcd-secrets.yaml
The following contains k8s Secrets for use with a TLS enabled etcd cluster.
For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
Populate the following with etcd TLS configuration if desired, but leave blank if
not using TLS for etcd.
The keys below should be uncommented and the values populated with the base64
encoded contents of each file that would be associated with the TLS data.
Example command for encoding a file's contents: cat <file> | base64 -w 0
etcd-cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNSVENDQWVxZ0F3SUJBZ0lVSGZySmZEM3NNUk9PVGhuajNIaU52WHpkYkJvd0NnWUlLb1pJemowRUF3SXcKRHpFTk1Bc0dBMVVFQXhNRVpYUmpaREFlRncweU16QTFNRFV3TlRRMU1EQmFGdzB6TXpBMU1ESXdOVFExTURCYQpNR1V4Q3pBSkJnTlZCQVlUQWtOT01SQXdEZ1lEVlFRSUV3ZENaV2xLYVc1bk1SQXdEZ1lEVlFRSEV3ZENaV2xLCmFXNW5NUTB3Q3dZRFZRUUtFd1JGZEdOa01RMHdDd1lEVlFRTEV3UkZkR05rTVJRd0VnWURWUVFERXd0bGRHTmsKTFhObGNuWmxjakJaTUJNR0J5cUdTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUFCRlcxRzkyMGxGMmErR0hOTk9uZApMdWdCZVdiM01OemV5UDhoaDl1ZnZBSHZwRXpUUHp0SWcrVHdtejVkWEJnSWpOOE93djJNWjdTdVVxNEoxSFZUClF5Q2pnYzB3Z2Nvd0RnWURWUjBQQVFIL0JBUURBZ1dnTUIwR0ExVWRKUVFXTUJRR0NDc0dBUVVGQndNQkJnZ3IKQmdFRkJRY0RBakFNQmdOVkhSTUJBZjhFQWpBQU1CMEdBMVVkRGdRV0JCUnVrQ1oxNG80cXY4UjlpRmFyTHkzQwovMVFUM1RBZkJnTlZIU01FR0RBV2dCVFNxVkFVWFRSMmllVElEMkN2WVBhcnlhQVYzakJMQmdOVkhSRUVSREJDCmdneHJPSE10YldGemRHVnlNREdDREdzNGN5MXRZWE4wWlhJd01vSU1hemh6TFcxaGMzUmxjakF6aHdSL0FBQUIKaHdUQXFEZzlod1RBcURnK2h3VEFxRGcvTUFvR0NDcUdTTTQ5QkFNQ0Ewa0FNRVlDSVFEQk5ueFJkbmp1T0ZCeApCdHRhdEp4Y3ZNcnA1NWxDUVBVdWdZdnRXYmJMSWdJaEFOU29ud1RFYkNTbVN1WjA5U2pUb09GWjZVSmxUbEJLCnN5b2lJdFR4VXJ4ZQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== etcd-key: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUIzd2xhQUVvVkJlbnl5R1Q5aTZTQzRDdkVET2cwb3lkb2Z4dlZIckZ3TkZvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFVmJVYjNiU1VYWnI0WWMwMDZkMHU2QUY1WnZjdzNON0kveUdIMjUrOEFlK2tUTk0vTzBpRAo1UENiUGwxY0dBaU0zdzdDL1l4bnRLNVNyZ25VZFZORElBPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo= etcd-ca: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJZVENDQVFpZ0F3SUJBZ0lVZHhKVSs2MVV4Z3dEUjFmQ0xSUytnQk93Sndzd0NnWUlLb1pJemowRUF3SXcKRHpFTk1Bc0dBMVVFQXhNRVpYUmpaREFlRncweU16QTFNRFV3TlRRMU1EQmFGdzB6TXpBMU1ESXdOVFExTURCYQpNQTh4RFRBTEJnTlZCQU1UQkdWMFkyUXdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBUWwwdlNGCjEvM1k3UllQSFN6bkhOYlcyNzVPdDNpOXg1R0hocVNPekd5bnBNYUw0akwzMDdObCtqcjM1anZQMStOaG1SUVUKbXFlLzk5bXZBRDB3clAyT28wSXdRREFPQmdOVkhROEJBZjhFQkFNQ0FRWXdEd1lEVlIwVEFRSC9CQVV3QXdFQgovekFkQmdOVkhRNEVGZ1FVMHFsUUZGMDBkb25reUE5Z3IyRDJxOG1nRmQ0d0NnWUlLb1pJemowRUF3SURSd0F3ClJBSWdhYmhGV0QvSnE5QXprbUloTklYK2RoVTNBWXdObXpIMnJ2VVNpeDMwZkc4Q0lEMG5KbmhqVCswbk5oanUKM0w0V0NiQUsrY0I3dzZydUN0NEZoaTg0blQvdAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
Source: calico/templates/calico-config.yaml
This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
Configure this with the location of your etcd cluster.
etcd_endpoints: "https://192.168.56.61:2379"
If you're using TLS enabled etcd uncomment the following.
You must also populate the Secret below with these files.
etcd_ca: "/calico-secrets/etcd-ca"
etcd_cert: "/calico-secrets/etcd-cert"
etcd_key: "/calico-secrets/etcd-key"
Typha is disabled.
typha_service_name: "none"
Configure the backend to use.
calico_backend: "bird"
# calico_backend: "vxlan"
Configure the MTU to use for workload interfaces and tunnels.
By default, MTU is auto-detected, and explicitly setting this field should not be required.
You can override auto-detection by providing a non-zero value.
veth_mtu: "0"
The CNI network configuration to install on each node. The special
values in this config will be automatically populated.
cni_network_config: |-
  {
    "name": "k8s-pod-network",
    "cniVersion": "0.3.1",
    "plugins": [
      {
        "type": "calico",
        "log_level": "info",
        "log_file_path": "/var/log/calico/cni/cni.log",
        "etcd_endpoints": "__ETCD_ENDPOINTS__",
        "etcd_key_file": "__ETCD_KEY_FILE__",
        "etcd_cert_file": "__ETCD_CERT_FILE__",
        "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
        "mtu": __CNI_MTU__,
        "ipam": {
            "type": "calico-ipam"
        },
        "policy": {
            "type": "k8s"
        },
        "kubernetes": {
            "kubeconfig": "__KUBECONFIG_FILEPATH__"
        }
      },
      {
        "type": "portmap",
        "snat": true,
        "capabilities": {"portMappings": true}
      },
      {
        "type": "bandwidth",
        "capabilities": {"bandwidth": true}
      }
    ]
  }
Source: calico/templates/calico-kube-controllers-rbac.yaml
Include a clusterrole for the kube-controllers component,
and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
rules:
Pods are monitored for changing labels.
The node controller monitors Kubernetes nodes.
Namespace and serviceaccount labels are used for policy.
Watch for changes to Kubernetes NetworkPolicies.
Source: calico/templates/calico-node-rbac.yaml
Include a clusterrole for the calico-node DaemonSet,
and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-node
rules:
Used for creating service account tokens to be used by the CNI plugin
The CNI plugin needs to get pods, nodes, and namespaces.
EndpointSlices are used for Service-based network policy rule
enforcement.
Used to discover service IPs for advertisement.
Pod CIDR auto-detection on kubeadm needs access to config maps.
Needed for clearing NodeNetworkUnavailable flag.
Source: calico/templates/calico-kube-controllers-rbac.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers
subjects:
- kind: ServiceAccount
  name: calico-kube-controllers
  namespace: kube-system
Source: calico/templates/calico-node-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system
Source: calico/templates/calico-node.yaml
This manifest installs the calico-node container, as well
as the CNI plugins and network config on
each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
Make sure calico-node gets scheduled on all nodes.
Mark the pod as a critical add-on for rescheduling.
Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
terminationGracePeriodSeconds: 0
priorityClassName: system-node-critical
initContainers:
This container installs the CNI binaries
and CNI network config file on each node.
Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
envFrom:
- configMapRef:
    name: kubernetes-services-endpoint
    optional: true
env:
- name: KUBERNETES_PORT_443_TCP_ADDR
value: "192.168.56.160"
- name: KUBERNETES_PORT_443_TCP_PROTO
value: "tcp"
- name: KUBERNETES_PORT_443_TCP_PORT
value: "8443"
- name: KUBERNETES_PORT
value: "tcp://192.168.56.160:8443"
- name: KUBERNETES_PORT_443_TCP
value: "tcp://192.168.56.160:8443"
Name of the CNI config file to create.
The CNI network config to install on each node.
The location of the etcd cluster.
CNI MTU Config variable
Prevents the container from sleeping forever.
        # This init container mounts the necessary filesystems needed by the BPF data plane,
        # i.e. bpf at /sys/fs/bpf and cgroup2 at /run/calico/cgroup. Calico-node initialisation is executed
        # in best-effort fashion, i.e. no failure on errors, so as not to disrupt pod creation in iptables mode.
            # Bidirectional is required to ensure that the new mount we make at /sys/fs/bpf propagates to the host
            # so that it outlives the init container.
            mountPropagation: Bidirectional
            # Bidirectional is required to ensure that the new mount we make at /run/calico/cgroup propagates to the host
            # so that it outlives the init container.
            mountPropagation: Bidirectional
          # Mount /proc/ from the host (which is usually an init program) at /nodeproc. It's needed by the mountns binary,
          # executed by calico-node, to mount the root cgroup2 fs at /run/calico/cgroup to attach CTLB programs correctly.
      # Runs calico-node container on each Kubernetes node. This
      # container programs network policy and routes on each
      # host.
          envFrom:
          - configMapRef:
              # Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
              name: kubernetes-services-endpoint
              optional: true
          env:
- name: KUBERNETES_PORT_443_TCP_ADDR
value: "192.168.56.160"
- name: KUBERNETES_PORT_443_TCP_PROTO
value: "tcp"
- name: KUBERNETES_PORT_443_TCP_PORT
value: "8443"
- name: KUBERNETES_PORT
value: "tcp://192.168.56.160:8443"
- name: KUBERNETES_PORT_443_TCP
value: "tcp://192.168.56.160:8443"
            # The location of the etcd cluster.
            # Location of the CA certificate for etcd.
            # Location of the client key for etcd.
            # Location of the client certificate for etcd.
            # Set noderef for node controller.
            # Choose the backend to use.
            # Cluster type to identify the deployment type
            # Auto-detect the BGP IP address.
            # Auto-detect the BGP IP address.
            # Enable IPIP
              value: "Never"
            # Enable or Disable VXLAN on the default IP pool.
              value: "Never"
              value: "always"
            # Enable or Disable VXLAN on the default IPv6 IP pool.
            # Set MTU for tunnel device used if ipip is enabled
            # Set MTU for the VXLAN tunnel device.
            # Set MTU for the Wireguard tunnel device.
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            # Disable file logging so `kubectl logs` works.
            # Set Felix endpoint to host default action to ACCEPT.
            # Disable IPv6 on Kubernetes.
        # For maintaining CNI plugin API credentials.
        # For eBPF mode, we need to be able to mount the BPF filesystem at /sys/fs/bpf so we mount in the
        # parent directory.
        # Used by calico-node.
      # Mount /proc at /nodeproc to be used by the mount-bpffs initContainer to mount the root cgroup2 fs.
      # Used to install CNI.
      # Used to access CNI logs.
      # Mount in the etcd TLS secrets with mode 400.
      # See https://kubernetes.io/docs/concepts/configuration/secret/
      # Used to create per-pod Unix Domain Sockets
      - name: policysync
        hostPath:
          type: DirectoryOrCreate
          path: /var/run/nodeagent
---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
    spec:
      nodeSelector:
        kubernetes.io/os: linux
        node-role.kubernetes.io/control-plane: control-plane
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
      # The controllers must run in the host network namespace so that
      # they aren't governed by policy that would prevent them from working.
      hostNetwork: true
      containers:
- name: KUBERNETES_PORT_443_TCP_ADDR
value: "192.168.56.160"
- name: KUBERNETES_PORT_443_TCP_PROTO
value: "tcp"
- name: KUBERNETES_PORT_443_TCP_PORT
value: "8443"
- name: KUBERNETES_PORT
value: "tcp://192.168.56.160:8443"
- name: KUBERNETES_PORT_443_TCP
value: "tcp://192.168.56.160:8443"
            # The location of the etcd cluster.
            # Location of the CA certificate for etcd.
            # Location of the client key for etcd.
            # Location of the client certificate for etcd.
            # Choose which controllers to run.
          # Mount in the etcd TLS secrets.
      # Mount in the etcd TLS secrets with mode 400.
      # See https://kubernetes.io/docs/concepts/configuration/secret/
Your Environment
ALMALINUX_MANTISBT_PROJECT="AlmaLinux-9"
ALMALINUX_MANTISBT_PROJECT_VERSION="9.1"
REDHAT_SUPPORT_PRODUCT="AlmaLinux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.1"
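For reference, these are the AlmaLinux-specific keys from the node's `/etc/os-release`. A minimal sketch of how to filter them (the reported values are inlined here so the snippet is self-contained; on the node itself you would grep the real file):

```shell
# On the node: grep -E '^(ALMALINUX|REDHAT_SUPPORT)' /etc/os-release
# Self-contained equivalent using the values reported above:
printf '%s\n' \
  'ALMALINUX_MANTISBT_PROJECT="AlmaLinux-9"' \
  'ALMALINUX_MANTISBT_PROJECT_VERSION="9.1"' \
  'REDHAT_SUPPORT_PRODUCT="AlmaLinux"' \
  'REDHAT_SUPPORT_PRODUCT_VERSION="9.1"' \
  | grep -E '^REDHAT_SUPPORT'
```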