kubernetes / kops

Kubernetes Operations (kOps) - Production Grade k8s Installation, Upgrades and Management
https://kops.sigs.k8s.io/
Apache License 2.0

Kops cluster upgrade from 1.27 to 1.29 and Kubernetes from 1.27.4 to 1.29.5: nodes going to NotReady state #16638

Open hitesh-dev19 opened 4 days ago

hitesh-dev19 commented 4 days ago

/kind bug

1. What kops version are you running? The command kops version will display this information.

 Client version: 1.27.0 (git-v1.27.0)

2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.

 / # kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.27.4
Kustomize Version: v5.0.1

Example output before the upgrade:

kubectl get nodes -o wide
NAME     STATUS   ROLES                  AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
******   Ready    control-plane,master   6d20h   v1.27.4   *****         <none>        Amazon Linux 2   5.10.217-205.860.amzn2.x86_64   containerd://1.6.6
******   Ready    node                   6d20h   v1.27.4   *****         <none>        Amazon Linux 2   5.10.217-205.860.amzn2.x86_64   containerd://1.6.6

3. What cloud provider are you using?

AWS

4. What commands did you run? What is the simplest way to reproduce this issue?

kops edit cluster --name <cluster-name>

Updated:

  kubernetesVersion: 1.29.5

Added this under spec:

  nodeTerminationHandler:
    enableRebalanceMonitoring: false
    enableSQSTerminationDraining: false

kops30 update cluster <cluster-name> --v 10 --yes
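
To double-check that the edit actually landed in the stored spec before updating, something like this works (the grep pattern is illustrative; field names are per the kops ClusterSpec):

```sh
# Verify kubernetesVersion and the nodeTerminationHandler flags in the cluster spec
kops get cluster --name <cluster-name> -o yaml \
  | grep -E -A3 'kubernetesVersion|nodeTerminationHandler'
```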

5. What happened after the commands executed? After the upgrade completed successfully, we terminated the nodes. All of the nodes rejoined the cluster, but all of them are in NotReady state.

kubectl get nodes -o wide                                                                                                      
NAME   STATUS     ROLES                  AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
XXXX   NotReady   node                   18h   v1.29.5   10.35.27.251   <none>        Amazon Linux 2   5.10.217-205.860.amzn2.x86_64   containerd://1.7.16
XXXX   NotReady   node                   18h   v1.29.5   10.35.24.146   <none>        Amazon Linux 2   5.10.217-205.860.amzn2.x86_64   containerd://1.7.16
XXXX   NotReady   node                   18h   v1.29.5   10.35.27.168   <none>        Amazon Linux 2   5.10.217-205.860.amzn2.x86_64   containerd://1.7.16
XXXX   NotReady   control-plane,master   18h   v1.29.5   10.35.27.242   <none>        Amazon Linux 2   5.10.217-205.860.amzn2.x86_64   containerd://1.7.16
XXXX   NotReady   node                   18h   v1.29.5   10.35.24.170   <none>        Amazon Linux 2   5.10.217-205.860.amzn2.x86_64   containerd://1.7.16
kubectl get pods '--field-selector=status.phase!=Running' --all-namespaces                                             
NAMESPACE              NAME                                                             READY   STATUS                  RESTARTS          AGE
kube-system            calico-kube-controllers-658dd87fb-hqpdx                          0/1     Pending                 0                 18h
kube-system            calico-node-4sjb2                                                0/1     Init:CrashLoopBackOff   218 (2m54s ago)   18h
kube-system            calico-node-77wd8                                                0/1     Init:CrashLoopBackOff   218 (4m8s ago)    18h
kube-system            calico-node-lvwmv                                                0/1     Init:CrashLoopBackOff   218 (4m19s ago)   18h
kube-system            calico-node-vdk69                                                0/1     Init:CrashLoopBackOff   218 (2m30s ago)   18h
kube-system            calico-node-vp8tt                                                0/1     Init:CrashLoopBackOff   218 (2m37s ago)   18h
kube-system            coredns-555fc79d84-5h725                                         0/1     Pending                 0                 18h
kube-system            coredns-autoscaler-74dd49dbd6-86wwb                              0/1     Pending                 0                 18h
kube-system            coredns-d4bb74bf4-6xr78                                          0/1     Pending                 0                 18h
kube-system            coredns-d4bb74bf4-jbgf9                                          0/1     Pending                 0                 18h
kube-system            ebs-csi-node-6ml48                                               0/3     ContainerCreating       0                 18h
kube-system            ebs-csi-node-bhwxb                                               0/3     ContainerCreating       0                 18h
kube-system            ebs-csi-node-d9mgh                                               0/3     ContainerCreating       0                 18h
kube-system            ebs-csi-node-tp7k6                                               0/3     ContainerCreating       0                 18h
kube-system            ebs-csi-node-ttxrj                                               0/3     ContainerCreating       0                 18h
kube-system            efs-csi-controller-5d65df8df5-kdg5z                              0/3     Pending                 0                 18h
kube-system            efs-csi-controller-5d65df8df5-z6x6d                              0/3     Pending                 0                 18h
kube-system            metrics-server-d47b5f594-dsk9c                                   0/1     Pending                 0                 18h
kube-system            npd-node-problem-detector-6bz7n                                  0/1     ContainerCreating       0                 18h
kube-system            npd-node-problem-detector-6mrgt                                  0/1     ContainerCreating       0                 18h
kube-system            npd-node-problem-detector-bhq76                                  0/1     ContainerCreating       0                 18h
kube-system            npd-node-problem-detector-djxhj                                  0/1     ContainerCreating       0                 18h
kube-system            npd-node-problem-detector-v85tr                                  0/1     ContainerCreating       0                 18h
kube-system            prodcluster-autoscaler-aws-cluster-autoscaler-856dc4f5d7-nptsd   0/1     Pending                 0                 18h
kubernetes-dashboard   dashboard-metrics-scraper-7d7cbbc9f-fqnf9                        0/1     Pending                 0                 18h
kubernetes-dashboard   kubernetes-dashboard-76f85465b4-8srb5                            0/1     Pending                 0                 18h
kubernetes-dashboard   kubernetes-dashboard-proxy-9f7ff679-q5299                        0/1     Pending                 0                 18h
nginx-alb              nginx-alb-ingress-nginx-controller-7b99bb6dd8-w97dn              0/1     Pending                 0                 18h
nginx-alb              nginx-alb-ingress-nginx-defaultbackend-c4698bdfb-m5nd4           0/1     Pending                 0                 18h
prom-operator          prometheus-promstack-kube-prometheus-prometheus-0                0/2     Pending                 0                 18h
prom-operator          promstack-kube-prometheus-operator-5ff8bbfcf4-qzrj5              0/1     Pending                 0                 18h
prom-operator          promstack-kube-state-metrics-d6447b896-sjpft                     0/1     Pending                 0                 18h
Snapshot from kubectl describe node on the control-plane node:
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 26 Jun 2024 16:44:29 -0400   Tue, 25 Jun 2024 22:31:52 -0400   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 26 Jun 2024 16:44:29 -0400   Tue, 25 Jun 2024 22:31:52 -0400   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 26 Jun 2024 16:44:29 -0400   Tue, 25 Jun 2024 22:31:52 -0400   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Wed, 26 Jun 2024 16:44:29 -0400   Tue, 25 Jun 2024 22:31:52 -0400   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
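
Every node shows the same NetworkPluginNotReady reason. One way to dump the Ready condition across all nodes (using kubectl's jsonpath filter syntax):

```sh
# Node name plus the message on its Ready condition
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].message}{"\n"}{end}'
```
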
Logs from the install-cni container of one of the calico-node pods:
 kubectl logs calico-node-4sjb2 -n kube-system -c install-cni                                                            
2024-06-26 20:47:53.780 [INFO][1] cni-installer/<nil> <nil>: Running as a Kubernetes pod
2024-06-26 20:47:53.784 [INFO][1] cni-installer/<nil> <nil>: File is already up to date, skipping file="/host/opt/cni/bin/bandwidth"
2024-06-26 20:47:53.784 [INFO][1] cni-installer/<nil> <nil>: Installed /host/opt/cni/bin/bandwidth
2024-06-26 20:47:53.830 [INFO][1] cni-installer/<nil> <nil>: File is already up to date, skipping file="/host/opt/cni/bin/calico"
2024-06-26 20:47:53.830 [INFO][1] cni-installer/<nil> <nil>: Installed /host/opt/cni/bin/calico
2024-06-26 20:47:53.873 [INFO][1] cni-installer/<nil> <nil>: File is already up to date, skipping file="/host/opt/cni/bin/calico-ipam"
2024-06-26 20:47:53.873 [INFO][1] cni-installer/<nil> <nil>: Installed /host/opt/cni/bin/calico-ipam
2024-06-26 20:47:53.875 [INFO][1] cni-installer/<nil> <nil>: File is already up to date, skipping file="/host/opt/cni/bin/flannel"
2024-06-26 20:47:53.875 [INFO][1] cni-installer/<nil> <nil>: Installed /host/opt/cni/bin/flannel
2024-06-26 20:47:53.877 [INFO][1] cni-installer/<nil> <nil>: File is already up to date, skipping file="/host/opt/cni/bin/host-local"
2024-06-26 20:47:53.877 [INFO][1] cni-installer/<nil> <nil>: Installed /host/opt/cni/bin/host-local
2024-06-26 20:47:53.880 [INFO][1] cni-installer/<nil> <nil>: File is already up to date, skipping file="/host/opt/cni/bin/loopback"
2024-06-26 20:47:53.880 [INFO][1] cni-installer/<nil> <nil>: Installed /host/opt/cni/bin/loopback
2024-06-26 20:47:53.883 [INFO][1] cni-installer/<nil> <nil>: File is already up to date, skipping file="/host/opt/cni/bin/portmap"
2024-06-26 20:47:53.883 [INFO][1] cni-installer/<nil> <nil>: Installed /host/opt/cni/bin/portmap
2024-06-26 20:47:53.886 [INFO][1] cni-installer/<nil> <nil>: File is already up to date, skipping file="/host/opt/cni/bin/tuning"
2024-06-26 20:47:53.886 [INFO][1] cni-installer/<nil> <nil>: Installed /host/opt/cni/bin/tuning
2024-06-26 20:47:53.886 [INFO][1] cni-installer/<nil> <nil>: Wrote Calico CNI binaries to /host/opt/cni/bin

2024-06-26 20:47:53.932 [INFO][1] cni-installer/<nil> <nil>: CNI plugin version: v3.27.3
2024-06-26 20:47:53.932 [INFO][1] cni-installer/<nil> <nil>: /host/secondary-bin-dir is not writeable, skipping
2024-06-26 20:47:53.932 [WARNING][1] cni-installer/<nil> <nil>: Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
2024-06-26 20:47:53.937 [ERROR][1] cni-installer/<nil> <nil>: Unable to create token for CNI kubeconfig error=Post "https://172.21.0.1:443/api/v1/namespaces/kube-system/serviceaccounts/calico-cni-plugin/token": tls: failed to verify certificate: x509: certificate is valid for 100.64.0.1, 127.0.0.1, not 172.21.0.1
2024-06-26 20:47:53.937 [FATAL][1] cni-installer/<nil> <nil>: Unable to create token for CNI kubeconfig error=Post "https://172.21.0.1:443/api/v1/namespaces/kube-system/serviceaccounts/calico-cni-plugin/token": tls: failed to verify certificate: x509: certificate is valid for 100.64.0.1, 127.0.0.1, not 172.21.0.1
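
This last error looks like the crux: the certificate served at the Service IP 172.21.0.1 only carries 100.64.0.1 and 127.0.0.1 as valid names. A quick way to confirm the mismatch from a node (a sketch, assuming openssl is installed there; 172.21.0.1 is taken from the error above):

```sh
# ClusterIP that in-cluster clients (including Calico) use for the API server
kubectl -n default get service kubernetes -o jsonpath='{.spec.clusterIP}'; echo

# SANs actually present in the certificate served at that IP
echo | openssl s_client -connect 172.21.0.1:443 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
```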

6. What did you expect to happen?

Expected the cluster to upgrade to the latest version cleanly. We also see that containerd was upgraded to 1.7.16 and Calico to 3.27.3; both are managed directly by kops.
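
For reference, both versions can be read back from the cluster (treating container index 0 for the calico-node image as an assumption about the DaemonSet layout):

```sh
# Container runtime version as reported by the first node
kubectl get nodes -o jsonpath='{.items[0].status.nodeInfo.containerRuntimeVersion}'; echo

# Image of the calico-node DaemonSet (assuming calico-node is the first container)
kubectl -n kube-system get ds calico-node \
  -o jsonpath='{.spec.template.spec.containers[0].image}'; echo
```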

7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.

8. Please run the commands with most verbose logging by adding the -v 10 flag. Paste the logs into this report, or in a gist and provide the gist link here.

9. Anything else we need to know?

Some other logs follow: kube-proxy first, then a line from journalctl -u kubelet.

Kube-proxy logs:

I0627 21:25:00.531768      12 flags.go:64] FLAG: --logging-format="text"
I0627 21:25:00.531774      12 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
I0627 21:25:00.531781      12 flags.go:64] FLAG: --masquerade-all="false"
I0627 21:25:00.531786      12 flags.go:64] FLAG: --master="https://127.0.0.1"
I0627 21:25:00.531791      12 flags.go:64] FLAG: --metrics-bind-address="127.0.0.1:10249"
I0627 21:25:00.531797      12 flags.go:64] FLAG: --metrics-port="10249"
I0627 21:25:00.531803      12 flags.go:64] FLAG: --nodeport-addresses="[]"
I0627 21:25:00.531812      12 flags.go:64] FLAG: --oom-score-adj="-998"
I0627 21:25:00.531821      12 flags.go:64] FLAG: --pod-bridge-interface=""
I0627 21:25:00.531826      12 flags.go:64] FLAG: --pod-interface-name-prefix=""
I0627 21:25:00.531831      12 flags.go:64] FLAG: --profiling="false"
I0627 21:25:00.531836      12 flags.go:64] FLAG: --proxy-mode="iptables"
I0627 21:25:00.531847      12 flags.go:64] FLAG: --proxy-port-range=""
I0627 21:25:00.531857      12 flags.go:64] FLAG: --show-hidden-metrics-for-version=""
I0627 21:25:00.531862      12 flags.go:64] FLAG: --v="2"
I0627 21:25:00.531869      12 flags.go:64] FLAG: --version="false"
I0627 21:25:00.531877      12 flags.go:64] FLAG: --vmodule=""
I0627 21:25:00.531883      12 flags.go:64] FLAG: --write-config-to=""
I0627 21:25:00.531994      12 feature_gate.go:249] feature gates: &{map[]}
E0627 21:25:00.533624      12 server.go:1039] "Failed to retrieve node info" err="Get \"https://127.0.0.1/api/v1/nodes/i-06af547b7c16506ba\": dial tcp 127.0.0.1:443: connect: connection refused"
E0627 21:25:01.552935      12 server.go:1039] "Failed to retrieve node info" err="Get \"https://127.0.0.1/api/v1/nodes/i-06af547b7c16506ba\": dial tcp 127.0.0.1:443: connect: connection refused"
E0627 21:25:03.756997      12 server.go:1039] "Failed to retrieve node info" err="Get \"https://127.0.0.1/api/v1/nodes/i-06af547b7c16506ba\": dial tcp 127.0.0.1:443: connect: connection refused"
E0627 21:25:18.409940      12 server.go:1039] "Failed to retrieve node info" err="Get \"https://127.0.0.1/api/v1/nodes/i-06af547b7c16506ba\": net/http: TLS handshake timeout"
E0627 21:25:33.571922      12 server.go:1039] "Failed to retrieve node info" err="nodes \"i-06af547b7c16506ba\" is forbidden: User \"system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope"
E0627 21:25:50.699933      12 server.go:1044] "Failed to retrieve node IPs" err="host IP unknown; known addresses: []"
I0627 21:25:50.699960      12 server.go:1020] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
I0627 21:25:50.701716      12 conntrack.go:118] "Set sysctl" entry="net/netfilter/nf_conntrack_max" value=524288
I0627 21:25:50.701755      12 conntrack.go:58] "Setting nf_conntrack_max" nfConntrackMax=524288
I0627 21:25:50.702191      12 conntrack.go:89] "Setting conntrack hashsize" conntrackHashsize=131072
I0627 21:25:50.715634      12 conntrack.go:118] "Set sysctl" entry="net/netfilter/nf_conntrack_tcp_timeout_established" value=86400
I0627 21:25:50.715711      12 conntrack.go:118] "Set sysctl" entry="net/netfilter/nf_conntrack_tcp_timeout_close_wait" value=3600
I0627 21:25:50.749738      12 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0627 21:25:50.749761      12 server_others.go:168] "Using iptables Proxier"
I0627 21:25:50.751541      12 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
I0627 21:25:50.751555      12 server_others.go:529] "Defaulting to no-op detect-local"
I0627 21:25:50.751579      12 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0627 21:25:50.751702      12 utils.go:290] "Changed sysctl" name="net/ipv4/conf/all/route_localnet" before=0 after=1
I0627 21:25:50.751756      12 proxier.go:268] "Using iptables mark for masquerade" ipFamily="IPv4" mark="0x00004000"
I0627 21:25:50.751793      12 proxier.go:304] "Iptables sync params" ipFamily="IPv4" minSyncPeriod="1s" syncPeriod="30s" burstSyncs=2
I0627 21:25:50.751829      12 proxier.go:314] "Iptables supports --random-fully" ipFamily="IPv4"
I0627 21:25:50.751857      12 proxier.go:268] "Using iptables mark for masquerade" ipFamily="IPv6" mark="0x00004000"
I0627 21:25:50.751891      12 proxier.go:304] "Iptables sync params" ipFamily="IPv6" minSyncPeriod="1s" syncPeriod="30s" burstSyncs=2
I0627 21:25:50.751915      12 proxier.go:314] "Iptables supports --random-fully" ipFamily="IPv6"
I0627 21:25:50.751947      12 server.go:865] "Version info" version="v1.29.5"
I0627 21:25:50.751953      12 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0627 21:25:50.753599      12 config.go:188] "Starting service config controller"
I0627 21:25:50.753624      12 shared_informer.go:311] Waiting for caches to sync for service config
I0627 21:25:50.753661      12 config.go:97] "Starting endpoint slice config controller"
I0627 21:25:50.753667      12 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0627 21:25:50.754599      12 config.go:315] "Starting node config controller"
I0627 21:25:50.754638      12 shared_informer.go:311] Waiting for caches to sync for node config
I0627 21:25:50.758263      12 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.0.0/tools/cache/reflector.go:229
I0627 21:25:50.758878      12 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.0.0/tools/cache/reflector.go:229
I0627 21:25:50.759006      12 proxier.go:775] "Not syncing iptables until Services and Endpoints have been received from master"
I0627 21:25:50.759031      12 proxier.go:775] "Not syncing iptables until Services and Endpoints have been received from master"
I0627 21:25:50.759160      12 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go@v0.0.0/tools/cache/reflector.go:229
I0627 21:25:50.854285      12 shared_informer.go:318] Caches are synced for service config
I0627 21:25:50.854337      12 proxier.go:775] "Not syncing iptables until Services and Endpoints have been received from master"
I0627 21:25:50.854355      12 proxier.go:775] "Not syncing iptables until Services and Endpoints have been received from master"
I0627 21:25:50.854285      12 shared_informer.go:318] Caches are synced for endpoint slice config
I0627 21:25:50.854444      12 proxier.go:798] "Syncing iptables rules"
I0627 21:25:50.854732      12 shared_informer.go:318] Caches are synced for node config
I0627 21:25:50.898476      12 proxier.go:1508] "Reloading service iptables data" numServices=5 numEndpoints=6 numFilterChains=6 numFilterRules=4 numNATChains=15 numNATRules=33
I0627 21:25:50.920025      12 proxier.go:792] "SyncProxyRules complete" elapsed="65.643759ms"
I0627 21:25:50.920047      12 proxier.go:798] "Syncing iptables rules"
I0627 21:25:50.964257      12 proxier.go:1508] "Reloading service iptables data" numServices=0 numEndpoints=0 numFilterChains=5 numFilterRules=3 numNATChains=4 numNATRules=5
I0627 21:25:50.965879      12 proxier.go:792] "SyncProxyRules complete" elapsed="45.831405ms"
I0627 21:25:54.733746      12 proxier.go:798] "Syncing iptables rules"
I0627 21:25:54.736095      12 proxier.go:1508] "Reloading service iptables data" numServices=5 numEndpoints=5 numFilterChains=6 numFilterRules=4 numNATChains=7 numNATRules=15
I0627 21:25:54.738533      12 proxier.go:792] "SyncProxyRules complete" elapsed="4.822529ms"
I0627 21:26:26.647754      12 proxier.go:798] "Syncing iptables rules"
I0627 21:26:26.647967      12 proxier.go:798] "Syncing iptables rules"
I0627 21:26:26.721877      12 proxier.go:1508] "Reloading service iptables data" numServices=0 numEndpoints=0 numFilterChains=5 numFilterRules=3 numNATChains=4 numNATRules=5
I0627 21:26:26.723396      12 proxier.go:1508] "Reloading service iptables data" numServices=5 numEndpoints=5 numFilterChains=6 numFilterRules=4 numNATChains=14 numNATRules=30
I0627 21:26:26.723961      12 proxier.go:792] "SyncProxyRules complete" elapsed="76.216944ms"
I0627 21:26:26.725945      12 proxier.go:792] "SyncProxyRules complete" elapsed="77.985702ms"
I0627 21:26:28.791855      12 proxier.go:798] "Syncing iptables rules"
I0627 21:26:28.792056      12 proxier.go:798] "Syncing iptables rules"
I0627 21:26:28.826434      12 proxier.go:1508] "Reloading service iptables data" numServices=0 numEndpoints=0 numFilterChains=5 numFilterRules=3 numNATChains=4 numNATRules=5
I0627 21:26:28.827165      12 proxier.go:1508] "Reloading service iptables data" numServices=5 numEndpoints=5 numFilterChains=6 numFilterRules=4 numNATChains=14 numNATRules=30
I0627 21:26:28.828406      12 proxier.go:792] "SyncProxyRules complete" elapsed="36.560386ms"
I0627 21:26:28.829770      12 proxier.go:792] "SyncProxyRules complete" elapsed="37.726786ms"
I0627 21:26:46.528821      12 proxier.go:798] "Syncing iptables rules"
I0627 21:26:46.533620      12 proxier.go:1508] "Reloading service iptables data" numServices=5 numEndpoints=5 numFilterChains=6 numFilterRules=7 numNATChains=10 numNATRules=13
I0627 21:26:46.561009      12 proxier.go:792] "SyncProxyRules complete" elapsed="32.250268ms"
I0627 21:26:47.025982      12 proxier.go:798] "Syncing iptables rules"
I0627 21:26:47.028352      12 proxier.go:1508] "Reloading service iptables data" numServices=5 numEndpoints=5 numFilterChains=6 numFilterRules=8 numNATChains=6 numNATRules=8
I0627 21:26:47.031739      12 proxier.go:792] "SyncProxyRules complete" elapsed="5.790537ms"
I0627 21:26:47.534081      12 proxier.go:798] "Syncing iptables rules"
I0627 21:26:47.536392      12 proxier.go:1508] "Reloading service iptables data" numServices=5 numEndpoints=2 numFilterChains=6 numFilterRules=8 numNATChains=4 numNATRules=6
I0627 21:26:47.539212      12 proxier.go:792] "SyncProxyRules complete" elapsed="5.1716ms"
I0627 21:27:46.703764      12 proxier.go:798] "Syncing iptables rules"
I0627 21:27:46.710581      12 proxier.go:1508] "Reloading service iptables data" numServices=5 numEndpoints=1 numFilterChains=6 numFilterRules=8 numNATChains=4 numNATRules=6
I0627 21:27:46.714010      12 proxier.go:792] "SyncProxyRules complete" elapsed="10.287837ms"
I0627 21:27:46.747327      12 proxier.go:798] "Syncing iptables rules"
I0627 21:27:46.752715      12 proxier.go:1508] "Reloading service iptables data" numServices=5 numEndpoints=2 numFilterChains=6 numFilterRules=8 numNATChains=4 numNATRules=6
I0627 21:27:46.759972      12 proxier.go:792] "SyncProxyRules complete" elapsed="12.677338ms"
I0627 21:28:07.049290      12 proxier.go:798] "Syncing iptables rules"
I0627 21:28:07.052428      12 proxier.go:1508] "Reloading service iptables data" numServices=5 numEndpoints=2 numFilterChains=6 numFilterRules=7 numNATChains=6 numNATRules=11
I0627 21:28:07.055400      12 proxier.go:792] "SyncProxyRules complete" elapsed="6.14307ms"
I0627 21:37:55.807818      12 proxier.go:798] "Syncing iptables rules"
I0627 21:37:55.810232      12 proxier.go:1508] "Reloading service iptables data" numServices=5 numEndpoints=2 numFilterChains=6 numFilterRules=8 numNATChains=6 numNATRules=8
I0627 21:37:55.814137      12 proxier.go:792] "SyncProxyRules complete" elapsed="6.358315ms"
I0627 21:37:56.811626      12 proxier.go:798] "Syncing iptables rules"
I0627 21:37:56.813724      12 proxier.go:1508] "Reloading service iptables data" numServices=5 numEndpoints=2 numFilterChains=6 numFilterRules=7 numNATChains=6 numNATRules=11
I0627 21:37:56.817418      12 proxier.go:792] "SyncProxyRules complete" elapsed="5.820907ms"
I0627 21:47:56.931041      12 proxier.go:798] "Syncing iptables rules"

Kubelet log from journalctl -u kubelet:

Jun 26 02:38:31 ip-10-35-27-242.ec2.internal kubelet[4047]: E0626 02:38:31.314599    4047 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"

Also noticed that on the nodes, the files 10-calico.conflist and calico-kubeconfig are missing from /etc/cni/net.d/.

These files are present when we create a brand-new cluster at the updated versions.
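
For illustration, the check on an affected node:

```sh
# CNI config directory on an upgraded (NotReady) node; Calico's files are absent
ls -la /etc/cni/net.d/
# On a freshly created 1.29.5 cluster the same directory contains
# 10-calico.conflist and calico-kubeconfig
```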

Other things: our environment is private, with no internet access, but we are able to bring up a brand-new cluster with kops v1.29.0 and Kubernetes 1.29.5. We see the problem only when upgrading the existing cluster.
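
One more data point that may matter, given the certificate error above: if we recall correctly, the kops default serviceClusterIPRange begins at 100.64.0.1, which is exactly the IP the failing certificate is valid for, while our kubernetes Service IP is 172.21.0.1. The configured range can be checked with:

```sh
# Compare the spec's service CIDR with the IP in the TLS error
kops get cluster --name <cluster-name> -o yaml | grep -i serviceClusterIPRange
```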