easzlab / kubeasz

Installs a Kubernetes cluster using Ansible playbooks, explains how the components interact, is straightforward to use, and is not affected by network restrictions in mainland China.
https://github.com/easzlab/kubeasz

Installation on Anolis 7.9 fails at step "06 network" #1410

Open opser-gavin opened 2 weeks ago

opser-gavin commented 2 weeks ago

What happened?

```console
TASK [calico : 准备 calicoctl配置文件] ***
ok: [192.168.1.104]
ok: [192.168.1.105]
ok: [192.168.1.103]

TASK [calico : 轮询等待calico-node 运行] *****
FAILED - RETRYING: [192.168.1.103]: 轮询等待calico-node 运行 (15 retries left).
FAILED - RETRYING: [192.168.1.104]: 轮询等待calico-node 运行 (15 retries left).
FAILED - RETRYING: [192.168.1.105]: 轮询等待calico-node 运行 (15 retries left).
FAILED - RETRYING: [192.168.1.104]: 轮询等待calico-node 运行 (14 retries left).
FAILED - RETRYING: [192.168.1.103]: 轮询等待calico-node 运行 (14 retries left).
FAILED - RETRYING: [192.168.1.105]: 轮询等待calico-node 运行 (14 retries left).
FAILED - RETRYING: [192.168.1.104]: 轮询等待calico-node 运行 (13 retries left).
FAILED - RETRYING: [192.168.1.103]: 轮询等待calico-node 运行 (13 retries left).
FAILED - RETRYING: [192.168.1.105]: 轮询等待calico-node 运行 (13 retries left).
FAILED - RETRYING: [192.168.1.104]: 轮询等待calico-node 运行 (12 retries left).
FAILED - RETRYING: [192.168.1.103]: 轮询等待calico-node 运行 (12 retries left).
FAILED - RETRYING: [192.168.1.105]: 轮询等待calico-node 运行 (12 retries left).
FAILED - RETRYING: [192.168.1.104]: 轮询等待calico-node 运行 (11 retries left).
FAILED - RETRYING: [192.168.1.103]: 轮询等待calico-node 运行 (11 retries left).
FAILED - RETRYING: [192.168.1.105]: 轮询等待calico-node 运行 (11 retries left).
FAILED - RETRYING: [192.168.1.104]: 轮询等待calico-node 运行 (10 retries left).
FAILED - RETRYING: [192.168.1.103]: 轮询等待calico-node 运行 (10 retries left).
FAILED - RETRYING: [192.168.1.105]: 轮询等待calico-node 运行 (10 retries left).
FAILED - RETRYING: [192.168.1.104]: 轮询等待calico-node 运行 (9 retries left).
FAILED - RETRYING: [192.168.1.103]: 轮询等待calico-node 运行 (9 retries left).
FAILED - RETRYING: [192.168.1.105]: 轮询等待calico-node 运行 (9 retries left).
FAILED - RETRYING: [192.168.1.103]: 轮询等待calico-node 运行 (8 retries left).
FAILED - RETRYING: [192.168.1.104]: 轮询等待calico-node 运行 (8 retries left).
FAILED - RETRYING: [192.168.1.105]: 轮询等待calico-node 运行 (8 retries left).
FAILED - RETRYING: [192.168.1.103]: 轮询等待calico-node 运行 (7 retries left).
FAILED - RETRYING: [192.168.1.104]: 轮询等待calico-node 运行 (7 retries left).
FAILED - RETRYING: [192.168.1.105]: 轮询等待calico-node 运行 (7 retries left).
FAILED - RETRYING: [192.168.1.104]: 轮询等待calico-node 运行 (6 retries left).
FAILED - RETRYING: [192.168.1.103]: 轮询等待calico-node 运行 (6 retries left).
FAILED - RETRYING: [192.168.1.105]: 轮询等待calico-node 运行 (6 retries left).
FAILED - RETRYING: [192.168.1.104]: 轮询等待calico-node 运行 (5 retries left).
FAILED - RETRYING: [192.168.1.103]: 轮询等待calico-node 运行 (5 retries left).
FAILED - RETRYING: [192.168.1.105]: 轮询等待calico-node 运行 (5 retries left).
FAILED - RETRYING: [192.168.1.104]: 轮询等待calico-node 运行 (4 retries left).
FAILED - RETRYING: [192.168.1.103]: 轮询等待calico-node 运行 (4 retries left).
FAILED - RETRYING: [192.168.1.105]: 轮询等待calico-node 运行 (4 retries left).
FAILED - RETRYING: [192.168.1.104]: 轮询等待calico-node 运行 (3 retries left).
FAILED - RETRYING: [192.168.1.105]: 轮询等待calico-node 运行 (3 retries left).
FAILED - RETRYING: [192.168.1.103]: 轮询等待calico-node 运行 (3 retries left).
FAILED - RETRYING: [192.168.1.104]: 轮询等待calico-node 运行 (2 retries left).
FAILED - RETRYING: [192.168.1.105]: 轮询等待calico-node 运行 (2 retries left).
FAILED - RETRYING: [192.168.1.103]: 轮询等待calico-node 运行 (2 retries left).
FAILED - RETRYING: [192.168.1.104]: 轮询等待calico-node 运行 (1 retries left).
FAILED - RETRYING: [192.168.1.105]: 轮询等待calico-node 运行 (1 retries left).
FAILED - RETRYING: [192.168.1.103]: 轮询等待calico-node 运行 (1 retries left).
fatal: [192.168.1.104]: FAILED! => {"attempts": 15, "changed": true, "cmd": "/etc/kubeasz/bin/kubectl get pod -n kube-system -o wide|grep 'calico-node'|grep ' master-02 '|awk '{print $3}'", "delta": "0:00:00.130962", "end": "2024-09-23 22:13:36.317503", "msg": "", "rc": 0, "start": "2024-09-23 22:13:36.186541", "stderr": "", "stderr_lines": [], "stdout": "Init:0/2", "stdout_lines": ["Init:0/2"]}
...ignoring
fatal: [192.168.1.105]: FAILED! => {"attempts": 15, "changed": true, "cmd": "/etc/kubeasz/bin/kubectl get pod -n kube-system -o wide|grep 'calico-node'|grep ' worker-01 '|awk '{print $3}'", "delta": "0:00:00.134998", "end": "2024-09-23 22:13:36.422855", "msg": "", "rc": 0, "start": "2024-09-23 22:13:36.287857", "stderr": "", "stderr_lines": [], "stdout": "Init:0/2", "stdout_lines": ["Init:0/2"]}
...ignoring
fatal: [192.168.1.103]: FAILED! => {"attempts": 15, "changed": true, "cmd": "/etc/kubeasz/bin/kubectl get pod -n kube-system -o wide|grep 'calico-node'|grep ' master-01 '|awk '{print $3}'", "delta": "0:00:00.093481", "end": "2024-09-23 22:13:36.484455", "msg": "", "rc": 0, "start": "2024-09-23 22:13:36.390974", "stderr": "", "stderr_lines": [], "stdout": "Init:0/2", "stdout_lines": ["Init:0/2"]}
...ignoring

PLAY RECAP ***
192.168.1.103 : ok=13 changed=6 unreachable=0 failed=0 skipped=36 rescued=0 ignored=1
192.168.1.104 : ok=7 changed=2 unreachable=0 failed=0 skipped=13 rescued=0 ignored=1
192.168.1.105 : ok=7 changed=2 unreachable=0 failed=0 skipped=13 rescued=0 ignored=1

[root@vm-102 ~]#
```
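Note that the task itself only polls pod state and is marked `...ignoring`, so once the underlying problem is fixed, the failed network step can be re-run on its own instead of reinstalling the whole cluster. A minimal sketch, assuming the cluster was created under the name `default` (a hypothetical name; substitute whatever is listed under /etc/kubeasz/clusters/ on the deploy host):

```shell
# Re-run only the failed "06 network" step after fixing the root cause
# (cluster name "default" is an assumption -- use your own):
ezctl setup default 06

# Then watch the calico-node pods leave the Init:0/2 state:
/etc/kubeasz/bin/kubectl get pod -n kube-system -o wide | grep calico-node
```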

What did you expect to happen?

A successful installation.

How can we reproduce it (as minimally and precisely as possible)?

A fresh cluster installation in a test environment.

Anything else we need to know?

```console
9月 23 22:12:57 master-01 kubelet[22786]: E0923 22:12:57.315795 22786 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
9月 23 22:13:00 master-01 kubelet[22786]: I0923 22:13:00.137321 22786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/calico-node-dkkhz"
9月 23 22:13:00 master-01 kubelet[22786]: I0923 22:13:00.137662 22786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/calico-node-dkkhz"
9月 23 22:13:00 master-01 containerd[11177]: time="2024-09-23T22:13:00.138289201+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dkkhz,Uid:72bc46cf-d7b9-469c-915b-7ce79823ff1a,Namespace:kube-system,Attempt:0,}"
9月 23 22:13:00 master-01 containerd[11177]: time="2024-09-23T22:13:00.148718805+08:00" level=info msg="trying next host" error="failed to do request: Head \"http://easzlab.io.local:5000/v2/easzlab/pause/manifests/3.9\": dial tcp 192.168.1.102:5000: connect: no route to host" host="easzlab.io.local:5000"
9月 23 22:13:00 master-01 containerd[11177]: time="2024-09-23T22:13:00.153729317+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dkkhz,Uid:72bc46cf-d7b9-469c-915b-7ce79823ff1a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"easzlab.io.local:5000/easzlab/pause:3.9\": failed to pull image \"easzlab.io.local:5000/easzlab/pause:3.9\": failed to pull and unpack image \"easzlab.io.local:5000/easzlab/pause:3.9\": failed to resolve reference \"easzlab.io.local:5000/easzlab/pause:3.9\": failed to do request: Head \"http://easzlab.io.local:5000/v2/easzlab/pause/manifests/3.9\": dial tcp 192.168.1.102:5000: connect: no route to host"
9月 23 22:13:00 master-01 containerd[11177]: time="2024-09-23T22:13:00.153763214+08:00" level=info msg="stop pulling image easzlab.io.local:5000/easzlab/pause:3.9: active requests=0, bytes read=0"
9月 23 22:13:00 master-01 kubelet[22786]: E0923 22:13:00.154016 22786 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"easzlab.io.local:5000/easzlab/pause:3.9\": failed to pull image \"easzlab.io.local:5000/easzlab/pause:3.9\": failed to pull and unpack image \"easzlab.io.local:5000/easzlab/pause:3.9\": failed to resolve reference \"easzlab.io.local:5000/easzlab/pause:3.9\": failed to do request: Head \"http://easzlab.io.local:5000/v2/easzlab/pause/manifests/3.9\": dial tcp 192.168.1.102:5000: connect: no route to host"
9月 23 22:13:00 master-01 kubelet[22786]: E0923 22:13:00.154066 22786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to get sandbox image \"easzlab.io.local:5000/easzlab/pause:3.9\": failed to pull image \"easzlab.io.local:5000/easzlab/pause:3.9\": failed to pull and unpack image \"easzlab.io.local:5000/easzlab/pause:3.9\": failed to resolve reference \"easzlab.io.local:5000/easzlab/pause:3.9\": failed to do request: Head \"http://easzlab.io.local:5000/v2/easzlab/pause/manifests/3.9\": dial tcp 192.168.1.102:5000: connect: no route to host" pod="kube-system/calico-node-dkkhz"
9月 23 22:13:00 master-01 kubelet[22786]: E0923 22:13:00.154087 22786 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"easzlab.io.local:5000/easzlab/pause:3.9\": failed to pull image \"easzlab.io.local:5000/easzlab/pause:3.9\": failed to pull and unpack image \"easzlab.io.local:5000/easzlab/pause:3.9\": failed to resolve reference \"easzlab.io.local:5000/easzlab/pause:3.9\": failed to do request: Head \"http://easzlab.io.local:5000/v2/easzlab/pause/manifests/3.9\": dial tcp 192.168.1.102:5000: connect: no route to host" pod="kube-system/calico-node-dkkhz"
9月 23 22:13:00 master-01 kubelet[22786]: E0923 22:13:00.154138 22786 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-node-dkkhz_kube-system(72bc46cf-d7b9-469c-915b-7ce79823ff1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\"calico-node-dkkhz_kube-system(72bc46cf-d7b9-469c-915b-7ce79823ff1a)\\": rpc error: code = Unknown desc = failed to get sandbox image \\"easzlab.io.local:5000/easzlab/pause:3.9\\": failed to pull image \\"easzlab.io.local:5000/easzlab/pause:3.9\\": failed to pull and unpack image \\"easzlab.io.local:5000/easzlab/pause:3.9\\": failed to resolve reference \\"easzlab.io.local:5000/easzlab/pause:3.9\\": failed to do request: Head \\"http://easzlab.io.local:5000/v2/easzlab/pause/manifests/3.9\\": dial tcp 192.168.1.102:5000: connect: no route to host\"" pod="kube-system/calico-node-dkkhz" podUID="72bc46cf-d7b9-469c-915b-7ce79823ff1a"
9月 23 22:13:02 master-01 kubelet[22786]: E0923 22:13:02.317149 22786 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
9月 23 22:13:04 master-01 etcd[16342]: {"level":"warn","ts":"2024-09-23T22:13:04.185319+0800","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":6298444741512802085,"retry-timeout":"500ms"}
9月 23 22:13:04 master-01 etcd[16342]: {"level":"warn","ts":"2024-09-23T22:13:04.686072+0800","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":6298444741512802085,"retry-timeout":"500ms"}
9月 23 22:13:05 master-01 etcd[16342]: {"level":"warn","ts":"2024-09-23T22:13:05.187047+0800","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":6298444741512802085,"retry-timeout":"500ms"}
9月 23 22:13:05 master-01 etcd[16342]: {"level":"info","ts":"2024-09-23T22:13:05.259137+0800","caller":"traceutil/trace.go:171","msg":"trace[447489324] linearizableReadLoop","detail":"{readStateIndex:1942; appliedIndex:1942; }","duration":"1.574445675s","start":"2024-09-23T22:13:03.68465+0800","end":"2024-09-23T22:13:05.259096+0800","steps":["trace[447489324] 'read index received' (duration: 1.574440782s)","trace[447489324] 'applied index is now lower than readState.Index' (duration: 3.84µs)"],"step_count":2}
9月 23 22:13:05 master-01 etcd[16342]: {"level":"info","ts":"2024-09-23T22:13:05.259267+0800","caller":"traceutil/trace.go:171","msg":"trace[2086323578] transaction","detail":"{read_only:false; response_revision:1597; number_of_response:1; }","duration":"1.35532443s","start":"2024-09-23T22:13:03.903932+0800","end":"2024-09-23T22:13:05.259257+0800","steps":["trace[2086323578] 'process raft request' (duration: 1.355298805s)"],"step_count":1}
9月 23 22:13:05 master-01 etcd[16342]: {"level":"warn","ts":"2024-09-23T22:13:05.25938+0800","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T22:13:03.903918+0800","time spent":"1.355399217s","remote":"192.168.1.104:47290","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":525,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/worker-01\" mod_revision:1580 > success:<request_put:<key:\"/registry/leases/kube-node-lease/worker-01\" value_size:475 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/worker-01\" > >"}
9月 23 22:13:05 master-01 etcd[16342]: {"level":"warn","ts":"2024-09-23T22:13:05.2594+0800","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.574729276s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
9月 23 22:13:05 master-01 kube-apiserver[18728]: I0923 22:13:05.261291 18728 trace.go:236] Trace[995946543]: "Update" accept:application/vnd.kubernetes.protobuf, /,audit-id:42ab4615-6558-4759-ade9-7060b2dcbd25,client:192.168.1.104,api-group:coordination.k8s.io,api-version:v1,name:kube-controller-manager,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.30.1 (linux/amd64) kubernetes/6911225/leader-election,verb:PUT (23-Sep-2024 22:13:03.805) (total time: 1456ms):
9月 23 22:13:05 master-01 kube-apiserver[18728]: Trace[995946543]: ["GuaranteedUpdate etcd3" audit-id:42ab4615-6558-4759-ade9-7060b2dcbd25,key:/leases/kube-system/kube-controller-manager,type:coordination.Lease,resource:leases.coordination.k8s.io 1456ms (22:13:03.805)
9月 23 22:13:05 master-01 kube-apiserver[18728]: Trace[995946543]: ---"Txn call completed" 1455ms (22:13:05.261)]
9月 23 22:13:05 master-01 kube-apiserver[18728]: Trace[995946543]: [1.456115735s] [1.456115735s] END
9月 23 22:13:05 master-01 etcd[16342]: {"level":"info","ts":"2024-09-23T22:13:05.259442+0800","caller":"traceutil/trace.go:171","msg":"trace[1281532245] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:0; response_revision:1597; }","duration":"1.574798417s","start":"2024-09-23T22:13:03.684625+0800","end":"2024-09-23T22:13:05.259423+0800","steps":["trace[1281532245] 'agreement among raft nodes before linearized reading' (duration: 1.574640425s)"],"step_count":1}
9月 23 22:13:05 master-01 etcd[16342]: {"level":"warn","ts":"2024-09-23T22:13:05.259813+0800","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T22:13:03.684615+0800","time spent":"1.575176877s","remote":"192.168.1.103:38956","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":2,"response size":30,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true "}
9月 23 22:13:05 master-01 etcd[16342]: {"level":"info","ts":"2024-09-23T22:13:05.259516+0800","caller":"traceutil/trace.go:171","msg":"trace[1641283152] transaction","detail":"{read_only:false; response_revision:1596; number_of_response:1; }","duration":"1.453307548s","start":"2024-09-23T22:13:03.806199+0800","end":"2024-09-23T22:13:05.259506+0800","steps":["trace[1641283152] 'process raft request' (duration: 1.452963799s)"],"step_count":1}
9月 23 22:13:05 master-01 etcd[16342]: {"level":"warn","ts":"2024-09-23T22:13:05.260061+0800","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T22:13:03.806183+0800","time spent":"1.453834097s","remote":"192.168.1.103:38890","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":491,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/kube-controller-manager\" mod_revision:1592 > success:<request_put:<key:\"/registry/leases/kube-system/kube-controller-manager\" value_size:431 >> failure:<request_range:<key:\"/registry/leases/kube-system/kube-controller-manager\" > >"}
9月 23 22:13:05 master-01 etcd[16342]: {"level":"warn","ts":"2024-09-23T22:13:05.292852+0800","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.030920521s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets/\" range_end:\"/registry/statefulsets0\" count_only:true ","response":"range_response_count:0 size:5"}
9月 23 22:13:05 master-01 etcd[16342]: {"level":"info","ts":"2024-09-23T22:13:05.292916+0800","caller":"traceutil/trace.go:171","msg":"trace[1308515173] range","detail":"{range_begin:/registry/statefulsets/; range_end:/registry/statefulsets0; response_count:0; response_revision:1597; }","duration":"1.031022672s","start":"2024-09-23T22:13:04.26188+0800","end":"2024-09-23T22:13:05.292903+0800","steps":["trace[1308515173] 'agreement among raft nodes before linearized reading' (duration: 1.030911854s)"],"step_count":1}
9月 23 22:13:05 master-01 etcd[16342]: {"level":"warn","ts":"2024-09-23T22:13:05.29296+0800","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T22:13:04.261866+0800","time spent":"1.031080268s","remote":"192.168.1.103:39022","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":28,"request content":"key:\"/registry/statefulsets/\" range_end:\"/registry/statefulsets0\" count_only:true "}
9月 23 22:13:05 master-01 etcd[16342]: {"level":"warn","ts":"2024-09-23T22:13:05.293187+0800","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.103211ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" count_only:true ","response":"range_response_count:0 size:5"}
9月 23 22:13:05 master-01 etcd[16342]: {"level":"info","ts":"2024-09-23T22:13:05.293219+0800","caller":"traceutil/trace.go:171","msg":"trace[1494304915] range","detail":"{range_begin:/registry/jobs/; range_end:/registry/jobs0; response_count:0; response_revision:1597; }","duration":"178.166502ms","start":"2024-09-23T22:13:05.115044+0800","end":"2024-09-23T22:13:05.29321+0800","steps":["trace[1494304915] 'agreement among raft nodes before linearized reading' (duration: 178.110309ms)"],"step_count":1}
9月 23 22:13:05 master-01 etcd[16342]: {"level":"warn","ts":"2024-09-23T22:13:05.293636+0800","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"923.000277ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.1.103\" ","response":"range_response_count:1 size:133"}
9月 23 22:13:05 master-01 etcd[16342]: {"level":"info","ts":"2024-09-23T22:13:05.293709+0800","caller":"traceutil/trace.go:171","msg":"trace[1003707366] range","detail":"{range_begin:/registry/masterleases/192.168.1.103; range_end:; response_count:1; response_revision:1597; }","duration":"923.069471ms","start":"2024-09-23T22:13:04.370597+0800","end":"2024-09-23T22:13:05.293667+0800","steps":["trace[1003707366] 'agreement among raft nodes before linearized reading' (duration: 922.941551ms)"],"step_count":1}
9月 23 22:13:05 master-01 etcd[16342]: {"level":"warn","ts":"2024-09-23T22:13:05.293749+0800","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T22:13:04.370585+0800","time spent":"923.152106ms","remote":"192.168.1.103:38734","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":156,"request content":"key:\"/registry/masterleases/192.168.1.103\" "}
9月 23 22:13:05 master-01 kube-apiserver[18728]: I0923 22:13:05.328094 18728 trace.go:236] Trace[504212266]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.1.103,type:v1.Endpoints,resource:apiServerIPInfo (23-Sep-2024 22:13:04.370) (total time: 957ms):
9月 23 22:13:05 master-01 kube-apiserver[18728]: Trace[504212266]: ---"initial value restored" 924ms (22:13:05.294)
9月 23 22:13:05 master-01 kube-apiserver[18728]: Trace[504212266]: [957.970086ms] [957.970086ms] END
```

Kubernetes version

* kubernetes: v1.30.1
* etcd: v3.5.12
* calico: v3.26.4

Kubeasz version

3.6.4

OS version

```console
[root@vm-102 ~]# cat /etc/os-release
NAME="Anolis OS"
VERSION="7.9"
ID="anolis"
ID_LIKE="rhel fedora centos"
VERSION_ID="7.9"
PRETTY_NAME="Anolis OS 7.9"
ANSI_COLOR="0;31"
HOME_URL="https://openanolis.cn/"
BUG_REPORT_URL="https://bugs.openanolis.cn/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
[root@vm-102 ~]# uname -a
Linux vm-102 3.10.0-1160.119.1.0.1.an7.x86_64 #1 SMP Thu Jun 27 09:50:34 CST 2024 x86_64 x86_64 x86_64 GNU/Linux
[root@vm-102 ~]#
```

Related plugins (CNI, CSI, ...) and versions (if applicable)

j4ckzh0u commented 1 day ago
```console
9月 23 22:13:00 master-01 kubelet[22786]: E0923 22:13:00.154138 22786 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-node-dkkhz_kube-system(72bc46cf-d7b9-469c-915b-7ce79823ff1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\"calico-node-dkkhz_kube-system(72bc46cf-d7b9-469c-915b-7ce79823ff1a)\\": rpc error: code = Unknown desc = failed to get sandbox image \\"easzlab.io.local:5000/easzlab/pause:3.9\\": failed to pull image \\"easzlab.io.local:5000/easzlab/pause:3.9\\": failed to pull and unpack image \\"easzlab.io.local:5000/easzlab/pause:3.9\\": failed to resolve reference \\"easzlab.io.local:5000/easzlab/pause:3.9\\": failed to do request: Head \\"http://easzlab.io.local:5000/v2/easzlab/pause/manifests/3.9\\": dial tcp 192.168.1.102:5000: connect: no route to host\"" pod="kube-system/calico-node-dkkhz" podUID="72bc46cf-d7b9-469c-915b-7ce79823ff1a"
```

The error indicates a network connectivity problem: the node cannot reach the registry at easzlab.io.local. That hostname is configured in /etc/hosts.
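Note that the name does resolve in the logs above (the dial goes to 192.168.1.102), so "no route to host" on port 5000 more often points at a firewall on the registry host than at a missing /etc/hosts entry. A diagnostic sketch, run from a failing node; the hostname and IP are taken from the logs above, and the firewall-cmd step assumes firewalld is in use (adjust for your firewall):

```shell
# 1. Confirm the registry name resolves via /etc/hosts:
getent hosts easzlab.io.local

# 2. Probe the registry port directly; "no route to host" here usually
#    means the connection is being rejected by a firewall, not bad DNS:
curl -v http://easzlab.io.local:5000/v2/ 2>&1 | tail -n 5

# 3. If blocked, on the registry host (192.168.1.102) allow the port:
#    firewall-cmd --permanent --add-port=5000/tcp && firewall-cmd --reload
#    or, for a quick test only: systemctl stop firewalld
```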