kubernetes-sigs / kubespray

Deploy a Production Ready Kubernetes Cluster
Apache License 2.0

Calico fails to configure / to install #7129

Closed ghost closed 3 years ago

ghost commented 3 years ago

Hi there, and thanks for this great project (kubespray).

The playbook runs through well so far, except for one thing.

It fails at this point:

TASK [network_plugin/calico : Wait for calico kubeconfig to be created] ****
fatal: [node3]: FAILED! => {"changed": false, "elapsed": 300, "msg": "Timeout when waiting for file /etc/cni/net.d/calico-kubeconfig"}
fatal: [node4]: FAILED! => {"changed": false, "elapsed": 300, "msg": "Timeout when waiting for file /etc/cni/net.d/calico-kubeconfig"}

Checking the path manually afterwards on the non-master nodes (node3 and node4) gives me:

ubuntu@node3:~$ ls /etc/cni/net.d/
calico.conflist.template

ubuntu@node4:~$ ls /etc/cni/net.d/
calico.conflist.template

...so the file calico-kubeconfig doesn't exist at all!

It would be odd if this were a permission issue, since I run the playbook as the root user.
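
For anyone hitting the same timeout: /etc/cni/net.d/calico-kubeconfig is written by the install-cni init container of the calico-node pods, so if those pods never start (for instance, stuck in ImagePullBackOff, or crashing with an "exec format error" when an amd64 image lands on an arm64 node), the file never appears. A quick way to check from a master node (a sketch; the pod name is a placeholder):

kubectl -n kube-system get pods -l k8s-app=calico-node -o wide
kubectl -n kube-system describe pod calico-node-xxxxx    # look for ImagePullBackOff or "exec format error"
kubectl -n kube-system logs calico-node-xxxxx -c install-cni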

There may already be a related issue:

https://github.com/kubernetes-sigs/kubespray/issues/5683

...but it was never really solved.

I appreciate your help and I would like to contribute to this awesome project (kubespray) in the future as well!

Cheers

Marco

P.S. Please excuse me if this bug report isn't formally perfect; I'm new to GitHub! ;)

Environment:

Kubespray version (commit) (git rev-parse --short HEAD): ff952924

Network plugin used: calico

Full inventory with variables (ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"):

node1 | SUCCESS => { "hostvars[inventory_hostname]": { "access_ip": "192.168.2.115", "ansible_check_mode": false, "ansible_diff_mode": false, "ansible_facts": {}, "ansible_forks": 5, "ansible_host": "192.168.2.115", "ansible_inventory_sources": [ "/home/marco/kubespray/inventory/mycluster/hosts.yaml" ], "ansible_playbook_python": "/home/marco/.pyenv/versions/3.9.1/bin/python3.9", "ansible_verbosity": 0, "ansible_version": { "full": "2.9.6", "major": 2, "minor": 9, "revision": 6, "string": "2.9.6" }, "bin_dir": "/usr/local/bin", "cephfs_provisioner_enabled": false, "cert_manager_enabled": false, "cluster_name": "cluster.local", "container_manager": "docker", "coredns_k8s_external_zone": "k8s_external.local", "credentials_dir": "/home/marco/kubespray/inventory/mycluster/credentials", "default_kubelet_config_dir": "/etc/kubernetes/dynamic_kubelet_dir", "deploy_netchecker": false, "dns_domain": "cluster.local", "dns_mode": "coredns", "docker_bin_dir": "/usr/bin", "docker_container_storage_setup": false, "docker_daemon_graph": "/var/lib/docker", "docker_dns_servers_strict": false, "docker_iptables_enabled": "false", "docker_log_opts": "--log-opt max-size=50m --log-opt max-file=5", "docker_rpm_keepcache": 0, "dynamic_kubelet_configuration": false, "dynamic_kubelet_configuration_dir": "/etc/kubernetes/dynamic_kubelet_dir", "enable_coredns_k8s_endpoint_pod_names": false, "enable_coredns_k8s_external": false, "enable_nat_default_gateway": true, "enable_nodelocaldns": true, "etcd_data_dir": "/var/lib/etcd", "etcd_deployment_type": "docker", "etcd_kubeadm_enabled": false, "event_ttl_duration": "1h0m0s", "force_certificate_regeneration": false, "group_names": [ "etcd", "k8s-cluster", "kube-master", "kube-node" ], "groups": { "all": [ "node1", "node2", "node3", "node4" ], "calico-rr": [], "etcd": [ "node1", "node2", "node3" ], "k8s-cluster": [ "node1", "node2", "node3", "node4" ], "kube-master": [ "node1", "node2" ], "kube-node": [ "node1", "node2", "node3", "node4" ], "ungrouped": [] }, "helm_enabled": false, "ingress_alb_enabled": false, "ingress_ambassador_enabled": false, "ingress_nginx_enabled": false, "ingress_publish_status_address": "", "inventory_dir": "/home/marco/kubespray/inventory/mycluster", "inventory_file": "/home/marco/kubespray/inventory/mycluster/hosts.yaml", "inventory_hostname": "node1", "inventory_hostname_short": "node1", "ip": "192.168.2.115", "k8s_image_pull_policy": "IfNotPresent", "kata_containers_enabled": false, "kube_api_anonymous_auth": true, "kube_apiserver_insecure_port": 0, "kube_apiserver_ip": "10.233.0.1", "kube_apiserver_port": 6443, "kube_cert_dir": "/etc/kubernetes/ssl", "kube_cert_group": "kube-cert", "kube_config_dir": "/etc/kubernetes", "kube_encrypt_secret_data": false, "kube_log_level": 2, "kube_manifest_dir": "/etc/kubernetes/manifests", "kube_network_node_prefix": 24, "kube_network_plugin": "calico", "kube_network_plugin_multus": false, "kube_pods_subnet": "10.233.64.0/18", "kube_proxy_mode": "ipvs", "kube_proxy_nodeport_addresses": [], "kube_proxy_strict_arp": false, "kube_script_dir": "/usr/local/bin/kubernetes-scripts", "kube_service_addresses": "10.233.0.0/18", "kube_token_dir": "/etc/kubernetes/tokens", "kube_version": "v1.19.6", "kubeadm_certificate_key": "eacdef11bd212e5fa6bf513c15fa607ebeff3698aaade46eb5e6baa600ee1e9a", "kubernetes_audit": false, "loadbalancer_apiserver_healthcheck_port": 8081, "loadbalancer_apiserver_port": 6443, "local_path_provisioner_enabled": false, "local_release_dir": "/tmp/releases", "local_volume_provisioner_enabled": 
false, "macvlan_interface": "eth1", "metallb_enabled": false, "metrics_server_enabled": false, "ndots": 2, "no_proxy_exclude_workers": false, "nodelocaldns_health_port": 9254, "nodelocaldns_ip": "169.254.25.10", "omit": "omit_place_holdera1f9366412dbba2c0c20b38f192197f2042bb27a", "persistent_volumes_enabled": false, "playbook_dir": "/home/marco/kubespray", "podsecuritypolicy_enabled": false, "rbd_provisioner_enabled": false, "registry_enabled": false, "resolvconf_mode": "docker_dns", "retry_stagger": 5, "skydns_server": "10.233.0.3", "skydns_server_secondary": "10.233.0.4", "volume_cross_zone_attachment": false } } node2 | SUCCESS => { "hostvars[inventory_hostname]": { "access_ip": "192.168.2.117", "ansible_check_mode": false, "ansible_diff_mode": false, "ansible_facts": {}, "ansible_forks": 5, "ansible_host": "192.168.2.117", "ansible_inventory_sources": [ "/home/marco/kubespray/inventory/mycluster/hosts.yaml" ], "ansible_playbook_python": "/home/marco/.pyenv/versions/3.9.1/bin/python3.9", "ansible_verbosity": 0, "ansible_version": { "full": "2.9.6", "major": 2, "minor": 9, "revision": 6, "string": "2.9.6" }, "bin_dir": "/usr/local/bin", "cephfs_provisioner_enabled": false, "cert_manager_enabled": false, "cluster_name": "cluster.local", "container_manager": "docker", "coredns_k8s_external_zone": "k8s_external.local", "credentials_dir": "/home/marco/kubespray/inventory/mycluster/credentials", "default_kubelet_config_dir": "/etc/kubernetes/dynamic_kubelet_dir", "deploy_netchecker": false, "dns_domain": "cluster.local", "dns_mode": "coredns", "docker_bin_dir": "/usr/bin", "docker_container_storage_setup": false, "docker_daemon_graph": "/var/lib/docker", "docker_dns_servers_strict": false, "docker_iptables_enabled": "false", "docker_log_opts": "--log-opt max-size=50m --log-opt max-file=5", "docker_rpm_keepcache": 0, "dynamic_kubelet_configuration": false, "dynamic_kubelet_configuration_dir": "/etc/kubernetes/dynamic_kubelet_dir", "enable_coredns_k8s_endpoint_pod_names": false, "enable_coredns_k8s_external": false, "enable_nat_default_gateway": true, "enable_nodelocaldns": true, "etcd_data_dir": "/var/lib/etcd", "etcd_deployment_type": "docker", "etcd_kubeadm_enabled": false, "event_ttl_duration": "1h0m0s", "force_certificate_regeneration": false, "group_names": [ "etcd", "k8s-cluster", "kube-master", "kube-node" ], "groups": { "all": [ "node1", "node2", "node3", "node4" ], "calico-rr": [], "etcd": [ "node1", "node2", "node3" ], "k8s-cluster": [ "node1", "node2", "node3", "node4" ], "kube-master": [ "node1", "node2" ], "kube-node": [ "node1", "node2", "node3", "node4" ], "ungrouped": [] }, "helm_enabled": false, "ingress_alb_enabled": false, "ingress_ambassador_enabled": false, "ingress_nginx_enabled": false, "ingress_publish_status_address": "", "inventory_dir": "/home/marco/kubespray/inventory/mycluster", "inventory_file": "/home/marco/kubespray/inventory/mycluster/hosts.yaml", "inventory_hostname": "node2", "inventory_hostname_short": "node2", "ip": "192.168.2.117", "k8s_image_pull_policy": "IfNotPresent", "kata_containers_enabled": false, "kube_api_anonymous_auth": true, "kube_apiserver_insecure_port": 0, "kube_apiserver_ip": "10.233.0.1", "kube_apiserver_port": 6443, "kube_cert_dir": "/etc/kubernetes/ssl", "kube_cert_group": "kube-cert", "kube_config_dir": "/etc/kubernetes", "kube_encrypt_secret_data": false, "kube_log_level": 2, "kube_manifest_dir": "/etc/kubernetes/manifests", "kube_network_node_prefix": 24, "kube_network_plugin": "calico", "kube_network_plugin_multus": false, 
"kube_pods_subnet": "10.233.64.0/18", "kube_proxy_mode": "ipvs", "kube_proxy_nodeport_addresses": [], "kube_proxy_strict_arp": false, "kube_script_dir": "/usr/local/bin/kubernetes-scripts", "kube_service_addresses": "10.233.0.0/18", "kube_token_dir": "/etc/kubernetes/tokens", "kube_version": "v1.19.6", "kubeadm_certificate_key": "eacdef11bd212e5fa6bf513c15fa607ebeff3698aaade46eb5e6baa600ee1e9a", "kubernetes_audit": false, "loadbalancer_apiserver_healthcheck_port": 8081, "loadbalancer_apiserver_port": 6443, "local_path_provisioner_enabled": false, "local_release_dir": "/tmp/releases", "local_volume_provisioner_enabled": false, "macvlan_interface": "eth1", "metallb_enabled": false, "metrics_server_enabled": false, "ndots": 2, "no_proxy_exclude_workers": false, "nodelocaldns_health_port": 9254, "nodelocaldns_ip": "169.254.25.10", "omit": "omit_place_holdera1f9366412dbba2c0c20b38f192197f2042bb27a", "persistent_volumes_enabled": false, "playbook_dir": "/home/marco/kubespray", "podsecuritypolicy_enabled": false, "rbd_provisioner_enabled": false, "registry_enabled": false, "resolvconf_mode": "docker_dns", "retry_stagger": 5, "skydns_server": "10.233.0.3", "skydns_server_secondary": "10.233.0.4", "volume_cross_zone_attachment": false } } node3 | SUCCESS => { "hostvars[inventory_hostname]": { "access_ip": "192.168.2.116", "ansible_check_mode": false, "ansible_diff_mode": false, "ansible_facts": {}, "ansible_forks": 5, "ansible_host": "192.168.2.116", "ansible_inventory_sources": [ "/home/marco/kubespray/inventory/mycluster/hosts.yaml" ], "ansible_playbook_python": "/home/marco/.pyenv/versions/3.9.1/bin/python3.9", "ansible_verbosity": 0, "ansible_version": { "full": "2.9.6", "major": 2, "minor": 9, "revision": 6, "string": "2.9.6" }, "bin_dir": "/usr/local/bin", "cephfs_provisioner_enabled": false, "cert_manager_enabled": false, "cluster_name": "cluster.local", "container_manager": "docker", "coredns_k8s_external_zone": "k8s_external.local", "credentials_dir": "/home/marco/kubespray/inventory/mycluster/credentials", "default_kubelet_config_dir": "/etc/kubernetes/dynamic_kubelet_dir", "deploy_netchecker": false, "dns_domain": "cluster.local", "dns_mode": "coredns", "docker_bin_dir": "/usr/bin", "docker_container_storage_setup": false, "docker_daemon_graph": "/var/lib/docker", "docker_dns_servers_strict": false, "docker_iptables_enabled": "false", "docker_log_opts": "--log-opt max-size=50m --log-opt max-file=5", "docker_rpm_keepcache": 0, "dynamic_kubelet_configuration": false, "dynamic_kubelet_configuration_dir": "/etc/kubernetes/dynamic_kubelet_dir", "enable_coredns_k8s_endpoint_pod_names": false, "enable_coredns_k8s_external": false, "enable_nat_default_gateway": true, "enable_nodelocaldns": true, "etcd_data_dir": "/var/lib/etcd", "etcd_deployment_type": "docker", "etcd_kubeadm_enabled": false, "event_ttl_duration": "1h0m0s", "force_certificate_regeneration": false, "group_names": [ "etcd", "k8s-cluster", "kube-node" ], "groups": { "all": [ "node1", "node2", "node3", "node4" ], "calico-rr": [], "etcd": [ "node1", "node2", "node3" ], "k8s-cluster": [ "node1", "node2", "node3", "node4" ], "kube-master": [ "node1", "node2" ], "kube-node": [ "node1", "node2", "node3", "node4" ], "ungrouped": [] }, "helm_enabled": false, "ingress_alb_enabled": false, "ingress_ambassador_enabled": false, "ingress_nginx_enabled": false, "ingress_publish_status_address": "", "inventory_dir": "/home/marco/kubespray/inventory/mycluster", "inventory_file": "/home/marco/kubespray/inventory/mycluster/hosts.yaml", 
"inventory_hostname": "node3", "inventory_hostname_short": "node3", "ip": "192.168.2.116", "k8s_image_pull_policy": "IfNotPresent", "kata_containers_enabled": false, "kube_api_anonymous_auth": true, "kube_apiserver_insecure_port": 0, "kube_apiserver_ip": "10.233.0.1", "kube_apiserver_port": 6443, "kube_cert_dir": "/etc/kubernetes/ssl", "kube_cert_group": "kube-cert", "kube_config_dir": "/etc/kubernetes", "kube_encrypt_secret_data": false, "kube_log_level": 2, "kube_manifest_dir": "/etc/kubernetes/manifests", "kube_network_node_prefix": 24, "kube_network_plugin": "calico", "kube_network_plugin_multus": false, "kube_pods_subnet": "10.233.64.0/18", "kube_proxy_mode": "ipvs", "kube_proxy_nodeport_addresses": [], "kube_proxy_strict_arp": false, "kube_script_dir": "/usr/local/bin/kubernetes-scripts", "kube_service_addresses": "10.233.0.0/18", "kube_token_dir": "/etc/kubernetes/tokens", "kube_version": "v1.19.6", "kubeadm_certificate_key": "eacdef11bd212e5fa6bf513c15fa607ebeff3698aaade46eb5e6baa600ee1e9a", "kubernetes_audit": false, "loadbalancer_apiserver_healthcheck_port": 8081, "loadbalancer_apiserver_port": 6443, "local_path_provisioner_enabled": false, "local_release_dir": "/tmp/releases", "local_volume_provisioner_enabled": false, "macvlan_interface": "eth1", "metallb_enabled": false, "metrics_server_enabled": false, "ndots": 2, "no_proxy_exclude_workers": false, "nodelocaldns_health_port": 9254, "nodelocaldns_ip": "169.254.25.10", "omit": "omit_place_holdera1f9366412dbba2c0c20b38f192197f2042bb27a", "persistent_volumes_enabled": false, "playbook_dir": "/home/marco/kubespray", "podsecuritypolicy_enabled": false, "rbd_provisioner_enabled": false, "registry_enabled": false, "resolvconf_mode": "docker_dns", "retry_stagger": 5, "skydns_server": "10.233.0.3", "skydns_server_secondary": "10.233.0.4", "volume_cross_zone_attachment": false } } node4 | SUCCESS => { "hostvars[inventory_hostname]": { "access_ip": "192.168.2.118", "ansible_check_mode": false, "ansible_diff_mode": false, "ansible_facts": {}, "ansible_forks": 5, "ansible_host": "192.168.2.118", "ansible_inventory_sources": [ "/home/marco/kubespray/inventory/mycluster/hosts.yaml" ], "ansible_playbook_python": "/home/marco/.pyenv/versions/3.9.1/bin/python3.9", "ansible_verbosity": 0, "ansible_version": { "full": "2.9.6", "major": 2, "minor": 9, "revision": 6, "string": "2.9.6" }, "bin_dir": "/usr/local/bin", "cephfs_provisioner_enabled": false, "cert_manager_enabled": false, "cluster_name": "cluster.local", "container_manager": "docker", "coredns_k8s_external_zone": "k8s_external.local", "credentials_dir": "/home/marco/kubespray/inventory/mycluster/credentials", "default_kubelet_config_dir": "/etc/kubernetes/dynamic_kubelet_dir", "deploy_netchecker": false, "dns_domain": "cluster.local", "dns_mode": "coredns", "docker_bin_dir": "/usr/bin", "docker_container_storage_setup": false, "docker_daemon_graph": "/var/lib/docker", "docker_dns_servers_strict": false, "docker_iptables_enabled": "false", "docker_log_opts": "--log-opt max-size=50m --log-opt max-file=5", "docker_rpm_keepcache": 0, "dynamic_kubelet_configuration": false, "dynamic_kubelet_configuration_dir": "/etc/kubernetes/dynamic_kubelet_dir", "enable_coredns_k8s_endpoint_pod_names": false, "enable_coredns_k8s_external": false, "enable_nat_default_gateway": true, "enable_nodelocaldns": true, "etcd_data_dir": "/var/lib/etcd", "etcd_kubeadm_enabled": false, "event_ttl_duration": "1h0m0s", "force_certificate_regeneration": false, "group_names": [ "k8s-cluster", "kube-node" ], "groups": { 
"all": [ "node1", "node2", "node3", "node4" ], "calico-rr": [], "etcd": [ "node1", "node2", "node3" ], "k8s-cluster": [ "node1", "node2", "node3", "node4" ], "kube-master": [ "node1", "node2" ], "kube-node": [ "node1", "node2", "node3", "node4" ], "ungrouped": [] }, "helm_enabled": false, "ingress_alb_enabled": false, "ingress_ambassador_enabled": false, "ingress_nginx_enabled": false, "ingress_publish_status_address": "", "inventory_dir": "/home/marco/kubespray/inventory/mycluster", "inventory_file": "/home/marco/kubespray/inventory/mycluster/hosts.yaml", "inventory_hostname": "node4", "inventory_hostname_short": "node4", "ip": "192.168.2.118", "k8s_image_pull_policy": "IfNotPresent", "kata_containers_enabled": false, "kube_api_anonymous_auth": true, "kube_apiserver_insecure_port": 0, "kube_apiserver_ip": "10.233.0.1", "kube_apiserver_port": 6443, "kube_cert_dir": "/etc/kubernetes/ssl", "kube_cert_group": "kube-cert", "kube_config_dir": "/etc/kubernetes", "kube_encrypt_secret_data": false, "kube_log_level": 2, "kube_manifest_dir": "/etc/kubernetes/manifests", "kube_network_node_prefix": 24, "kube_network_plugin": "calico", "kube_network_plugin_multus": false, "kube_pods_subnet": "10.233.64.0/18", "kube_proxy_mode": "ipvs", "kube_proxy_nodeport_addresses": [], "kube_proxy_strict_arp": false, "kube_script_dir": "/usr/local/bin/kubernetes-scripts", "kube_service_addresses": "10.233.0.0/18", "kube_token_dir": "/etc/kubernetes/tokens", "kube_version": "v1.19.6", "kubeadm_certificate_key": "eacdef11bd212e5fa6bf513c15fa607ebeff3698aaade46eb5e6baa600ee1e9a", "kubernetes_audit": false, "loadbalancer_apiserver_healthcheck_port": 8081, "loadbalancer_apiserver_port": 6443, "local_path_provisioner_enabled": false, "local_release_dir": "/tmp/releases", "local_volume_provisioner_enabled": false, "macvlan_interface": "eth1", "metallb_enabled": false, "metrics_server_enabled": false, "ndots": 2, "no_proxy_exclude_workers": false, "nodelocaldns_health_port": 9254, "nodelocaldns_ip": "169.254.25.10", "omit": "omit_place_holdera1f9366412dbba2c0c20b38f192197f2042bb27a", "persistent_volumes_enabled": false, "playbook_dir": "/home/marco/kubespray", "podsecuritypolicy_enabled": false, "rbd_provisioner_enabled": false, "registry_enabled": false, "resolvconf_mode": "docker_dns", "retry_stagger": 5, "skydns_server": "10.233.0.3", "skydns_server_secondary": "10.233.0.4", "volume_cross_zone_attachment": false } } Command used to invoke ansible: ansible-playbook -i inventory/mycluster/hosts.yaml --user=root cluster.yml

Output of ansible run:

Anything else we need to know:

takamori-tech commented 3 years ago

I hit the same issue on my Raspberry Pi 4B cluster.

It is caused by the Calico image definitions in Kubespray not taking the CPU architecture into account. The quay.io images used here are not multi-arch, so on arm64 the image tags need the -arm64 suffix.

As a temporary fix, I changed the YAML definitions in kubespray/roles/download/defaults/main.yml to the following:

calico_node_image_tag: "{{ calico_version }}-arm64"
calico_cni_image_tag: "{{ calico_cni_version }}-arm64"
calico_policy_image_tag: "{{ calico_policy_version }}-arm64"
calico_typha_image_tag: "{{ calico_typha_version }}-arm64"
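
For reference, the same overrides can live in the inventory instead of the role defaults, so they survive updates to the kubespray checkout (a sketch; the filename is just an example, any inventory-level vars file works):

# inventory/mycluster/group_vars/k8s-cluster/arm64-image-tags.yml
calico_node_image_tag: "{{ calico_version }}-arm64"
calico_cni_image_tag: "{{ calico_cni_version }}-arm64"
calico_policy_image_tag: "{{ calico_policy_version }}-arm64"
calico_typha_image_tag: "{{ calico_typha_version }}-arm64"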
ghost commented 3 years ago

@takamori-tech Thanks a bunch! This really did the trick! Now the playbook runs through without any issues on my 4 Raspberry Pis!

Maybe this is a chance for improvement: declaring the architecture in one central place?

Cheers

Marco

takamori-tech commented 3 years ago

To fix this permanently, I think the Calico image tags should be defined as below, like the etcd image (also pulled from quay.io), appending the architecture suffix only for non-amd64. kubespray/roles/download/defaults/main.yml:

calico_node_image_tag: "{{ calico_version }}{%- if image_arch != 'amd64' -%}-{{ image_arch }}{%- endif -%}"
calico_cni_image_tag: "{{ calico_cni_version }}{%- if image_arch != 'amd64' -%}-{{ image_arch }}{%- endif -%}"
calico_policy_image_tag: "{{ calico_policy_version }}{%- if image_arch != 'amd64' -%}-{{ image_arch }}{%- endif -%}"
calico_typha_image_tag: "{{ calico_typha_version }}{%- if image_arch != 'amd64' -%}-{{ image_arch }}{%- endif -%}"
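
For context, image_arch is already defined by Kubespray's download role, derived from the detected host architecture (roughly as below; paraphrased, not a verbatim copy), so the expressions above render plain tags on amd64 hosts and arch-suffixed tags elsewhere:

image_arch: "{{ host_architecture | default('amd64') }}"

A manual sanity check of both tag variants is possible with a plain pull on an arm64 node (the version is a placeholder; substitute whatever your checkout pins as calico_version):

docker pull quay.io/calico/node:v3.16.5          # unsuffixed tag; may resolve to amd64 only
docker pull quay.io/calico/node:v3.16.5-arm64    # arch-specific tag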
dong8650 commented 3 months ago

First of all, thank you for providing a clue.

However, the paths on my checkout differ from the ones you mentioned. I found the variables with grep -r, added -arm64, and ran ansible-playbook again, but I still get the error below.

Are there any other measures I could try?

[ Path ]

root@test01-dc1:~# grep -r calico_node_image_tag
kubespray/roles/network_plugin/calico/templates/calico-node.yml.j2: image: {{ calico_node_image_repo }}:{{ calico_node_image_tag }}
kubespray/roles/kubespray-defaults/defaults/main/download.yml:calico_node_image_tag: "{{ calico_version }}-arm64"
kubespray/roles/kubespray-defaults/defaults/main/download.yml: tag: "{{ calico_node_image_tag }}"

root@test01-dc1:~# grep -r calico_cni_image_tag
kubespray/roles/network_plugin/calico/templates/calico-node.yml.j2: image: {{ calico_cni_image_repo }}:{{ calico_cni_image_tag }}
kubespray/roles/network_plugin/calico/templates/calico-node.yml.j2: image: {{ calico_cni_image_repo }}:{{ calico_cni_image_tag }}
kubespray/roles/kubespray-defaults/defaults/main/download.yml:calico_cni_image_tag: "{{ calico_cni_version }}-arm64"
kubespray/roles/kubespray-defaults/defaults/main/download.yml: tag: "{{ calico_cni_image_tag }}"

root@test01-dc1:~# grep -r calico_policy_image_tag
kubespray/roles/kubernetes-apps/policy_controller/calico/templates/calico-kube-controllers.yml.j2: image: {{ calico_policy_image_repo }}:{{ calico_policy_image_tag }}
kubespray/roles/kubespray-defaults/defaults/main/download.yml:calico_policy_image_tag: "{{ calico_policy_version }}-arm64"
kubespray/roles/kubespray-defaults/defaults/main/download.yml: tag: "{{ calico_policy_image_tag }}"

root@test01-dc1:~# grep -r calico_typha_image_tag
kubespray/roles/network_plugin/calico/templates/calico-typha.yml.j2: - image: {{ calico_typha_image_repo }}:{{ calico_typha_image_tag }}
kubespray/roles/kubespray-defaults/defaults/main/download.yml:calico_typha_image_tag: "{{ calico_typha_version }}-arm64"
kubespray/roles/kubespray-defaults/defaults/main/download.yml: tag: "{{ calico_typha_image_tag }}"
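
The grep output above also shows that on this much newer checkout the defaults live in roles/kubespray-defaults/defaults/main/download.yml rather than roles/download/defaults/main.yml. Note that newer Calico releases publish multi-arch manifests on quay.io, so the -arm64 suffix may no longer be needed (and a suffixed tag that doesn't exist would itself cause pull failures). Since recent Kubespray defaults to containerd, one way to confirm whether the patched tags are actually pullable from a worker is crictl (the tag is a placeholder; substitute whatever your template renders):

crictl images | grep calico
crictl pull quay.io/calico/node:v3.27.1-arm64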

============================================================

Using module file /usr/local/lib/python3.10/dist-packages/ansible/modules/wait_for.py Pipelining is enabled. <100.100.100.211> ESTABLISH SSH CONNECTION FOR USER: None <100.100.100.211> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o 'ControlPath="/root/.ansible/cp/c82cd758b9"' 100.100.100.211 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-aqcxjmbotsytlhvimsmomuusurfyilyr ; ALL_PROXY='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' FTP_PROXY='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' HTTPS_PROXY='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' HTTP_PROXY='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' NO_PROXY='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' all_proxy='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' ftp_proxy='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' http_proxy='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' https_proxy='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' no_proxy='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' /usr/bin/python3'"'"'"'"'"'"'"'"' && sleep 0'"'"'' Escalation succeeded Using module file /usr/local/lib/python3.10/dist-packages/ansible/modules/wait_for.py Pipelining is enabled. 
<100.100.100.212> ESTABLISH SSH CONNECTION FOR USER: None <100.100.100.212> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o 'ControlPath="/root/.ansible/cp/c740269957"' 100.100.100.212 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-hfwtjmwlngpztumqkeerlyoceowtkedp ; ALL_PROXY='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' FTP_PROXY='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' HTTPS_PROXY='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' HTTP_PROXY='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' NO_PROXY='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' all_proxy='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' ftp_proxy='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' http_proxy='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' https_proxy='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' no_proxy='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' /usr/bin/python3'"'"'"'"'"'"'"'"' && sleep 0'"'"'' Escalation succeeded Using module file /usr/local/lib/python3.10/dist-packages/ansible/modules/wait_for.py Pipelining is enabled. 
<100.100.100.213> ESTABLISH SSH CONNECTION FOR USER: None <100.100.100.213> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o 'ControlPath="/root/.ansible/cp/0f7c4235a1"' 100.100.100.213 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-naenlkmwkclssfxilovajdnbtzcexssk ; ALL_PROXY='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' FTP_PROXY='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' HTTPS_PROXY='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' HTTP_PROXY='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' NO_PROXY='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' all_proxy='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' ftp_proxy='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' http_proxy='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' https_proxy='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' no_proxy='"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"' /usr/bin/python3'"'"'"'"'"'"'"'"' && sleep 0'"'"'' Escalation succeeded <100.100.100.211> (1, b'\n{"elapsed": 300, "failed": true, "msg": "Timeout when waiting for file /etc/cni/net.d/calico-kubeconfig", "invocation": {"module_args": {"path": "/etc/cni/net.d/calico-kubeconfig", "timeout": 300, "host": "127.0.0.1", "connect_timeout": 5, "delay": 0, "active_connection_states": ["ESTABLISHED", "FIN_WAIT1", "FIN_WAIT2", "SYN_RECV", "SYN_SENT", "TIME_WAIT"], "state": "started", "sleep": 1, "port": null, "search_regex": null, "exclude_hosts": null, "msg": null}}}\n', b'OpenSSH_8.9p1 Ubuntu-3ubuntu0.6, OpenSSL 3.0.2 15 Mar 2022\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for \r\ndebug2: resolve_canonicalize: hostname 100.100.100.211 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 75229\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\n') <100.100.100.211> Failed to connect to the host via ssh: OpenSSH_8.9p1 Ubuntu-3ubuntu0.6, OpenSSL 3.0.2 15 Mar 2022 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/.conf matched no files debug1: /etc/ssh/ssh_config line 21: 
Applying options for debug2: resolve_canonicalize: hostname 100.100.100.211 is address debug1: auto-mux: Trying existing master debug2: fd 3 setting O_NONBLOCK debug2: mux_client_hello_exchange: master version 4 debug3: mux_client_forwards: request forwardings: 0 local, 0 remote debug3: mux_client_request_session: entering debug3: mux_client_request_alive: entering debug3: mux_client_request_alive: done pid = 75229 debug3: mux_client_request_session: session request sent debug1: mux_client_request_session: master session id: 2 debug3: mux_client_read_packet: read header failed: Broken pipe debug2: Received exit status from master 1

TASK [network_plugin/calico : Wait for calico kubeconfig to be created] **** task path: /root/kubespray/roles/network_plugin/calico/tasks/install.yml:460 fatal: [k8s-worker01-dc1]: FAILED! => { "changed": false, "elapsed": 300, "invocation": { "module_args": { "active_connection_states": [ "ESTABLISHED", "FIN_WAIT1", "FIN_WAIT2", "SYN_RECV", "SYN_SENT", "TIME_WAIT" ], "connect_timeout": 5, "delay": 0, "exclude_hosts": null, "host": "127.0.0.1", "msg": null, "path": "/etc/cni/net.d/calico-kubeconfig", "port": null, "search_regex": null, "sleep": 1, "state": "started", "timeout": 300 } }, "msg": "Timeout when waiting for file /etc/cni/net.d/calico-kubeconfig" } <100.100.100.212> (1, b'\n{"elapsed": 300, "failed": true, "msg": "Timeout when waiting for file /etc/cni/net.d/calico-kubeconfig", "invocation": {"module_args": {"path": "/etc/cni/net.d/calico-kubeconfig", "timeout": 300, "host": "127.0.0.1", "connect_timeout": 5, "delay": 0, "active_connection_states": ["ESTABLISHED", "FIN_WAIT1", "FIN_WAIT2", "SYN_RECV", "SYN_SENT", "TIME_WAIT"], "state": "started", "sleep": 1, "port": null, "search_regex": null, "exclude_hosts": null, "msg": null}}}\n', b'OpenSSH_8.9p1 Ubuntu-3ubuntu0.6, OpenSSL 3.0.2 15 Mar 2022\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for \r\ndebug2: resolve_canonicalize: hostname 100.100.100.212 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 75232\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\n') <100.100.100.212> Failed to connect to the host via ssh: OpenSSH_8.9p1 Ubuntu-3ubuntu0.6, OpenSSL 3.0.2 15 Mar 2022 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/.conf matched no files debug1: /etc/ssh/ssh_config line 21: Applying options for debug2: resolve_canonicalize: hostname 100.100.100.212 is address debug1: auto-mux: Trying existing master debug2: fd 3 setting O_NONBLOCK debug2: mux_client_hello_exchange: master version 4 debug3: mux_client_forwards: request forwardings: 0 local, 0 remote debug3: mux_client_request_session: entering debug3: mux_client_request_alive: entering debug3: mux_client_request_alive: done pid = 75232 debug3: mux_client_request_session: session request sent debug1: mux_client_request_session: master session id: 2 debug3: mux_client_read_packet: read header failed: Broken pipe debug2: Received exit status from master 1 fatal: [k8s-worker02-dc1]: FAILED! 
=> { "changed": false, "elapsed": 300, "invocation": { "module_args": { "active_connection_states": [ "ESTABLISHED", "FIN_WAIT1", "FIN_WAIT2", "SYN_RECV", "SYN_SENT", "TIME_WAIT" ], "connect_timeout": 5, "delay": 0, "exclude_hosts": null, "host": "127.0.0.1", "msg": null, "path": "/etc/cni/net.d/calico-kubeconfig", "port": null, "search_regex": null, "sleep": 1, "state": "started", "timeout": 300 } }, "msg": "Timeout when waiting for file /etc/cni/net.d/calico-kubeconfig" } <100.100.100.213> (1, b'\n{"elapsed": 300, "failed": true, "msg": "Timeout when waiting for file /etc/cni/net.d/calico-kubeconfig", "invocation": {"module_args": {"path": "/etc/cni/net.d/calico-kubeconfig", "timeout": 300, "host": "127.0.0.1", "connect_timeout": 5, "delay": 0, "active_connection_states": ["ESTABLISHED", "FIN_WAIT1", "FIN_WAIT2", "SYN_RECV", "SYN_SENT", "TIME_WAIT"], "state": "started", "sleep": 1, "port": null, "search_regex": null, "exclude_hosts": null, "msg": null}}}\n', b'OpenSSH_8.9p1 Ubuntu-3ubuntu0.6, OpenSSL 3.0.2 15 Mar 2022\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for \r\ndebug2: resolve_canonicalize: hostname 100.100.100.213 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 75237\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\n') <100.100.100.213> Failed to connect to the host via ssh: OpenSSH_8.9p1 Ubuntu-3ubuntu0.6, OpenSSL 3.0.2 15 Mar 2022 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/.conf matched no files debug1: /etc/ssh/ssh_config line 21: Applying options for debug2: resolve_canonicalize: hostname 100.100.100.213 is address debug1: auto-mux: Trying existing master debug2: fd 3 setting O_NONBLOCK debug2: mux_client_hello_exchange: master version 4 debug3: mux_client_forwards: request forwardings: 0 local, 0 remote debug3: mux_client_request_session: entering debug3: mux_client_request_alive: entering debug3: mux_client_request_alive: done pid = 75237 debug3: mux_client_request_session: session request sent debug1: mux_client_request_session: master session id: 2 debug3: mux_client_read_packet: read header failed: Broken pipe debug2: Received exit status from master 1 fatal: [k8s-worker03-dc1]: FAILED! => { "changed": false, "elapsed": 300, "invocation": { "module_args": { "active_connection_states": [ "ESTABLISHED", "FIN_WAIT1", "FIN_WAIT2", "SYN_RECV", "SYN_SENT", "TIME_WAIT" ], "connect_timeout": 5, "delay": 0, "exclude_hosts": null, "host": "127.0.0.1", "msg": null, "path": "/etc/cni/net.d/calico-kubeconfig", "port": null, "search_regex": null, "sleep": 1, "state": "started", "timeout": 300 } }, "msg": "Timeout when waiting for file /etc/cni/net.d/calico-kubeconfig" }

NO MORE HOSTS LEFT *****

PLAY RECAP *****
k8s-master01-dc1 : ok=612 changed=81 unreachable=0 failed=0 skipped=728 rescued=0 ignored=6
k8s-master02-dc1 : ok=552 changed=76 unreachable=0 failed=0 skipped=639 rescued=0 ignored=3
k8s-master03-dc1 : ok=554 changed=77 unreachable=0 failed=0 skipped=637 rescued=0 ignored=3
k8s-worker01-dc1 : ok=378 changed=39 unreachable=0 failed=1 skipped=505 rescued=0 ignored=1
k8s-worker02-dc1 : ok=378 changed=39 unreachable=0 failed=1 skipped=501 rescued=0 ignored=1
k8s-worker03-dc1 : ok=378 changed=39 unreachable=0 failed=1 skipped=501 rescued=0 ignored=1

Thursday 21 March 2024 02:37:21 +0900 (0:05:01.569) 0:41:28.414 ****
===============================================================================
network_plugin/calico : Wait for calico kubeconfig to be created ------ 301.57s
/root/kubespray/roles/network_plugin/calico/tasks/install.yml:460
download : Download_container | Download image if required ------------- 99.77s
/root/kubespray/roles/download/tasks/download_container.yml:57
download : Download_container | Download image if required ------------- 65.26s
/root/kubespray/roles/download/tasks/download_container.yml:57
container-engine/runc : Download_file | Download item ------------------ 40.88s
/root/kubespray/roles/download/tasks/download_file.yml:58
container-engine/validate-container-engine : Populate service facts ---- 40.74s
/root/kubespray/roles/container-engine/validate-container-engine/tasks/main.yml:25
container-engine/containerd : Download_file | Download item ------------ 40.41s
/root/kubespray/roles/download/tasks/download_file.yml:58
container-engine/crictl : Download_file | Download item ---------------- 39.87s
/root/kubespray/roles/download/tasks/download_file.yml:58
container-engine/nerdctl : Download_file | Download item --------------- 39.22s
/root/kubespray/roles/download/tasks/download_file.yml:58
etcd : Gen_certs | Write etcd member/admin and kube_control_plane client certs to other etcd nodes -- 31.45s
/root/kubespray/roles/etcd/tasks/gen_certs_script.yml:87
download : Download_container | Download image if required ------------- 30.61s
/root/kubespray/roles/download/tasks/download_container.yml:57
kubernetes/control-plane : Kubeadm | Initialize first master ----------- 29.80s
/root/kubespray/roles/kubernetes/control-plane/tasks/kubeadm-setup.yml:192
container-engine/crictl : Extract_file | Unpacking archive ------------- 29.68s
/root/kubespray/roles/download/tasks/extract_file.yml:2
download : Download_container | Download image if required ------------- 27.83s
/root/kubespray/roles/download/tasks/download_container.yml:57
container-engine/nerdctl : Extract_file | Unpacking archive ------------ 27.68s
/root/kubespray/roles/download/tasks/extract_file.yml:2
etcd : Reload etcd ------------------------------------------------------ 21.69s
/root/kubespray/roles/etcd/handlers/main.yml:12
download : Download_container | Download image if required ------------- 21.04s
/root/kubespray/roles/download/tasks/download_container.yml:57
download : Download_container | Download image if required ------------- 19.98s
/root/kubespray/roles/download/tasks/download_container.yml:57
kubernetes/control-plane : Joining control plane node to the cluster. -- 19.23s
/root/kubespray/roles/kubernetes/control-plane/tasks/kubeadm-secondary.yml:66
download : Download_file | Download item ------------------------------- 17.95s
/root/kubespray/roles/download/tasks/download_file.yml:58
etcdctl_etcdutl : Download_file | Download item ------------------------ 17.41s
/root/kubespray/roles/download/tasks/download_file.yml:58

dong8650 commented 3 months ago

root@test01-dc1:~# uname -a
Linux test01-dc1 5.15.0-1048-raspi #51-Ubuntu SMP PREEMPT Thu Feb 22 10:30:12 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux

root@test01-dc1:~# cat /etc/issue
Ubuntu 22.04.4 LTS \n \l

root@test01-dc1:~/kubespray/inventory/mycluster# vi inventory.ini

[all]
k8s-master01-dc1 ansible_host=100.100.100.201 ip=100.100.100.201
k8s-master02-dc1 ansible_host=100.100.100.202 ip=100.100.100.202
k8s-master03-dc1 ansible_host=100.100.100.203 ip=100.100.100.203
k8s-worker01-dc1 ansible_host=100.100.100.211 ip=100.100.100.211
k8s-worker02-dc1 ansible_host=100.100.100.212 ip=100.100.100.212
k8s-worker03-dc1 ansible_host=100.100.100.213 ip=100.100.100.213

[kube_control_plane]
k8s-master01-dc1
k8s-master02-dc1
k8s-master03-dc1

[etcd]
k8s-master01-dc1
k8s-master02-dc1
k8s-master03-dc1

[kube_node]
k8s-worker01-dc1
k8s-worker02-dc1
k8s-worker03-dc1

[calico_rr]

[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr
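
Since the arch-suffix logic keys off the architecture Ansible detects on each host, it may also be worth confirming that every node really reports aarch64 (not part of the original thread; just a suggested sanity check):

ansible -i inventory/mycluster/inventory.ini all -m setup -a "filter=ansible_architecture"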

dong8650 commented 3 months ago

ansible-playbook -v -i inventory/mycluster/inventory.ini --become --become-user=root cluster.yml -vvv
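
When iterating on image-tag changes like this, re-running only the network-plugin portion can save a full 40-minute pass. Kubespray's docs list an ansible tag for it (verify the tag list for your version before relying on it):

ansible-playbook -i inventory/mycluster/inventory.ini --become --become-user=root cluster.yml --tags network -vvv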