Open lenglet-k opened 5 months ago
+1 same issue
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
What happened?
I ran cluster.yml with only one control plane, and that control plane works. A few moments later, I added two control planes to my inventory and ran the cluster.yml playbook again. Adding my two new control planes failed on this task:
Earlier in the run I can see that this task is skipped, but it is this task that registers the
kubeconfig_file_discovery
variable. I don't understand why this task is skipped, because kubeadm_use_file_discovery is set to true. Maybe it's caused by this when condition:
kubeadm_already_run is not defined or not kubeadm_already_run.stat.exists
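To illustrate how that condition can cause the behavior described above, here is a minimal sketch (hypothetical task names and marker path, not the actual Kubespray tasks) of a stat task feeding a dependent task's when condition:

```yaml
# Hypothetical sketch: when the marker file already exists on a node,
# the second task is skipped, so kubeconfig_file_discovery ends up
# registered as a skipped result with no usable content.

- name: Check if kubeadm has already run
  stat:
    path: /etc/kubernetes/admin.conf   # assumed marker file, for illustration
  register: kubeadm_already_run

- name: Register kubeconfig for file discovery
  command: /bin/true                   # placeholder for the real action
  register: kubeconfig_file_discovery
  when: kubeadm_already_run is not defined or not kubeadm_already_run.stat.exists
```

Note that a skipped task still registers its variable, but only with skipped: true, so later templates that expect real content from kubeconfig_file_discovery would fail.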
What did you expect to happen?
My two new control planes should be installed.
How can we reproduce it (as minimally and precisely as possible)?
First: initialize the cluster with one control plane. Then: add two control plane nodes to the inventory and run cluster.yml again.
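The two reproduction steps above can be sketched as an inventory change (hostnames and group layout are assumptions, not the reporter's actual inventory):

```yaml
# Step 1: inventory with a single control plane; run:
#   ansible-playbook -i inventory.yml cluster.yml
all:
  children:
    kube_control_plane:
      hosts:
        cp1: {}

# Step 2: add two more control planes under kube_control_plane,
# then re-run the same cluster.yml playbook:
#        cp2: {}
#        cp3: {}
```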
OS
Rocky 8.9
Version of Ansible
Version of Python
Python 3.11.5
Version of Kubespray (commit)
743bcea
Network plugin used
calico
Full inventory with variables
Command used to invoke ansible
Output of ansible run
Anything else we need to know
I added a debug test:
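A debug test like the one mentioned could look like this minimal sketch (the variable names come from the issue; the task's placement in the play is an assumption):

```yaml
# Hypothetical debug task to inspect the variables around the skipped task.
- name: Show registered variables around the skipped task
  debug:
    msg:
      - "kubeadm_use_file_discovery: {{ kubeadm_use_file_discovery | default('undefined') }}"
      - "kubeadm_already_run: {{ kubeadm_already_run | default('undefined') }}"
```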