Closed: lilHermit closed this issue 1 month ago
Is this kubespray release 2.23?
I tried with the latest kubespray code and could not reproduce it.
certSANs:
- "kubernetes"
- "kubernetes.default"
- "kubernetes.default.svc"
- "kubernetes.default.svc.cluster.local"
- "10.233.0.1"
- "localhost"
- "127.0.0.1"
- "node1"
- "lb-apiserver.kubernetes.local"
- "10.6.88.1"
- "node1.cluster.local"
Hello,
I'm facing a similar issue while trying to upgrade an existing cluster (kubespray version: v2.24.1).
Why am I getting k8s_cluster appended? From which variable is kubespray taking the "k8s_cluster" value?
192.168.1.141 kube-master.k8s_cluster kube-master
192.168.1.160 kube-worker-1.k8s_cluster kube-worker-1
192.168.1.134 kube-worker-2.k8s_cluster kube-worker-2
The same entries appear in kubeadm-config.conf.
Shouldn't the suffix be a proper domain containing a "." rather than k8s_cluster?
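For what it's worth, k8s_cluster is the name of the top-level group in kubespray's sample inventory, not a DNS domain; the /etc/hosts suffix is normally taken from the dns_domain variable (cluster.local by default). A minimal single-node sketch of that group layout in Ansible's YAML inventory format, reusing the kube-master name and address from the /etc/hosts output above:

all:
  hosts:
    kube-master:
      ansible_host: 192.168.1.141
      ip: 192.168.1.141
  children:
    kube_control_plane:
      hosts:
        kube-master:
    kube_node:
      hosts:
        kube-master:
    etcd:
      hosts:
        kube-master:
    k8s_cluster:            # inventory group name only; not intended to show up as a host suffix
      children:
        kube_control_plane:
        kube_node:

If kube-master.k8s_cluster is landing in /etc/hosts, it suggests dns_domain (or cluster_name) ended up set to k8s_cluster somewhere in your group_vars, which would be worth checking first.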
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
What happened?
When deploying to a single-node cluster (as a test), it errors with:
When SSH'ing into the node and checking
/etc/kubernetes/kubeadm-config.yaml
the certSANs list has an entry that is not RFC-valid, as shown below. As you can see, the final element includes a comma, which isn't anywhere in my config. If I remove it and then rerun the following on the node, it succeeds. However, kubespray has obviously already errored.
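The actual offending value isn't reproduced here, but purely as a hypothetical illustration of the failure mode: a stray comma inside a SAN makes it an invalid DNS name (RFC 1123 names only allow alphanumerics, hyphens, and dots), which is presumably why kubeadm rejects the config.

apiServer:
  certSANs:
    - "node1.cluster.local"
    - "lb-apiserver.kubernetes.local,"   # hypothetical example; the trailing comma makes this SAN invalid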
What did you expect to happen?
No errors and a healthy cluster
How can we reproduce it (as minimally and precisely as possible)?
Copy the sample inventory and use the following
inventory.ini
This is using the docker image
quay.io/kubespray/kubespray:v2.23.3
and also quay.io/kubespray/kubespray:v2.24.1
OS
Running kubespray via Docker on Ubuntu, targeting a node running on a Raspberry Pi 4 (also Ubuntu)
Version of Ansible
ansible [core 2.14.6]
  config file = /kubespray/ansible.cfg
  configured module search path = ['/kubespray/library']
  ansible python module location = /usr/local/lib/python3.10/dist-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (/usr/bin/python3)
  jinja version = 3.1.2
  libyaml = True
Version of Python
Python 3.10.12
Version of Kubespray (commit)
quay.io/kubespray/kubespray:v2.23.3 (docker tag)
Network plugin used
calico
Full inventory with variables
See above; the inventory.ini is nothing special.
Command used to invoke ansible
ansible-playbook -i inventory/pi-cluster2/inventory.ini --become --become-user=root cluster.yml
Output of ansible run
https://pastebin.com/vh06yfgc
Anything else we need to know
The nodes are arm64, but I doubt that's the issue.