timothystewart6 closed this 2 years ago
This is a setup I have been using with this playbook for a while. To make sure that this still works, I have opened #78. But the tests are looking good over there also - so what remains to be done here?
https://github.com/techno-tim/k3s-ansible/pull/78 was merged. I will test on my single machine soon. I think the last time I tested it, it failed; however, now we have tests to prove it should work!
@sleiner CI flakiness now on single node? https://github.com/techno-tim/k3s-ansible/runs/8274814832?check_suite_focus=true
> CI flakiness now on single node? https://github.com/techno-tim/k3s-ansible/runs/8274814832?check_suite_focus=true
Seems like it. Not sure how this came to be (I have no idea how the cert names are derived)... Let's see if it happens again?
I did test, and while it does technically work with a single node, it doesn't give that node the `worker` role.
Tested with:

```ini
[master]
192.168.30.38

[k3s_cluster:children]
master
```
Wondering if there should either be a flag to give all nodes all roles, or support for adding the same IP to both groups in hosts (which isn't intuitive and could cause problems), or whether we just add the `worker` role at the end in a post step if there is only 1 IP in hosts.
Open to ideas!
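The last option could be a small post task at the end of the playbook. A rough sketch, not tested against this repo: the task name and `when` condition are illustrative, and it assumes the `k3s_cluster` inventory group from the example above, but `node-role.kubernetes.io/worker=true` is the standard label the kubectl ROLES column is derived from:

```yaml
# Hypothetical post step: if the inventory contains only one host,
# also give that control-plane node the worker role label.
- name: Label single node with the worker role
  ansible.builtin.command: >-
    k3s kubectl label node {{ ansible_hostname }}
    node-role.kubernetes.io/worker=true --overwrite
  when: groups['k3s_cluster'] | length == 1
  changed_when: true
```

With `--overwrite` the task stays safe to re-run on an already-labeled node.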
> it doesn't give that node the `worker` role.
I have to admit that my knowledge is a bit shaky here. Would that be expected/necessary?
The tests show that you can deploy and use the example nginx workload just as you would with an HA cluster.
Good point. I thought it was necessary. I'll test some more. I also need to learn more about the molecule overrides so I can use the same host settings.
I think this is working because we are not applying the taints:
```
➜  personal k get nodes
NAME     STATUS     ROLES                       AGE    VERSION
k3s-01   NotReady   control-plane,etcd,master   5d6h   v1.24.4+k3s1
k3s-02   Ready      control-plane,etcd,master   5d6h   v1.24.4+k3s1
k3s-03   Ready      control-plane,etcd,master   5d6h   v1.24.4+k3s1
k3s-04   Ready      <none>                      5d6h   v1.24.4+k3s1
k3s-05   Ready      <none>                      5d6h   v1.24.4+k3s1

➜  personal k describe node k3s-01
Name:               k3s-01
Roles:              control-plane,etcd,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=k3s
                    beta.kubernetes.io/os=linux
                    egress.k3s.io/cluster=true
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k3s-01
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=true
                    node-role.kubernetes.io/etcd=true
                    node-role.kubernetes.io/master=true
                    node.kubernetes.io/instance-type=k3s
Annotations:        etcd.k3s.cattle.io/node-address: 192.168.30.38
                    etcd.k3s.cattle.io/node-name: k3s-01-3163605d
                    flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"2a:5e:af:0e:15:59"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.30.38
                    k3s.io/hostname: k3s-01
                    k3s.io/internal-ip: 192.168.30.38
                    k3s.io/node-args: ["server","--flannel-iface","eth0","--node-ip","192.168.30.38","--disable","servicelb","--disable","traefik"]
                    k3s.io/node-config-hash: asdasdasdasdasdasdasdas====
                    k3s.io/node-env: {"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/asdasdasdasdsa"}
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 10 Sep 2022 14:16:51 -0500
Taints:             node.kubernetes.io/unreachable:NoExecute
                    node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:
This means that any workload can run on any node, regardless of role. That explains why single node works, but I think it's a bug, not a feature 🤣
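For comparison, a multi-node setup that does want to keep ordinary workloads off the control plane would typically pass a server taint at install time. A sketch, assuming this playbook exposes extra k3s server flags through an `extra_server_args`-style variable (the variable name is an assumption; the taint itself is the standard k3s approach):

```yaml
# Illustrative only: taint the server nodes so regular workloads
# stay off the control plane; only tolerating pods can schedule there.
extra_server_args: "--node-taint CriticalAddonsOnly=true:NoExecute"
```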
We should support single-node installs, installing all roles and all components on one node. This gives someone the ability to run k3s on one node, with the flexibility to grow later.