We haven't seen leaks, so my guess is that konnectivity isn't working properly due to the lack of a load balancer, and that's causing it to open an unreasonable number of connections to kube-apiserver.
You can enable the node-local load balancer right in k0sctl: https://docs.k0sproject.io/head/nllb/#full-example-using-k0sctl
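A minimal sketch of what that can look like in a k0sctl config, following the linked docs (host entries and addresses are placeholders):

```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
    # ... your hosts here ...
  k0s:
    config:
      spec:
        network:
          nodeLocalLoadBalancing:
            # Run a local Envoy on each worker that balances
            # API and konnectivity traffic across all controllers
            enabled: true
            type: EnvoyProxy
```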
Also, it's a very bad idea to have 4 controllers. etcd should have an ODD number of members; etcd considers 3 or 5 optimal, since a 4-member cluster still tolerates only one failure (the same as 3) while raising the quorum to 3. So you should have 3 nodes as controller+worker and one node as worker only, as sketched below.
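For the topology, the hosts section could then look like this (addresses and SSH details are placeholders):

```yaml
spec:
  hosts:
    # Three controller+worker nodes keep etcd at an odd member count
    - role: controller+worker
      ssh:
        address: 10.0.0.1
        user: root
        keyPath: ~/.ssh/id_rsa
    - role: controller+worker
      ssh:
        address: 10.0.0.2
        user: root
        keyPath: ~/.ssh/id_rsa
    - role: controller+worker
      ssh:
        address: 10.0.0.3
        user: root
        keyPath: ~/.ssh/id_rsa
    # The fourth machine joins as a plain worker
    - role: worker
      ssh:
        address: 10.0.0.4
        user: root
        keyPath: ~/.ssh/id_rsa
```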
If the problem persists once you have the load balancer, I'll ask for a memory profile so that we can see what's going on internally.
Thank you for the help; after creating the cluster with an odd number of controllers and a load balancer, it works just fine. Now I know why there is no load balancer in a two-node configuration with one controller.
Hi, I have ambivalent feelings about k0s. Right now, using four Ampere A1 machines (2 CPU, 16 GB RAM) on OL8, RAM was exhausted on 2 hosts after just 2 hours of cluster operation, without any additional deployments. Both hosts are tainted NoSchedule/NoExecute and unreachable, with the last memory stats saying RAM was 97% full. Now, after restarting these hosts, it won't be long before one of the two healthy hosts dies for the same reason and needs a forced reboot. It takes about 2 hours after a fresh install of the cluster to exhaust memory on a random node, making it unreachable. Every time, it looks like this just before the machine becomes unreachable:
Other node:
The cluster is created with this k0sctl config:
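A hypothetical sketch of such a config, consistent with the details in this thread (all four hosts as controller+worker, no node-local load balancing; addresses are placeholders):

```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  # Hypothetical: all four Ampere A1 machines run as controller+worker,
  # with no load balancer in front of the controllers
  hosts:
    - role: controller+worker
      ssh:
        address: 10.0.0.1
    - role: controller+worker
      ssh:
        address: 10.0.0.2
    - role: controller+worker
      ssh:
        address: 10.0.0.3
    - role: controller+worker
      ssh:
        address: 10.0.0.4
```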
I also created a cluster of three x86 machines with one controller and can't observe this behaviour there. So is my configuration too exotic? Please advise if I can deliver more detail to diagnose it; I'm fresh to k0s.