The easiest way to bootstrap a self-hosted High Availability Kubernetes cluster. A fully automated HA k3s etcd install with kube-vip, MetalLB, and more. Build. Destroy. Repeat.
Expected Behavior
The playbook proceeds past the Enable and check K3s service task and completes the install.
Current Behavior
When running the playbook, it hangs at the k3s_agent : Enable and check K3s service task. I tried several times to reset and re-run, but without success. I checked the discussion here, but it did not fix my issue.

If I use my master node's IP address as the endpoint, as FrostyFitz suggested in that discussion, it works; if I put any other address for the endpoint, it does not. It's almost as if the VIP address is never created: it does not respond to ping. I've checked all the nodes and they all have eth0. My token is also correct.

I tried using the same network for the virtual IP and the IP range (10.193.1.1/24) as well as a different network (10.193.20.1/24) to have more IPs, but the result is the same.
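To illustrate what I mean by the two networks, this is roughly how the relevant variables are laid out. The addresses below are placeholders rather than my exact values, and the variable names are the ones I understand the playbook's sample group_vars/all.yml to use for kube-vip and MetalLB:

```yaml
# Sketch only: placeholder addresses, variable names assumed from the sample all.yml.
flannel_iface: eth0                          # the interface that exists on every node
apiserver_endpoint: 10.193.1.50              # kube-vip virtual IP -- this never answers ping for me
k3s_token: "redacted-but-identical-on-all-nodes"
metal_lb_ip_range: 10.193.1.80-10.193.1.90   # MetalLB pool; I also tried addresses in 10.193.20.0/24 here
```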
Steps to Reproduce
Clone the project
Update variables
Run the playbook (roughly the commands sketched below)
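The commands I run are along these lines; the repository URL, the sample inventory paths, and the my-cluster directory name are assumptions based on how I set the project up, not copied from my shell history:

```sh
# Rough reproduction steps (repo URL and paths assumed from the project's sample layout).
git clone https://github.com/techno-tim/k3s-ansible.git
cd k3s-ansible
cp -R inventory/sample inventory/my-cluster
# edit inventory/my-cluster/hosts.ini and inventory/my-cluster/group_vars/all.yml
ansible-playbook site.yml -i inventory/my-cluster/hosts.ini
# when it hangs, I reset and try again
ansible-playbook reset.yml -i inventory/my-cluster/hosts.ini
```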
Context (variables)
Operating system: Ubuntu 22.04
Hardware: Running 5 VMs on Proxmox. All of them were created with Terraform and a cloud-init template.
Variables Used
all.yml
Hosts
host.ini
Logs
On the master node
On the worker node
Possible Solution