Proposed Changes

Fixes https://github.com/techno-tim/k3s-ansible/issues/230

When using LXC containers on Proxmox with a ZFS pool, the overlay driver prevents k3s from starting, which in turn causes the deployment to fail. This PR corrects that behavior by checking whether /var/lib is backed by ZFS and, if so, switching the snapshot driver to native.

The check is part of the lxc role, so it will not affect any bare-metal deployments. This is intentional, as I did not validate bare-metal deployments.
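As an illustration, a check like this could be implemented in two Ansible tasks. The sketch below is a minimal version under stated assumptions, not the exact code in this PR: the task names, the use of `stat -f` for filesystem detection, and the `extra_server_args` variable are assumptions; the `--snapshotter` flag itself is a real k3s option.

```yaml
# Minimal sketch (assumptions noted above, not the exact tasks in this PR):
# detect a ZFS-backed /var/lib and switch k3s to the native snapshotter.
- name: Detect the filesystem type backing /var/lib
  ansible.builtin.command: stat -f -c %T /var/lib
  register: var_lib_fstype
  changed_when: false

- name: Use the native snapshotter when /var/lib is on ZFS
  ansible.builtin.set_fact:
    extra_server_args: "{{ extra_server_args | default('') }} --snapshotter=native"
  when: var_lib_fstype.stdout == 'zfs'
```

With something like this in place, k3s servers on ZFS-backed hosts start with `--snapshotter=native`, while every other host keeps the default overlayfs snapshotter.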
Checklist
- [x] Tested locally
- [x] Ran site.yml playbook
- [x] Ran reset.yml playbook
- [x] Did not add any unnecessary changes
- [x] Ran pre-commit install at least once before committing
Don't merge this yet. I can't be sure whether this is just an edge case in my scenario. If we get reports showing this is more than an edge case, then we have a fix; otherwise it will be relegated to the history of PRs.