techno-tim / k3s-ansible

The easiest way to bootstrap a self-hosted High Availability Kubernetes cluster. A fully automated HA k3s etcd install with kube-vip, MetalLB, and more. Build. Destroy. Repeat.
https://technotim.live/posts/k3s-etcd-ansible/
Apache License 2.0
2.41k stars 1.05k forks

kube-vip service load balancer #127

Closed sharifm-informatica closed 2 years ago

sharifm-informatica commented 2 years ago

Proposed Changes

The main motive for this is that the MetalLB install was always failing on upgrades and reruns of the site.yml playbook. Often an upgrade in the script or in k3s requires a complete reset.yml and a reinstall of the whole cluster to avoid bugs and failures in MetalLB. The Ansible playbook was not very idempotent. This possibly solves #126.

Another advantage is saving resources, mostly RAM and CPU, and reducing the L2 ARP traffic coming from running both kube-vip (for the control plane) and MetalLB (for the service load balancer).

Currently the kube-vip IP range takes its value from the same 'metal_lb_ip_range' variable to maintain backward compatibility with existing installations. In our tests this version does not break an existing install. Thanks for the awesome work.
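For illustration, reusing 'metal_lb_ip_range' could look like the following kube-vip cloud provider ConfigMap. This is a hedged sketch, not the exact manifest in this PR: the ConfigMap name `kubevip` and the `range-global` key follow kube-vip's documented cloud-provider convention, and the address range shown is a placeholder that would be templated from the Ansible variable.

```yaml
# Sketch: the kube-vip cloud provider reads its Service IP pool from a
# ConfigMap named "kubevip" in kube-system. "range-global" here would be
# templated from the existing metal_lb_ip_range Ansible variable.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevip
  namespace: kube-system
data:
  range-global: 192.168.30.80-192.168.30.90   # placeholder range
```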

Checklist

sharifm-informatica commented 2 years ago

Molecule tests have not been updated for this change; it might fail the MetalLB tests.

timothystewart6 commented 2 years ago

Hey, thanks! I thought you saw this? https://github.com/techno-tim/k3s-ansible/issues/116 We'll consider it in the future but for now I am going to stick with metal-lb! Thank you!

timothystewart6 commented 2 years ago

Also, even if we were to switch, we'd expect the tests to be updated and passing too.

clibequilibrium commented 2 years ago

Proposed Changes

  • Removed MetalLB from the playbook
  • Enabled kube-vip's service load balancer functionality
  • Added the kube-vip cloud provider for on-prem IP distribution
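As a sketch of the second bullet, enabling the service load balancer in kube-vip is typically done through environment variables on its DaemonSet container. The variable names below follow kube-vip's documented flags; this is illustrative, not the exact diff in this PR:

```yaml
# Illustrative fragment of a kube-vip DaemonSet container spec.
# svc_enable turns on the Service (LoadBalancer) advertisement,
# alongside the existing control-plane VIP settings.
env:
  - name: cp_enable        # keep the control-plane VIP function
    value: "true"
  - name: svc_enable       # enable the Service LoadBalancer function
    value: "true"
  - name: vip_arp          # advertise addresses via ARP (layer 2)
    value: "true"
```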

The main motive for this is that the MetalLB install was always failing on upgrades and reruns of the site.yml playbook. Often an upgrade in the script or in k3s requires a complete reset.yml and a reinstall of the whole cluster to avoid bugs and failures in MetalLB. The Ansible playbook was not very idempotent. This possibly solves #126.

Another advantage is saving resources, mostly RAM and CPU, and reducing the L2 ARP traffic coming from running both kube-vip (for the control plane) and MetalLB (for the service load balancer).

Currently the kube-vip IP range takes its value from the same 'metal_lb_ip_range' variable to maintain backward compatibility with existing installations. In our tests this version does not break an existing install. Thanks for the awesome work.

Checklist

  • [x] Tested locally
  • [x] Ran site.yml playbook
  • [x] Ran reset.yml playbook
  • [x] Did not add any unnecessary changes
  • [x] 🚀

Thanks for the PR