Closed mosgjig closed 1 year ago
Closing issue... I went in and purged microk8s from each server and re-ran the role, with the following result:
ansible cluster -i inventories/microk8s/hosts -m shell -a "microk8s status" --limit pimaster
pi | CHANGED | rc=0 >>
microk8s is running
high-availability: yes
datastore master nodes: 192.168.1.226:19001 192.168.1.227:19001 192.168.1.229:19001
datastore standby nodes: none
addons:
enabled:
dashboard # (core) The Kubernetes dashboard
dns # (core) CoreDNS
ha-cluster # (core) Configure high availability on the current node
helm # (core) Helm - the package manager for Kubernetes
helm3 # (core) Helm 3 - the package manager for Kubernetes
host-access # (core) Allow Pods connecting to Host services smoothly
hostpath-storage # (core) Storage class; allocates storage from host directory
ingress # (core) Ingress controller for external access
metrics-server # (core) K8s Metrics Server for API access to service metrics
rbac # (core) Role-Based Access Control for authorisation
registry # (core) Private image registry exposed on localhost:32000
storage # (core) Alias to hostpath-storage add-on, deprecated
disabled:
cert-manager # (core) Cloud native certificate management
community # (core) The community addons repository
kube-ovn # (core) An advanced network fabric for Kubernetes
mayastor # (core) OpenEBS MayaStor
metallb # (core) Loadbalancer for your Kubernetes cluster
minio # (core) MinIO object storage
observability # (core) A lightweight observability stack for logs, traces and metrics
prometheus # (core) Prometheus operator for monitoring and logging
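For anyone hitting the same problem, the purge step described above can be sketched as a small playbook. This is an assumption about what "purged microk8s" involved (a plain `snap remove --purge` on every node), and the `cluster` group name is taken from the ad-hoc command earlier in the thread:

```yaml
# Hypothetical cleanup playbook -- wipes microk8s and all of its state
# from every node before re-running the install role.
- hosts: cluster
  become: true
  tasks:
    - name: Purge microk8s from the node (removes snap data and config)
      ansible.builtin.command: snap remove microk8s --purge
```

After this runs cleanly on all three nodes, re-running the original role rebuilt the cluster with `high-availability: yes`, as shown in the status output above.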
Attempting to create a cluster with three Raspberry Pis.
The playbook to run the role:
The logs don't indicate any issues when adding nodes to the master, but a status check on the master doesn't show HA:
I've run the role multiple times to no avail... output from the last run:
Am I missing anything?
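A few commands that can help confirm whether HA actually formed on the master; these are run against a live cluster, and the dqlite service name is from recent MicroK8s releases, so it may differ on older ones:

```shell
# Overall node state, including the high-availability line
microk8s status --wait-ready

# All three nodes should appear and report Ready
microk8s kubectl get nodes -o wide

# Logs from the dqlite datastore, where HA (quorum) problems usually surface
journalctl -u snap.microk8s.daemon-k8s-dqlite -n 50
```

HA is reported only once at least three nodes have joined the dqlite datastore, so `datastore master nodes` should list three addresses when it has formed correctly.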