This is a repository for my home infrastructure and Kubernetes cluster. I try to adhere to Infrastructure as Code (IaC) and GitOps practices using tools like Terraform, Kubernetes, Flux, Renovate and GitHub Actions.
There is a template over at onedr0p/flux-cluster-template if you want to follow along with some of the practices I use here.
This semi hyper-converged cluster runs Talos Linux, an immutable and ephemeral Linux distribution built for Kubernetes, deployed on bare-metal Intel NUCs. Rook provides my workloads with persistent block, object, and file storage, while a separate server provides file storage for my media.
🔸 Click here to see my Talos configuration.
Flux watches the clusters in my kubernetes folder (see Directories below) and makes changes to my clusters based on the state of my Git repository.

The way Flux works for me here is that it recursively searches the `k8s/clusters/${cluster}` folder until it finds the top-most `kustomization.yaml` per directory, and then applies all the resources listed in it. That `kustomization.yaml` will generally only have a namespace resource and one or many Flux kustomizations. Those Flux kustomizations will generally have a `HelmRelease` or other resources related to the application underneath it which will be applied.
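For illustration, a top-level `kustomization.yaml` of the kind described above might look like this (the path and file names are hypothetical, not copied from the repo):

```yaml
# k8s/clusters/cluster-0/apps/myapp/kustomization.yaml (hypothetical path)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ./namespace.yaml # the Namespace for the app
  - ./ks.yaml        # one or more Flux Kustomizations for the app
```

Flux applies everything listed under `resources`, so the namespace and the Flux Kustomizations it references are created together.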
Renovate watches my entire repository looking for dependency updates; when they are found, a PR is automatically created. When certain PRs are merged, Flux applies the changes to my cluster.
This Git repository contains the following directories under the `k8s` folder.
```
📁 k8s
├── 📁 cluster/cluster-0    # main cluster
│   ├── 📁 crds             # custom resources
│   ├── 📁 flux             # core flux configuration
│   └── 📁 manifests        # applications
└── 📁 global               # shared resources
    ├── 📁 repositories     # helm and git repositories
    ├── 📁 vars             # common variables
    └── 📁 flux             # core flux configuration
```
This is a high-level look at how Flux deploys my applications with dependencies. Below there are 3 apps: `postgres`, `authentik` and `weave-gitops`. `postgres` is the first app that needs to be running and healthy before `authentik` and `weave-gitops`. Once `postgres` is healthy, `authentik` will be deployed, and after that is healthy, `weave-gitops` will be deployed.
```mermaid
graph TD;
  id1>Kustomization: cluster] -->|Creates| id2>Kustomization: cluster-apps];
  id2>Kustomization: cluster-apps] -->|Creates| id3>Kustomization: postgres];
  id2>Kustomization: cluster-apps] -->|Creates| id6>Kustomization: authentik];
  id2>Kustomization: cluster-apps] -->|Creates| id8>Kustomization: weave-gitops];
  id2>Kustomization: cluster-apps] -->|Creates| id5>Kustomization: postgres-cluster];
  id3>Kustomization: postgres] -->|Creates| id4[HelmRelease: postgres];
  id5>Kustomization: postgres-cluster] -->|Depends on| id3>Kustomization: postgres];
  id5>Kustomization: postgres-cluster] -->|Creates| id10[Postgres Cluster];
  id6>Kustomization: authentik] -->|Creates| id7(HelmRelease: authentik);
  id6>Kustomization: authentik] -->|Depends on| id5>Kustomization: postgres-cluster];
  id8>Kustomization: weave-gitops] -->|Creates| id9(HelmRelease: weave-gitops);
  id8>Kustomization: weave-gitops] -->|Depends on| id5>Kustomization: postgres-cluster];
  id9(HelmRelease: weave-gitops) -->|Depends on| id7(HelmRelease: authentik);
```
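This ordering is driven by the `dependsOn` field on a Flux Kustomization. A minimal sketch of what the `authentik` one might contain (the path, intervals, and source name are assumptions, not copied from the repo):

```yaml
# ks.yaml for authentik (hypothetical) — Flux waits for postgres-cluster first
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: authentik
  namespace: flux-system
spec:
  dependsOn:
    - name: postgres-cluster # must be ready before authentik reconciles
  interval: 30m
  path: ./k8s/cluster/cluster-0/manifests/authentik
  prune: true
  sourceRef:
    kind: GitRepository
    name: home-ops
```

Flux will not reconcile this Kustomization until every entry in `dependsOn` reports ready, which is how the postgres → authentik → weave-gitops chain above is enforced.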
While most of my infrastructure and workloads are self-hosted, I do rely upon the cloud for certain key parts of my setup. This saves me from having to worry about two things: (1) chicken/egg scenarios, and (2) services I critically need whether my cluster is online or not.
The alternative solution to these two problems would be to host a Kubernetes cluster in the cloud and deploy applications like Vaultwarden and Uptime Kuma. However, maintaining another cluster and monitoring another group of workloads is more time and effort than I am willing to put in.
| Service | Use | Cost |
|---|---|---|
| Cloudflare | Domain and S3 | ~$30/yr |
| GitHub | Hosting this repository and continuous integration/deployments | Free |
| NextDNS | My router's DNS server, which includes ad blocking | ~$20/yr |
| Fly.io | Two small machines which host my password manager and Uptime Kuma | Free (total spend is below $5) |
| Device | Count | OS Disk Size | Data Disk Size | RAM | Operating System | Purpose |
|---|---|---|---|---|---|---|
| Intel NUC8i3BEH | 1 | 500GB SSD | 500GB NVMe SSD (rook-ceph) | 64GB | Talos Linux | Control-plane/Worker |
| Intel NUC8i5BEH | 2 | 500GB SSD | 500GB NVMe SSD (rook-ceph) | 64GB | Talos Linux | Control-plane/Workers |
| Jonsbo N3 custom build | 1 | 256GB SSD | 4x18TB ZFS mirror (tank) | 32GB | NixOS | NFS + backup server |
| PiKVM (Arch) | 1 | - | - | - | - | Network KVM |
| TESmart 8-Port KVM Switch | 1 | - | - | - | - | Network KVM (PiKVM) |
| UniFi UDM-Pro-SE | 1 | - | - | - | - | Routing/Firewall/IPS/DNS |
| UniFi USW-Pro-Max-24-PoE | 1 | - | - | - | - | Core switch |
| UniFi USW-Enterprise-8-PoE | 1 | - | - | - | - | Attic switch |
| APC SMT2200RM2U w/ NIC | 1 | - | - | - | - | UPS |
Thanks to all the people who donate their time to the Home Operations Discord community. Be sure to check out kubesearch.dev for ideas on how to deploy applications or get ideas on what you may deploy.
See my awful commit history
See LICENSE