This repository contains the configuration for several services I run at home.
This is not intended to be reusable on the spot: if you want to run this yourself, you will need to make several adjustments (e.g. override secrets). The code is minimal and straightforward, so adjustments should not be difficult. This is shared for transparency and inspiration.
Services are exposed as:

- `$service.heinrichhartmann.net` on the tailscale VPN
- `$service.lan.heinrichhartmann.net` inside the home network

Each service is configured with a `docker-compose.yaml` and a simple `Makefile`.
This setup is designed to work in different environments, including a dedicated server machine, a Raspberry Pi, and Linux desktop environments. The main server runs NixOS, with its config managed under /nixos.
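For illustration only (this is not the actual contents of /nixos), a NixOS host for this setup might enable Docker, ZFS and tailscale along these lines:

```nix
# Hypothetical excerpt of a configuration.nix for the "svc" host.
# The real config lives under /nixos and will differ.
{ config, pkgs, ... }:
{
  networking.hostName = "svc";          # assumed host name
  networking.hostId   = "deadbeef";     # required by ZFS; value is a placeholder

  boot.supportedFilesystems = [ "zfs" ];

  virtualisation.docker.enable = true;  # services run as docker-compose stacks
  services.tailscale.enable = true;     # exposes *.heinrichhartmann.net over the VPN

  environment.systemPackages = with pkgs; [ git gnumake curl docker-compose bindfs ];
}
```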
Requirements: The repository is expected to be located at `/svc`. Basic tools (`make`, `git`, `curl`) must be installed, as well as `docker-compose` and `bindfs`. See `make install-deps` for more details.

Secrets are managed inside this repository using git-crypt.
Files ending with `.crypt` are encrypted inside the git repository, and only readable if your host is trusted. See /crypt/README.md for details on the trust model.
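As a rough sketch of how git-crypt is typically wired up (the exact patterns and keys used here live in /crypt), the encrypted paths are declared in a `.gitattributes` file and unlocked on trusted hosts:

```sh
# Sketch only; assumes the standard git-crypt workflow, not the exact setup of this repo.
#
# .gitattributes entry marking *.crypt files for transparent encryption:
#   *.crypt filter=git-crypt diff=git-crypt

git-crypt init                    # one-time setup on the originating host
git-crypt add-gpg-user KEY_ID     # trust a host by adding its GPG key (KEY_ID is a placeholder)
git-crypt unlock                  # decrypt the *.crypt files on a trusted host
```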
Open ends
The configuration now relies on AWS services for DNS and PKI.
All required AWS configuration is managed using terraform, located in `/infra/aws`.
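For illustration, a wildcard DNS record pointing at the "svc" node might be declared roughly like this (resource name, zone ID and IP are placeholders, not the actual contents of /infra/aws):

```hcl
# Hypothetical sketch of a Route53 record managed by terraform.
resource "aws_route53_record" "lan_wildcard" {
  zone_id = "Z0123456789ABCDEFG"           # placeholder hosted-zone ID
  name    = "*.lan.heinrichhartmann.net"
  type    = "A"
  ttl     = 300
  records = ["192.168.1.10"]               # placeholder LAN IP of the "svc" node
}
```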
- `*.heinrichhartmann.net` points to the "svc" node inside the tailscale network.
- `*.lan.heinrichhartmann.net` points to the "svc" node inside the local network.
- `$host.ts.heinrichhartmann.net` (requires manual updates)

Learnings. Earlier versions of this repository had self-hosted DHCP, DNS and PKI (public-key infrastructure, for https certificates) included in the config. This has the obvious drawback that all clients have to install a self-signed certificate. But even once this is done, further difficulties are caused by various clients that decide to ignore the DHCP-provided DNS servers and/or the root certificates installed by the OS. Repeat offenders were Firefox, Firefox on Android, and Safari on iPhone.
At some point I stopped trying and accepted that I will be using an external service for DNS and PKI.
Certificates are generated via letsencrypt and use DNS authentication facilitated by AWS. Generated certificates are stored under `/svc/var`. Renewal is performed using `make certs` from `/svc`.
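One way to implement such a renewal target is certbot with its Route53 DNS plugin. This is only a sketch; the actual `make certs` recipe may use a different ACME client, and the directory layout under `/svc/var` is an assumption:

```sh
# Sketch only: DNS-01 issuance via certbot + Route53.
# Requires the certbot-dns-route53 plugin and AWS credentials with Route53 access.
certbot certonly \
  --dns-route53 \
  -d 'heinrichhartmann.net' \
  -d '*.heinrichhartmann.net' \
  -d '*.lan.heinrichhartmann.net' \
  --config-dir /svc/var/letsencrypt \
  --work-dir /svc/var/letsencrypt/work \
  --logs-dir /svc/var/letsencrypt/logs
```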
We use [Traefik](https://traefik.io/) as an ingress proxy.
This tool terminates HTTPS and routes HTTP requests to the appropriate backend. Service discovery is dynamic and configured using labels attached to docker containers. It also allows configuring HTTP Basic Auth for services by adding a label (see the sketch after the example below).
Example:

```yaml
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.books.rule=HostRegexp(`books.{domain:.*}`)"
  - "traefik.http.routers.books.entrypoints=https"
  - "traefik.http.routers.books.tls=true"
```
Learnings. Prior iterations used Nginx and HAProxy for routing requests. I found these solutions harder to maintain, as they required keeping the config files in sync, and the syntax (in particular that of HAProxy) was hard to manage. Traefik offers a good out-of-the-box experience for the standard use-cases. Debugging the docker labels is sometimes a little tedious, as there is no linting or syntax checking.
Open ends.
Service configurations are stored in `./services/$name`. They typically consist of two files:

- `docker-compose.yaml` containing the actual service configuration
- `Makefile` exposing targets `start`, `stop`, `test` (a minimal sketch is shown after the command reference below)

Services can be selectively enabled/disabled using the `./svc.sh` tool.
Only enabled services are started on `make start` and on boot.
```sh
./svc.sh new $name       # create new service scaffolding from template
./svc.sh enable $name    # enable service with given name
./svc.sh list-available  # list available services
./svc.sh list            # list enabled services

make start               # start all enabled services
make stop                # stop all enabled services
make test                # test status of all services and print results
```
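A per-service Makefile of the kind referenced above might look roughly like this (a sketch under assumed conventions, not the template generated by `./svc.sh new`; the health-check URL is a placeholder, and recipes are indented with tabs):

```make
# Sketch of a per-service Makefile; the real template may differ.
start:
	docker-compose up -d

stop:
	docker-compose down

test:
	# Placeholder health check; assumes the service answers on its ingress hostname.
	curl -fsS https://books.heinrichhartmann.net/ > /dev/null && echo "books: OK"
```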
The main server where this configuration is running is equipped with two 8TB HDDs. Those are configured as a ZFS pool in a mirror configuration, allowing us to compensate for the loss of one of the disks.
We use zfs-auto-snapshot to protect against accidental deletion. Off-site backup is realized via restic to Backblaze for selected datasets.
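For illustration, the pool, snapshots and off-site backup could be wired up roughly as follows (pool name, device paths, bucket and dataset selection are placeholders, not the actual setup):

```sh
# Sketch only; device names, pool name and bucket are placeholders.
zpool create tank mirror /dev/sda /dev/sdb   # two-disk mirror: survives the loss of one disk

# zfs-auto-snapshot takes periodic snapshots of datasets with the
# com.sun:auto-snapshot property enabled (protects against accidental deletion).
zfs set com.sun:auto-snapshot=true tank

# Off-site backup of selected datasets to Backblaze B2 via restic.
# Assumes B2_ACCOUNT_ID, B2_ACCOUNT_KEY and RESTIC_PASSWORD are set in the environment.
restic -r b2:my-bucket:svc-backup backup /share/shelf /share/attic
```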
There are three main filesystems on the pool, which differ in backup and replication strategy.

- `/share/shelf`. Working data-set that is intended to be replicated to all working machines. Data in shelf is snapshotted and backed-up. Contents are mainly documents that are work in progress.
- `/share/attic`. Data in the attic is snapshotted and backed-up. Content includes archived projects, the private photo collection, and important media.
- `/share/garage`. Data in the garage is snapshotted but not backed-up. Here goes the long tail of less valuable data I would not mind losing.

The naming of the datasets reflects the different storage tiers that I use for personal stuff.
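As a sketch of this layout (pool name and dataset options are assumptions, not the actual configuration), the three shares map to ZFS datasets with their own mountpoints:

```sh
# Sketch: the three shares as ZFS datasets under a placeholder pool name.
zfs create -o mountpoint=/share/shelf  tank/shelf    # replicated, snapshotted, backed up
zfs create -o mountpoint=/share/attic  tank/attic    # snapshotted, backed up
zfs create -o mountpoint=/share/garage tank/garage   # snapshotted only; not in the restic paths above
```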
Open Ends