equinix-labs / terraform-equinix-metal-k3s

Manage K3s (k3s.io) region clusters on Equinix Metal
https://registry.terraform.io/modules/equinix/k3s/metal/latest?tab=readme
Apache License 2.0

K3s on Equinix Metal


Table of contents

* [Introduction](#introduction)
* [Prerequisites](#prerequisites)
* [Variable requirements](#variable-requirements)
* [Demo application](#demo-application)
* [Notes](#notes)
* [Example scenarios](#example-scenarios)
  * [Single node in default Metro](#single-node-in-default-metros)
  * [Single node in 2 different Metros](#single-node-in-2-different-metros)
  * [1 x HA cluster with 3 nodes & 4 public IPs + 2 x Single Node cluster (same Metro), a Global IPV4 and the demo app deployed](#1-x-ha-cluster-with-3-nodes--4-public-ips--2-x-single-node-cluster-same-metro-a-global-ipv4-and-the-demo-app-deployed)
* [Usage](#usage)
* [Accessing the clusters](#accessing-the-clusters)
* [Terraform module documentation](#terraform-module-documentation)
  * [Requirements](#requirements-1)
  * [Providers](#providers)
  * [Modules](#modules)
  * [Resources](#resources)
  * [Inputs](#inputs)
  * [Outputs](#outputs)
* [Contributing](#contributing)
* [License](#license)

Introduction

This is a Terraform project for deploying K3s on Equinix Metal, intended to let you quickly spin up and tear down K3s clusters.

K3s is a fully compliant, lightweight Kubernetes distribution focused on Edge, IoT, and ARM, or just on situations where a PhD in K8s clusterology is infeasible.

:warning: This repository is Experimental, meaning that it is based on untested ideas or techniques that are not yet established or finalized, or that it involves a radically new and innovative style! This means that support is best effort (at best!) and we strongly encourage you NOT to use this in production.

This Terraform project supports a wide variety of scenarios, mostly focused on Edge, such as single-node clusters, clusters spread across different Metros, HA clusters with extra public IPs, and a global anycast IP shared by all clusters. More on that later.

Prerequisites

:warning: Before creating the assets, verify that there are enough servers available in the chosen Metros by visiting the Capacity Dashboard. See more about inventory and capacity in the official documentation.

Variable requirements

The module is flexible enough to allow customization of the different scenarios. You can define as many clusters, with as many different topologies, as you want. The main variables, as defined in examples/demo_cluster, are:

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| metal_auth_token | Your Equinix Metal API key | string | n/a | yes |
| metal_project_id | Your Equinix Metal Project ID | string | n/a | yes |
| clusters | K3s cluster definition | list of K3s cluster objects | n/a | yes |

:note: The Equinix Metal Auth Token should be defined in a provider block in your own Terraform config. In this project, that is done in examples/demo_cluster/, not in the root module. This pattern facilitates Implicit Provider Inheritance and better reuse of Terraform modules.
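
For reference, a minimal sketch of such a provider block (the actual configuration lives in examples/demo_cluster/; the variable name matches the table above):

# Declared next to the module call (as in examples/demo_cluster), not in the root module,
# so the provider configuration is inherited implicitly by the module.
provider "equinix" {
  auth_token = var.metal_auth_token
}

variable "metal_auth_token" {
  description = "Your Equinix Metal API key"
  type        = string
  sensitive   = true
}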

For more details on the variables, see the Terraform module documentation section.

The default variables deploy a single-node K3s cluster in the FR Metro, using Equinix Metal's c3.small.x86 plan. You just need to add the cluster name as:

metal_auth_token = "redacted"
metal_project_id = "redacted"
clusters         = [
  {
    name = "FR DEV Cluster"
  }
]

Change the default variables at your own risk; see Example scenarios and the K3s module README.md file for more details.

:warning: The hostnames are generated from the cluster name and the control_plane_hostnames & node_hostnames variables (normalized), so beware of the length of those variables.
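
For example, a cluster entry that overrides both hostname prefixes could look like this (the values below are illustrative):

clusters = [
  {
    name                    = "FR DEV Cluster"
    control_plane_hostnames = "fr-dev-cp"   # prefix for the control plane hostnames
    node_hostnames          = "fr-dev-node" # prefix for the worker node hostnames
    node_count              = 2
  }
]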

You can create a terraform.tfvars file with the appropriate content or use TF_VAR_ environment variables.

:warning: The only OS that has been tested is Debian 11.

Demo application

If enabled (deploy_demo = true), a demo application (hello-kubernetes) is deployed on all the clusters. The global IPv4 is used by the K3s Traefik Ingress Controller to expose that application, and the load is spread across all the clusters, meaning that different requests will be routed to different clusters. See the MetalLB documentation for more information about how BGP load balancing works.
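
To try it, enable both flags in your tfvars (a minimal sketch; both default to false):

global_ip   = true # reserve a global anycast IPv4 shared by all clusters
deploy_demo = true # deploy the hello-kubernetes demo behind the Traefik Ingress Controller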

Example scenarios

Single node in default Metro

metal_auth_token = "redacted"
metal_project_id = "redacted"
clusters         = [
  {
    name = "FR DEV Cluster"
  }
]

This will produce something similar to:

Outputs:

k3s_api = {
  "FR DEV Cluster" = "145.40.94.83"
}

Single node in 2 different Metros

metal_auth_token = "redacted"
metal_project_id = "redacted"
clusters         = [
  {
    name = "FR DEV Cluster"
  },
  {
    name = "SV DEV Cluster"
    metro = "SV"
  }
]

This will produce something similar to:

Outputs:

k3s_api = {
  "FR DEV Cluster" = "145.40.94.83",
  "SV DEV Cluster" = "86.109.11.205"
}

1 x HA cluster with 3 nodes & 4 public IPs + 2 x Single Node cluster (same Metro), a Global IPV4 and the demo app deployed

metal_auth_token = "redacted"
metal_project_id = "redacted"
clusters = [{
  name = "SV Production"
  ip_pool_count = 4
  k3s_ha = true
  metro = "SV"
  node_count = 3
},
{
  name = "FR Dev 1"
  metro = "FR"
},
{
  name = "FR Dev 2"
  metro = "FR"
}
]

global_ip        = true
deploy_demo      = true

This will produce something similar to:

Outputs:

anycast_ip = "147.75.40.52"
demo_url   = "http://hellok3s.147.75.40.52.sslip.io"
k3s_api = {
  "FR Dev 1" = "145.40.94.83",
  "FR Dev 2" = "147.75.192.250",
  "SV Production" = "86.109.11.205"
}

Usage

git clone https://github.com/equinix-labs/terraform-equinix-metal-k3s.git
cd terraform-equinix-metal-k3s/examples/demo_cluster
terraform init -upgrade
terraform plan -var-file="foobar.tfvars"
terraform apply -var-file="foobar.tfvars" --auto-approve

The output will show the IPs or hostnames required to use the clusters:

...
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Outputs:

k3s_api = {
  "FR example" = "145.40.94.83"
}
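
Alternatively, the module published on the Terraform Registry (linked at the top of this README) can be consumed directly from your own Terraform configuration instead of cloning this repository. A minimal, illustrative sketch, assuming an equinix provider block like the one shown in the Variable requirements section is defined alongside it:

module "k3s" {
  # Module published as equinix/k3s/metal on the Terraform Registry
  source = "equinix/k3s/metal"

  metal_project_id = var.metal_project_id
  clusters = [
    {
      name = "FR DEV Cluster"
    }
  ]
}

variable "metal_project_id" {
  description = "Your Equinix Metal Project ID"
  type        = string
}

The inputs and outputs available to such a module call are listed in the Terraform module documentation section below.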

Accessing the clusters

As the project's SSH key has been injected into the nodes, the clusters can be accessed as follows:

(
MODULENAME="demo_cluster"
IFS=$'\n'
# Loop over every cluster in the k3s_api output and list its nodes over SSH
for cluster in $(terraform output -json | jq -r ".${MODULENAME}.value.k3s_api | keys[]"); do
  IP=$(terraform output -json | jq -r ".${MODULENAME}.value.k3s_api[\"${cluster}\"]")
  ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root@${IP} kubectl get nodes
done
)

NAME         STATUS   ROLES                  AGE     VERSION
ny-k3s-aio   Ready    control-plane,master   9m35s   v1.26.5+k3s1
NAME         STATUS   ROLES                  AGE     VERSION
sv-k3s-aio   Ready    control-plane,master   10m     v1.26.5+k3s1

To access the clusters from outside, copy the K3s kubeconfig file to any host and replace the server field with the IP of the K3s API:

(
MODULENAME="demo_cluster"
IFS=$'\n'
# Copy each cluster's kubeconfig locally, point it at the public API IP and verify access
for cluster in $(terraform output -json | jq -r ".${MODULENAME}.value.k3s_api | keys[]"); do
  IP=$(terraform output -json | jq -r ".${MODULENAME}.value.k3s_api[\"${cluster}\"]")
  export KUBECONFIG="./$(echo ${cluster}| tr -c -s '[:alnum:]' '-')-kubeconfig"
  scp -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root@${IP}:/etc/rancher/k3s/k3s.yaml ${KUBECONFIG}
  sed -i "s/127.0.0.1/${IP}/g" ${KUBECONFIG}
  chmod 600 ${KUBECONFIG}
  kubectl get nodes
done
)

NAME         STATUS   ROLES                  AGE     VERSION
ny-k3s-aio   Ready    control-plane,master   8m41s   v1.26.5+k3s1
NAME         STATUS   ROLES                  AGE     VERSION
sv-k3s-aio   Ready    control-plane,master   9m20s   v1.26.5+k3s1

:warning: macOS (BSD) sed behaves differently; it needs to be invoked as sed -i "" "s/127.0.0.1/${IP}/g" ${KUBECONFIG} instead:

(
MODULENAME="demo_cluster"
IFS=$'\n'
for cluster in $(terraform output -json | jq -r ".${MODULENAME}.value.k3s_api | keys[]"); do
  IP=$(terraform output -json | jq -r ".${MODULENAME}.value.k3s_api[\"${cluster}\"]")
  export KUBECONFIG="./$(echo ${cluster}| tr -c -s '[:alnum:]' '-')-kubeconfig"
  scp -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root@${IP}:/etc/rancher/k3s/k3s.yaml ${KUBECONFIG}
  sed -i "" "s/127.0.0.1/${IP}/g" ${KUBECONFIG}
  chmod 600 ${KUBECONFIG}
  kubectl get nodes
done
)

NAME         STATUS   ROLES                  AGE     VERSION
ny-k3s-aio   Ready    control-plane,master   8m41s   v1.26.5+k3s1
NAME         STATUS   ROLES                  AGE     VERSION
sv-k3s-aio   Ready    control-plane,master   9m20s   v1.26.5+k3s1

Terraform module documentation

Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.3 |
| equinix | >= 1.14.2 |

Providers

| Name | Version |
|------|---------|
| equinix | >= 1.14.2 |

Modules

| Name | Source | Version |
|------|--------|---------|
| k3s_cluster | ./modules/k3s_cluster | n/a |

Resources

| Name | Type |
|------|------|
| equinix_metal_reserved_ip_block.global_ip | resource |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| metal_project_id | Equinix Metal Project ID | `string` | n/a | yes |
| clusters | K3s cluster definition | <pre>list(object({<br>  name = optional(string, "K3s demo cluster")<br>  metro = optional(string, "FR")<br>  plan_control_plane = optional(string, "c3.small.x86")<br>  plan_node = optional(string, "c3.small.x86")<br>  node_count = optional(number, 0)<br>  k3s_ha = optional(bool, false)<br>  os = optional(string, "debian_11")<br>  control_plane_hostnames = optional(string, "k3s-cp")<br>  node_hostnames = optional(string, "k3s-node")<br>  custom_k3s_token = optional(string, "")<br>  ip_pool_count = optional(number, 0)<br>  k3s_version = optional(string, "")<br>  metallb_version = optional(string, "")<br>}))</pre> | <pre>[<br>  {}<br>]</pre> | no |
| deploy_demo | Deploys a simple demo using a global IP as ingress and a hello-kubernetes pods | `bool` | `false` | no |
| global_ip | Enables a global anycast IPv4 that will be shared for all clusters in all metros | `bool` | `false` | no |

Outputs

| Name | Description |
|------|-------------|
| anycast_ip | Global IP shared across Metros |
| demo_url | URL of the demo application to demonstrate a global IP shared across Metros |
| k3s_api | List of Clusters => K3s APIs |

Contributing

If you would like to contribute to this module, see the CONTRIBUTING page.

License

Apache License, Version 2.0. See LICENSE.