russmedia / terraform-google-kubernetes-cluster

GKE Kubernetes cluster with node pool submodule
MIT License

This Repo Is No Longer Maintained

Please consider migrating to official terraform-google-kubernetes-engine module: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine.

Overview

GKE Kubernetes module with node pools submodule

Kubernetes diagram on GKE

Table of contents

Requirements

Please use google provider version = "~> 3.14"

If you need more control over the versioning of your cluster, it is advised to specify "min_master_version" for the cluster and "version" in node_pools. Otherwise GKE will use its default version, which may change in the near future.
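For example, a matching provider pin might look like this (a minimal sketch; the project and region arguments simply reuse the variables from the usage examples below):

provider "google" {
  version = "~> 3.14"
  project = var.project
  region  = var.google_region
}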

Compatibility

This module is meant for use with Terraform 0.12. If you haven't upgraded and need a Terraform 0.11.x-compatible version of this module, the last released version intended for Terraform 0.11.x is 3.0.0.
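If you want Terraform itself to enforce this, you can add a version constraint to your configuration (a minimal sketch):

terraform {
  required_version = ">= 0.12"
}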

1. Features

2. Usage

cluster with default node pool on preemptible nodes

module "primary-cluster" {
  name                   = terraform.workspace
  source                 = "russmedia/kubernetes-cluster/google"
  version                = "4.0.0"
  region                 = var.google_region
  zones                  = var.google_zones
  project                = var.project
  environment            = terraform.workspace 
  min_master_version     = var.master_version
}

cluster with explicit definition of node pools (optional)

module "primary-cluster" {
  name                   = "my-cluster"
  source                 = "russmedia/kubernetes-cluster/google"
  version                = "4.0.0"
  region                 = var.google_region
  zones                  = var.google_zones
  project                = var.project
  environment            = terraform.workspace
  min_master_version     = var.master_version
  node_pools             = var.node_pools
}

and in variables:

node_pools = [
  {
    name                = "default-pool"
    initial_node_count  = 1
    min_node_count      = 1
    max_node_count      = 1
    version             = "1.15.11-gke.3"
    image_type          = "COS"
    machine_type        = "n1-standard-1"
    preemptible         = true
    tags                = "tag1 nat"
  },
]

Note: at least one node pool must have initial_node_count > 0.

Since version 5.0.0 the module supports no_schedule_taint and no_execute_taint - they add the schedulable=equals:NoSchedule or executable=equals:NoExecute taints, so only pods with a matching toleration are scheduled on those nodes. Please see the Kubernetes docs on taints and tolerations for more info.
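A sketch of enabling both taints on the module - treating these variables as booleans is an assumption here, so check the module's variables.tf for the exact names and types:

module "primary-cluster" {
  name               = terraform.workspace
  source             = "russmedia/kubernetes-cluster/google"
  version            = "5.0.0"
  region             = var.google_region
  zones              = var.google_zones
  project            = var.project
  environment        = terraform.workspace
  min_master_version = var.master_version

  # Assumed to be boolean flags; verify against the module's variables.tf.
  no_schedule_taint = true
  no_execute_taint  = true
}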

Example usage with "NoSchedule":

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  tolerations:
    - key: "schedulable"
      operator: "Exists"
      effect: "NoSchedule"

Example usage with "NoExecute":

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  tolerations:
    - key: "executable"
      operator: "Exists"
      effect: "NoExecute"

Note: if a node has both the NoExecute and NoSchedule taints, the pod needs both tolerations to be allowed there.

multiple clusters

Due to current limitations of the depends_on feature with modules, it is advised to create the VPC network separately and reference it when defining the modules, e.g.:

resource "google_compute_network" "default" {
  name                    = terraform.workspace
  auto_create_subnetworks = "false"
  project                 = var.project
}
module "primary-cluster" {
  name        = "primary-cluster"
  source      = "russmedia/kubernetes-cluster/google"
  version     = "4.0.0"
  region      = var.google_region
  zones       = var.google_zones
  project     = var.project
  environment = terraform.workspace
  network     = google_compute_network.default.name
}
module "secondary-cluster" {
  name                                 = "secondary-cluster"
  source                               = "russmedia/kubernetes-cluster/google"
  version                              = "4.0.0"
  region                               = var.google_region
  zones                                = var.google_zones
  project                              = var.project
  environment                          = terraform.workspace
  network                              = google_compute_network.default.name
  nodes_subnet_ip_cidr_range           = "10.101.0.0/24"
  nodes_subnet_container_ip_cidr_range = "172.21.0.0/16"
  nodes_subnet_service_ip_cidr_range   = "10.201.0.0/16"
}

Note: secondary clusters need to have nodes_subnet_ip_cidr_range, nodes_subnet_container_ip_cidr_range and nodes_subnet_service_ip_cidr_range defined, otherwise you will run into IP conflicts. Also, only one cluster can have nat_enabled set to 'true'.

add NAT module (optional and deprecated - please use the built-in NAT option, variable "nat_enabled")

Adding the NAT module for a stable outgoing Kubernetes IP:

module "nat" {
  source     = "github.com/GoogleCloudPlatform/terraform-google-nat-gateway?ref=1.2.0"
  region     = var.google_region
  project    = var.project
  network    = terraform.workspace
  subnetwork = "${terraform.workspace}-nodes-subnet"
  tags       = ["nat-${terraform.workspace}"]
}

Note: remember to add the tag nat-${terraform.workspace} to the primary cluster tags and node pools so the NAT module can open routing for the nodes.
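For example, mirroring the node_pools variable from above for a workspace named "production" (the workspace name is only illustrative):

node_pools = [
  {
    name               = "default-pool"
    initial_node_count = 1
    min_node_count     = 1
    max_node_count     = 1
    version            = "1.15.11-gke.3"
    image_type         = "COS"
    machine_type       = "n1-standard-1"
    preemptible        = true
    tags               = "tag1 nat-production"  # nat-${terraform.workspace} for the "production" workspace
  },
]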

using an existing or creating a new vpc network

Variable "network" is controling network creation.

subnetworks

Terraform always creates a subnetwork. The subnetwork name follows the pattern ${terraform.workspace}-${var.name}-nodes-subnet. If you already have a subnetwork and would like to keep its name, please define the "subnetwork_name" variable.
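For example (the subnetwork name is hypothetical, and the remaining module arguments are omitted for brevity):

module "primary-cluster" {
  # ... other arguments as in the examples above ...
  subnetwork_name = "my-existing-nodes-subnet"  # keeps the existing subnetwork name instead of the generated one
}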

zonal and regional clusters

Regional clusters are still in beta; please use them with caution. You can enable them by setting the variable "regional_cluster" to true. Warning - possible data loss! Changing this setting on a running cluster will force you to recreate it.
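A minimal sketch of opting in (other module arguments omitted for brevity):

module "primary-cluster" {
  # ... other arguments as in the examples above ...

  # Switching between zonal and regional on a running cluster forces recreation.
  regional_cluster = true
}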

cloud nat

You can configure your cluster to sit behind NAT and share the same static external IP between pods. You can enable it by setting the variable "nat_enabled" to true.

Warning - possible data loss! - changing this setting on a running cluster will force you to recreate it.
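For example (again omitting the other module arguments for brevity):

module "primary-cluster" {
  # ... other arguments as in the examples above ...

  # Routes outgoing traffic through Cloud NAT behind one static external IP;
  # changing this on a running cluster forces recreation.
  nat_enabled = true
}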

3. Migration

To migrate from a 1.x.x module version to 2.x.x, follow these steps:

Important note: when upgrading, the default pool will be deleted. Before migration, please extend the size of the non-default pools so that all applications can be scheduled without the default node pool.
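As a sketch, you might temporarily raise the size of a non-default pool in your node_pools variable so the workloads from the default pool can be rescheduled (the pool name and counts below are only illustrative):

node_pools = [
  {
    name               = "workload-pool"  # illustrative non-default pool
    initial_node_count = 3
    min_node_count     = 3
    max_node_count     = 6                # temporarily raised to absorb pods from the default pool
    version            = "1.15.11-gke.3"
    image_type         = "COS"
    machine_type       = "n1-standard-1"
    preemptible        = true
    tags               = "tag1 nat"
  },
]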

4. Authors

5. License

This project is licensed under the MIT License - see the LICENSE.md file for details. Copyright (c) 2018 Russmedia GmbH.

6. Acknowledgments