This Repo Is No Longer Maintained
Please consider migrating to the official terraform-google-kubernetes-engine module: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine.
GKE Kubernetes module with node pools submodule
Please use google provider version = "~> 3.14"
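For example, a minimal provider pin could look like this (a sketch; the project and region variables are illustrative):

provider "google" {
  version = "~> 3.14"
  project = var.project
  region  = var.google_region
}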
If you need more control over the versioning of your cluster, it is advised to specify "min_master_version" and "version" in node_pools. Otherwise GKE will use its default version, which might change in the near future.
This module is meant for use with Terraform 0.12. If you haven't upgraded and need a Terraform 0.11.x-compatible version of this module, the last released version intended for Terraform 0.11.x is 3.0.0.
module "primary-cluster" {
name = terraform.workspace
source = "russmedia/kubernetes-cluster/google"
version = "4.0.0"
region = var.google_region
zones = var.google_zones
project = var.project
environment = terraform.workspace
min_master_version = var.master_version
}
module "primary-cluster" {
name = "my-cluster"
source = "russmedia/kubernetes-cluster/google"
version = "4.0.0"
region = var.google_region
zones = var.google_zones
project = var.project
environment = terraform.workspace
min_master_version = var.master_version
node_pools = var.node_pools
}
and in variables:
node_pools = [
  {
    name               = "default-pool"
    initial_node_count = 1
    min_node_count     = 1
    max_node_count     = 1
    version            = "1.15.11-gke.3"
    image_type         = "COS"
    machine_type       = "n1-standard-1"
    preemptible        = true
    tags               = "tag1 nat"
  },
]
Note: at least one node pool must have initial_node_count > 0.
There are two taint variables, no_schedule_taint and no_execute_taint. They add the taints schedulable=equals:NoSchedule or executable=equals:NoExecute to the node pools, so that only pods with a matching toleration are scheduled on those nodes. Please see the Kubernetes docs for more info.

Example usage with "NoSchedule":
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  tolerations:
  - key: "schedulable"
    operator: "Exists"
    effect: "NoSchedule"
Example usage with "NoExecute":
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  tolerations:
  - key: "executable"
    operator: "Exists"
    effect: "NoExecute"
Note: if a node has both the NoExecute and NoSchedule taints, you need to add both tolerations for the pod to be allowed there.
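For example, a pod that tolerates both taints could look like this (a minimal sketch combining the two examples above):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  tolerations:
  # tolerates the schedulable=equals:NoSchedule taint
  - key: "schedulable"
    operator: "Exists"
    effect: "NoSchedule"
  # tolerates the executable=equals:NoExecute taint
  - key: "executable"
    operator: "Exists"
    effect: "NoExecute"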
Due to current limitations of the depends_on feature with modules, it is advised to create the VPC network separately and reference it when defining the modules, e.g.:
resource "google_compute_network" "default" {
name = terraform.workspace
auto_create_subnetworks = "false"
project = var.project
}
module "primary-cluster" {
name = "primary-cluster"
source = "russmedia/kubernetes-cluster/google"
version = "4.0.0"
region = var.google_region
zones = var.google_zones
project = var.project
environment = terraform.workspace
network = google_compute_network.default.name
}
module "secondary-cluster" {
name = "secondary-cluster"
source = "russmedia/kubernetes-cluster/google"
version = "4.0.0"
region = var.google_region
zones = var.google_zones
project = var.project
environment = terraform.workspace
network = google_compute_network.default.name
nodes_subnet_ip_cidr_range = "10.101.0.0/24"
nodes_subnet_container_ip_cidr_range = "172.21.0.0/16"
nodes_subnet_service_ip_cidr_range = "10.201.0.0/16"
}
Note: secondary clusters need nodes_subnet_ip_cidr_range, nodes_subnet_container_ip_cidr_range, and nodes_subnet_service_ip_cidr_range defined; otherwise you will run into IP conflicts. Also, only one cluster can have nat_enabled set to true.
Adding the NAT module for the outgoing Kubernetes IP:
module "nat" {
source = "github.com/GoogleCloudPlatform/terraform-google-nat-gateway?ref=1.2.0"
region = var.google_region
project = var.project
network = terraform.workspace
subnetwork = "${terraform.workspace}-nodes-subnet"
tags = ["nat-${terraform.workspace}"]
}
Note: remember to add the tag nat-${terraform.workspace} to the primary cluster tags and node pools so the NAT module can open routing for the nodes.
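For instance, the node pool from the earlier example would carry the tag like this (a sketch; terraform.workspace cannot be interpolated in a .tfvars file, so the list is shown as a local value):

locals {
  node_pools = [
    {
      name               = "default-pool"
      initial_node_count = 1
      min_node_count     = 1
      max_node_count     = 1
      version            = "1.15.11-gke.3"
      image_type         = "COS"
      machine_type       = "n1-standard-1"
      preemptible        = true
      # the nat-* tag lets the NAT module open routing for these nodes
      tags               = "tag1 nat-${terraform.workspace}"
    },
  ]
}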
Variable "network" is controling network creation.
network=""
) - terraform will create a vpc network - network name will be equal to ${terraform.workspace}
.Terraform always creates a subnetwork. The subnetwork name is taken from a pattern: ${terraform.workspace}-${var.name}-nodes-subnet
. If you already have a subnetwork and you would like to keep the name - please define the "subnetwork_name" variable.
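A sketch of pointing the module at an existing network and subnetwork (the names here are illustrative):

module "primary-cluster" {
  name            = "primary-cluster"
  source          = "russmedia/kubernetes-cluster/google"
  version         = "4.0.0"
  region          = var.google_region
  zones           = var.google_zones
  project         = var.project
  environment     = terraform.workspace
  network         = "my-existing-network" # pre-existing VPC, not created by this module
  subnetwork_name = "my-existing-subnet"  # keeps the existing subnetwork name
}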
You can customize the subnetwork IP ranges with:

- the nodes_subnet_ip_cidr_range variable - Terraform will fail with a conflict if you reuse an existing netmask
- the nodes_subnet_container_ip_cidr_range variable
- the nodes_subnet_service_ip_cidr_range variable

Regional clusters are still in beta, please use them with caution. You can enable them by setting the variable "regional_cluster" to true. Warning - possible data loss! Changing this setting on a running cluster will force you to recreate it.
You can configure your cluster to sit behind NAT, with the same static external IP shared between pods. You can enable it by setting the variable "nat_enabled" to true. Warning - possible data loss! Changing this setting on a running cluster will force you to recreate it.
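A minimal sketch enabling both options (mind the recreate warnings above):

module "primary-cluster" {
  # ... other arguments as in the examples above ...

  # beta: regional control plane - changing this on a running cluster recreates it
  regional_cluster = true

  # share one static external IP for outgoing pod traffic - also forces recreation,
  # and only one cluster may enable it per network
  nat_enabled = true
}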
To migrate from a 1.x.x module version to 2.x.x, follow these steps:

1. Remove the tags property - it is now included in the node_pools map.
2. Remove the node_version property - it is now included in the node_pools map.
3. Add initial_node_count to all node pools - changing the previous value will recreate the node pool.
4. Define network with the existing network name.
5. Define subnetwork_name with the existing subnetwork name.
6. Set use_existing_terraform_network to true if the network was created by this module.

Important note: when upgrading, the default pool will be deleted. Before migrating, please extend the size of the non-default pools so that all applications can be scheduled without the default node pool. A sketch of the resulting configuration follows this list.
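Under those assumptions, a migrated 2.x module block could look roughly like this (a sketch in the style of the examples above; the network and pool names are illustrative, and since 2.x targets Terraform 0.11 the interpolation syntax of your surrounding code may differ):

module "primary-cluster" {
  # ... name, source, region, zones, project and environment stay as before ...
  version = "2.0.0"

  # steps 1-3: tags and node_version moved into the node_pools map,
  # and every pool needs an initial_node_count
  node_pools = [
    {
      name               = "pool-1"
      initial_node_count = 3 # changing this later recreates the pool
      min_node_count     = 3
      max_node_count     = 5
      machine_type       = "n1-standard-1"
      tags               = "tag1 nat"
    },
  ]

  # steps 4-6: keep the network and subnetwork the 1.x module already created
  network                        = "my-existing-network"
  subnetwork_name                = "my-existing-subnet"
  use_existing_terraform_network = true
}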
This project is licensed under the MIT License - see the LICENSE.md file for details. Copyright (c) 2018 Russmedia GmbH.