
Autoneg GKE controller


autoneg provides simple custom integration between GKE and Google Cloud Load Balancing (both external and internal). autoneg is a GKE-specific Kubernetes controller which works in conjunction with the GKE Network Endpoint Group (NEG) controller to manage integration between your Kubernetes service endpoints and GCLB backend services.

GKE users may wish to register NEG backends from multiple clusters into the same backend service, to orchestrate advanced deployment strategies in a custom or centralized fashion, or to expose the same service through both a protected public endpoint and a more permissive internal endpoint. autoneg enables these use cases.

How it works

autoneg depends on the GKE NEG controller to manage the lifecycle of NEGs corresponding to your GKE services. autoneg will associate those NEGs as backends to the GCLB backend service named in the autoneg configuration.

Since autoneg depends explicitly on the GKE NEG controller, it also inherits the same scope. autoneg only takes action based on a Kubernetes service which has been annotated with autoneg configuration, and does not make any changes corresponding to pods or deployments. Only changes to the service will cause any action by autoneg.

On deleting the Service object, autoneg will deregister NEGs from the specified backend service, and the GKE NEG controller will then delete the NEGs.
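For example, once autoneg has registered the NEGs, you can inspect the backends attached to a backend service with gcloud (using the hypothetical backend service name http-be from the annotation examples below):

gcloud compute backend-services describe http-be --global \
  --format="yaml(backends)"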

Using Autoneg

Two annotations are required in your Kubernetes Service definition:

Example annotations

metadata:
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80":{},"443":{}}}'
    controller.autoneg.dev/neg: '{"backend_services":{"80":[{"name":"http-be","max_rate_per_endpoint":100}],"443":[{"name":"https-be","max_connections_per_endpoint":1000}]}}'
    # For L7 ILB (regional) backends
    # controller.autoneg.dev/neg: '{"backend_services":{"80":[{"name":"http-be","region":"europe-west4","max_rate_per_endpoint":100}],"443":[{"name":"https-be","region":"europe-west4","max_connections_per_endpoint":1000}]}}'

Once configured, autoneg will detect the NEGs that are created by the GKE NEG controller, and register them with the backend service specified in the autoneg configuration annotation.

Only NEGs created by the GKE NEG controller will be added to or removed from your backend service, so this mechanism should be safe to use across multiple clusters.
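For illustration, here is a minimal sketch of a complete Service using both annotations; the service name, labels, and ports are hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: my-app                    # hypothetical service name
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80":{}}}'
    controller.autoneg.dev/neg: '{"backend_services":{"80":[{"name":"http-be","max_rate_per_endpoint":100}]}}'
spec:
  type: ClusterIP
  selector:
    app: my-app                   # hypothetical pod label
  ports:
  - port: 80
    targetPort: 8080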

By default, autoneg will initialize the capacityScaler to 1, which means that the new backend will receive a proportional volume of traffic according to the maximum rate or connections per endpoint configuration. You can override this default by supplying the initial_capacity option, which is useful for steering traffic in blue/green deployment scenarios. The capacityScaler mechanism can be used to manage traffic shifting in use cases such as deployment or failover.
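As a sketch, an annotation like the following (assuming initial_capacity is set per backend alongside the other options, with a hypothetical backend service name) registers a new backend with its capacityScaler at 0, so it receives no traffic until you raise the capacity yourself:

controller.autoneg.dev/neg: '{"backend_services":{"80":[{"name":"http-be","max_rate_per_endpoint":100,"initial_capacity":0}]}}'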

Autoneg Configuration

Specify options to configure the backends representing the NEGs that will be associated with the backend service. These options correspond to fields in the backends section of the backend service REST resource definition. Only the options listed here are available in autoneg.

Options

Autoneg annotation options

The following options, all shown in the examples above, may be set per backend in the controller.autoneg.dev/neg annotation:

- name: the name of the backend service to register NEGs with
- region: the region of a regional (e.g. L7 ILB) backend service; omit for global backend services
- max_rate_per_endpoint: the maximum requests per second per endpoint, for backend services using the RATE balancing mode
- max_connections_per_endpoint: the maximum number of connections per endpoint, for backend services using the CONNECTION balancing mode
- initial_capacity: the initial capacityScaler setting applied when a backend is first registered (defaults to 1)

Controller parameters

The controller parameters can be customized by changing the controller deployment.
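For example, assuming the default deployment name from deploy/autoneg.yaml, you could edit the container arguments with:

kubectl edit deployment -n autoneg-system autoneg-controller-manager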

IAM considerations

Because autoneg calls GCP APIs, you must ensure that the controller is authorized to call them. To follow the principle of least privilege, it is recommended that you configure your cluster with Workload Identity, limiting permissions to a dedicated GCP service account that autoneg operates under. If you choose not to use Workload Identity, you will need to create your GKE cluster with the "cloud-platform" scope.
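As a sketch (with a hypothetical cluster name and zone), enabling Workload Identity at cluster creation looks like this:

gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --workload-pool=${PROJECT_ID}.svc.id.goog

# Alternatively, without Workload Identity, grant the broad scope instead:
gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --scopes=https://www.googleapis.com/auth/cloud-platform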

Security considerations

Installation

First, set up the GCP resources necessary to support Workload Identity by running the provided script:

PROJECT_ID=myproject deploy/workload_identity.sh

If you are using Shared VPC, ensure that the autoneg-system service account has the compute.networkUser role in the Shared VPC host project:

gcloud projects add-iam-policy-binding \
  --role roles/compute.networkUser \
  --member "serviceAccount:autoneg-system@${PROJECT_ID}.iam.gserviceaccount.com" \
  ${HOST_PROJECT_ID}

Lastly, on each cluster in your project where you'd like to install autoneg (version v1.1.0), run these two commands:

kubectl apply -f deploy/autoneg.yaml

kubectl annotate sa -n autoneg-system autoneg-controller-manager \
  iam.gke.io/gcp-service-account=autoneg-system@${PROJECT_ID}.iam.gserviceaccount.com

This will create all the Kubernetes resources required to support autoneg and annotate the autoneg-controller-manager service account in the autoneg-system namespace to associate it with a GCP service account via Workload Identity.
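To verify the installation, confirm the controller pod is running:

kubectl get pods -n autoneg-system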

Installation via Terraform

You can use the Terraform module in terraform/autoneg to deploy Autoneg in a GKE cluster of your choice. An end-to-end example is provided in the terraform/test directory as well (simply set your project_id).

Example:

provider "google" {
}

provider "kubernetes" {
  cluster_ca_certificate = "..."
  host                   = "..."
  token                  = "..."
}

module "autoneg" {
  source = "github.com/GoogleCloudPlatform/gke-autoneg-controller//terraform/autoneg"

  project_id = "your-project-id"

  # NOTE: You may need to build your own image if you rely on features merged between releases, and do
  # not wish to use the `latest` image.
  controller_image = "ghcr.io/googlecloudplatform/gke-autoneg-controller/gke-autoneg-controller:v1.1.0"
}

Installation via Helm charts

A Helm chart is also provided in deploy/chart and via the Helm repository at https://googlecloudplatform.github.io/gke-autoneg-controller/.

To deploy via the command line, simply run:

helm install -n autoneg-system --create-namespace --set 'createNamespace=false' autoneg deploy/chart/
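Alternatively, a sketch of installing from the hosted Helm repository (the chart and repository names are taken from the Terraform example below):

helm repo add autoneg https://googlecloudplatform.github.io/gke-autoneg-controller/
helm install -n autoneg-system --create-namespace --set 'createNamespace=false' \
  autoneg autoneg/autoneg-controller-manager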

You can also use it with Terraform like this:

module "autoneg" {
  source = "github.com/GoogleCloudPlatform/gke-autoneg-controller//terraform/gcp?ref=master"

  project_id         = module.project.project_id
  service_account_id = "autoneg"
  workload_identity = {
    namespace       = "autoneg-system"
    service_account = "autoneg-controller-manager"
  }
  # To add shared VPC configuration, also set shared_vpc variable
}

resource "helm_release" "autoneg" {
  name       = "autoneg"
  chart      = "autoneg-controller-manager"
  repository = "https://googlecloudplatform.github.io/gke-autoneg-controller/"
  namespace  = "autoneg-system"

  create_namespace = true

  set {
    name  = "createNamespace"
    value = false
  }

  set {
    name  = "serviceAccount.annotations.iam\\.gke\\.io/gcp-service-account"
    value = module.autoneg.service_account_email
  }

  set {
    name  = "serviceAccount.automountServiceAccountToken"
    value = true
  }
}

Customizing your installation

autoneg is based on Kubebuilder, and as such, you can customize and deploy autoneg according to the Kubebuilder "Run It On the Cluster" section of the Quick Start. autoneg does not define a CRD, so you can skip any Kubebuilder steps involving CRDs.

The included deploy/autoneg.yaml is the default output of Kubebuilder's make deploy step, coupled with a public image.
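A sketch of a customized build and deploy using the standard Kubebuilder Makefile targets (the image name and tag are hypothetical):

make docker-build docker-push IMG=gcr.io/your-project/gke-autoneg-controller:dev
make deploy IMG=gcr.io/your-project/gke-autoneg-controller:dev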

Do keep in mind the additional configuration to enable Workload Identity.