hashicorp / terraform-provider-google

Terraform Provider for Google Cloud Platform
https://registry.terraform.io/providers/hashicorp/google/latest/docs
Mozilla Public License 2.0

improve docs for multiple backend in google_compute_backend_service #3498

Open mrgleeco opened 5 years ago

mrgleeco commented 5 years ago

Description

The docs don't give a good understanding of how the backend argument should appear when there is more than one backend. Is it a list? Do I just repeat the block?

Furthermore, given a list of backends, how do I include that list programmatically? If we assume it came from a google_compute_instance_group_manager with count > 1, how does that list get munged in here?

New or Affected Resource(s)

Potential Terraform Configuration

I can fake it with hard-coded values, but how do I get the list semantics right here so that the blocks can be generated from the list itself?

  backend = {
    group = "${google_compute_instance_group_manager.example.*.instance_group[0]}"
  }
  backend = {
    group = "${google_compute_instance_group_manager.example.*.instance_group[1]}"
  }
  backend = {
    group = "${google_compute_instance_group_manager.example.*.instance_group[2]}"
  }
  backend = {
    group = "${google_compute_instance_group_manager.example.*.instance_group[3]}"
  }

But it's not at all clear what the format looks like for multiple backends.
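
For comparison, in Terraform 0.12+ the repeated backend blocks can be generated directly from the list with a dynamic block. A minimal sketch, assuming the instance group managers above and a placeholder health check named example:

resource "google_compute_backend_service" "example" {
  name = "example-backend-service"

  // One backend block is generated per instance group in the splat list.
  dynamic "backend" {
    for_each = google_compute_instance_group_manager.example.*.instance_group
    content {
      group = backend.value
    }
  }

  // health_checks is required; "example" here is a placeholder health check.
  health_checks = [google_compute_health_check.example.self_link]
}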

jeremywadsack commented 5 years ago

I was able to accomplish multiple backends with something like this:

// main.tf
module "lb" {
  source   = "./gclb"
  backends = google_container_cluster.primary.instance_group_urls
}

// gclb/variables.tf
variable "backends" {
  description = "List of instance group URLs to use as backends."
  type        = list(string)
  default     = []
}

// gclb/main.tf
// Variable defaults can't contain expressions, so the index-to-backends
// map is built in a local instead.
locals {
  backend_map = {
    // One key per health check (or per route in url-map)
    "0" = [for backend in var.backends : { group = backend }]
    "1" = [for backend in var.backends : { group = backend }]
    "2" = [for backend in var.backends : { group = backend }]
  }
}

resource "google_compute_backend_service" "default" {
  count = length(local.backend_map)
  name  = "${var.name}-backend-${count.index}"

  // Generate one backend block per map in this index's list.
  dynamic "backend" {
    for_each = local.backend_map[count.index]
    content {
      balancing_mode               = lookup(backend.value, "balancing_mode", null)
      capacity_scaler              = lookup(backend.value, "capacity_scaler", null)
      description                  = lookup(backend.value, "description", null)
      group                        = lookup(backend.value, "group", null)
      max_connections              = lookup(backend.value, "max_connections", null)
      max_connections_per_instance = lookup(backend.value, "max_connections_per_instance", null)
      max_rate                     = lookup(backend.value, "max_rate", null)
      max_rate_per_instance        = lookup(backend.value, "max_rate_per_instance", null)
      max_utilization              = lookup(backend.value, "max_utilization", null)
    }
  }
}

I recognize that the multiple entries per URL route add complexity beyond your problem.

See this post for more details on the for and for_each keywords.
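
As a quick illustration of the difference (not from the original post): a for expression builds a new collection from an existing one, while for_each on a dynamic block emits one nested block per element. A minimal sketch with placeholder values:

locals {
  // Placeholder values standing in for real instance group URLs.
  groups = ["url-a", "url-b"]

  // `for` transforms the list of URLs into a list of backend maps.
  backend_maps = [for g in local.groups : { group = g }]
}

// `for_each` in a dynamic "backend" block (as above) then emits one
// backend block per element of local.backend_maps.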

hawksight commented 5 years ago

I'm having the same problem. I keep getting this error (once I have something that will plan):

Error: Provider produced inconsistent final plan

When expanding the plan for
module.cluster-lb.google_compute_backend_service.public to include new values
learned so far during apply, provider "google" produced an invalid new value
for .backend: block set length changed from 1 to 2.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.

Rather than using a numbered map like the original module, I have decided to split my backends out, name them explicitly, and pass each its own configuration. I've also decided to just supply the instance_group_urls directly, as I output them from my cluster module, and I thought it would be easy to pass that into for or for_each to create the required structure for the backend block.

Here are two of the backends, with slightly varying examples, that will plan but fail on apply:

resource "google_compute_backend_service" "public" {
  project     = var.project
  name        = "${var.name}-backend-public"
  port_name   = var.backend_public["port_name"]
  protocol    = "HTTP"
  timeout_sec = var.backend_public["timeout_seconds"]
  dynamic "backend" {
    for_each = [ for b in var.backend_group_list: b ]
    content {
      group = backend.value
    }
  }

  health_checks = list(google_compute_health_check.public.self_link)
}

resource "google_compute_backend_service" "private" {
  project     = var.project
  name        = "${var.name}-backend-private"
  port_name   = var.backend_private["port_name"]
  protocol    = "HTTP"
  timeout_sec = var.backend_private["timeout_seconds"]
  dynamic "backend" {
    for_each = var.backend_group_list
    content {
      group = backend.value
    }
  }
  health_checks = list(google_compute_health_check.private.self_link)

  iap {
    oauth2_client_id     = var.iap_oauth_id
    oauth2_client_secret = var.iap_oauth_secret
  }
}

With these vars as an example:

variable "backend_group_list" {
  description = "Map backend indices to list of backend maps."
  type        = list
  default     = []
}

variable "backend_public" {
  description = "Parameters to the public backend"
  type = object({
    enabled         = bool
    health_path     = string
    port_name       = string
    port_number     = number
    timeout_seconds = number
    iap_enabled     = bool
  })

  default = {
    enabled         = true
    health_path     = "/"
    port_name       = "http"
    port_number     = 30100
    timeout_seconds = 30
    iap_enabled     = false
  }
}

I supply the value to the load balancer module like so (retrieved from a data lookup in the cluster module and output):

backend_group_list = module.cluster.K8S_INSTANCE_GROUP_URLS
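
For context, a minimal sketch of what that output might look like inside the cluster module (the names follow the comment; the data source wiring is an assumption):

// cluster/outputs.tf (hypothetical sketch)
output "K8S_INSTANCE_GROUP_URLS" {
  description = "Instance group URLs for the cluster's node pools."
  value       = data.google_container_cluster.primary.instance_group_urls
}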

At the planning stage the backend sections look like the following:

... public...
      + backend {
          + balancing_mode  = "UTILIZATION"
          + capacity_scaler = 1
          + group           = (known after apply)
          + max_utilization = 0.8
        }
... private ...
      + backend {
          + balancing_mode  = "UTILIZATION"
          + capacity_scaler = 1
          + group           = (known after apply)
          + max_utilization = 0.8
        }

Like the issue's raiser, it's not clear to me from the documentation whether I should be passing multiple backend blocks to google_compute_backend_service, i.e.

resource "google_compute_backend_service" "private" {
  project     = var.project
  name        = "${var.name}-backend-private"
  port_name   = var.backend_private["port_name"]
  protocol    = "HTTP"
  timeout_sec = var.backend_private["timeout_seconds"]
  backend {...}
  backend {...}
  health_checks = list(google_compute_health_check.private.self_link)
}

Or many groups to a single backend block. I think it's the first, but looking at the output, I don't think the plan is accounting for more than one backend block.
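
(Side note: it is the first form; each backend block accepts exactly one group, so multiple instance groups mean multiple backend blocks, which is what a dynamic "backend" block expands to. A sketch of the expanded form, with placeholder URLs:)

resource "google_compute_backend_service" "example" {
  name = "example-backend"

  // One block per instance group; group takes a single URL.
  backend {
    group = "https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/instanceGroups/ig-0"
  }
  backend {
    group = "https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-b/instanceGroups/ig-1"
  }

  // Placeholder health check; health_checks is required.
  health_checks = [google_compute_health_check.example.self_link]
}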

For reference, at this point in time the cluster is built and it has two node pools with a single node each, so I have two instance_group_urls that I'm trying to pass into each backend. In other projects we usually have about 6 different URLs.

Does my TF code look sensible / like it should work?

EDIT: Just adding that when I have only a single URL in var.backend_group_list, everything is created just fine. The issue appears when there is more than one URL. Could this be a bug in the resource?


EDIT: Adding version info:

Terraform v0.12.2
+ provider.google v2.9.1
+ provider.null v2.1.2
+ provider.random v2.1.2
+ provider.template v2.1.2

EDIT: should I raise a separate issue for the error I'm seeing?

rileykarson commented 5 years ago

@hawksight I think that can be broken out into a separate issue. Debug logs would be a big help as well; they can give us insight into some of Terraform Core's early errors. That error is coming from Core believing the provider has broken the terraform plan contract, although that doesn't make much sense because the error doesn't match the plan.