hashicorp / terraform-provider-google

Terraform Provider for Google Cloud Platform
https://registry.terraform.io/providers/hashicorp/google/latest/docs
Mozilla Public License 2.0

`default_ttl cannot be specified with USE_ORIGIN_HEADERS cache_mode` #18860

Open lukehutch opened 3 months ago

lukehutch commented 3 months ago


Terraform Version & Provider Version(s)

$ terraform version
Terraform v1.9.3
on linux_amd64
+ provider registry.terraform.io/hashicorp/google v4.51.0

Affected Resource(s)

google_compute_backend_service

Terraform Configuration

```terraform
resource "google_compute_backend_service" "web" {
  name       = "serverpod-${var.runmode}-backend-web"
  protocol   = "HTTP"
  enable_cdn = true
  cdn_policy {
    cache_mode = "USE_ORIGIN_HEADERS"
    cache_key_policy {
      include_host = true
      include_protocol = true
      include_query_string = true
      include_http_headers = ["Cache-Control"]
    }
  }

  backend {
    group           = google_compute_instance_group_manager.serverpod.instance_group
    balancing_mode  = "UTILIZATION"
    max_utilization = 1.0
    capacity_scaler = 1.0
  }

  health_checks = [google_compute_health_check.serverpod-balancer.id]

  port_name = "web"
}
```

Debug Output

```
module.serverpod_production.google_compute_backend_service.web: Modifying... [id=projects/clicksocial-app/global/backendServices/serverpod-production-backend-web]
β•·
β”‚ Error: Error updating BackendService "projects/clicksocial-app/global/backendServices/serverpod-production-backend-web": googleapi: Error 400: Invalid value for field 'resource.cdnPolicy.defaultTtl': '3600'. default_ttl cannot be specified with USE_ORIGIN_HEADERS cache_mode., invalid
β”‚ 
β”‚   with module.serverpod_production.google_compute_backend_service.web,
β”‚   on .terraform/modules/serverpod_production/load_balancer.tf line 153, in resource "google_compute_backend_service" "web":
β”‚  153: resource "google_compute_backend_service" "web" {
β”‚ 
β•΅
```

Expected Behavior

I should be able to specify a CDN policy for a GCP backend service.

Actual Behavior

I added the cdn_policy block to this resource, and the above error was generated. The google_compute_backend_service resource erroneously adds a default_ttl value (3600) to the API request even though cache_mode is USE_ORIGIN_HEADERS.
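
A possible workaround, assuming the provider only injects its TTL defaults when the fields are left unset (not verified against v4.51.0), is to pin the TTLs to 0 explicitly so that nothing conflicting is sent alongside USE_ORIGIN_HEADERS:

```terraform
resource "google_compute_backend_service" "web" {
  name       = "serverpod-${var.runmode}-backend-web"
  protocol   = "HTTP"
  enable_cdn = true

  cdn_policy {
    cache_mode = "USE_ORIGIN_HEADERS"

    # Explicitly zero the TTLs so the provider does not fall back to its
    # own defaults (such as default_ttl = 3600) when building the request.
    default_ttl = 0
    client_ttl  = 0
    max_ttl     = 0

    cache_key_policy {
      include_host         = true
      include_protocol     = true
      include_query_string = true
      include_http_headers = ["Cache-Control"]
    }
  }

  # backend, health_checks and port_name as in the configuration above
}
```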

Steps to reproduce

  1. terraform apply

Important Factoids

No response

References

I found there are a lot of issues about this, but it was supposed to have been fixed a year ago. Here are a few of them:

https://github.com/pulumi/pulumi-gcp/issues/711
https://github.com/GoogleCloudPlatform/magic-modules/pull/7588
https://github.com/hashicorp/terraform-provider-google/issues/10622

ggtisc commented 2 months ago

Hi @lukehutch!

As you can see, the Terraform Registry and the Google Cloud API documentation declare the following:

[group](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_backend_service#group) - (Required) The fully-qualified URL of an Instance Group or Network Endpoint Group resource. In case of instance group this defines the list of instances that serve traffic. Member virtual machine instances from each instance group must live in the same zone as the instance group itself. No two backends in a backend service are allowed to use same Instance Group resource. For Network Endpoint Groups this defines list of endpoints. All endpoints of Network Endpoint Group must be hosted on instances located in the same zone as the Network Endpoint Group. Backend services cannot mix Instance Group and Network Endpoint Group backends. Note that you must specify an Instance Group or Network Endpoint Group resource using the fully-qualified URL, rather than a partial URL.

The important question here is: were you able to create the google_compute_backend_service resource with a terraform apply? If so, we need the full code, including the google_compute_instance_group_manager resource, to replicate this issue and report it. If not, I suggest you follow the documentation and use a google_compute_global_network_endpoint_group or a google_compute_instance_group, depending on your needs.

lukehutch commented 2 months ago

I was able to create the google_compute_backend_service with terraform apply before I added the cdn_policy block, but once I added that block, terraform apply failed with the error shown above.

My Terraform script is as follows. You can see that it depends on https://github.com/serverpod/terraform-google-serverpod-cloud-engine. I tweaked that template in a local fork to expose the cache_mode setting -- you can see the PR here: https://github.com/serverpod/terraform-google-serverpod-cloud-engine/pull/11/files

```terraform
# Set up and configure Terraform and the Google Cloud provider.
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "4.51.0"
    }
  }
}

provider "google" {
  credentials = file("credentials.json")

  project = var.project
  region  = var.region
  zone    = var.zone
}

# Add a Serverpod module configured for production. Full documentation on all
# options is available at:
# https://github.com/serverpod/terraform-google-serverpod-cloud-engine

module "serverpod_production" {
  # References the Serverpod module from GitHub.
  source = "github.com/serverpod/terraform-google-serverpod-cloud-engine"

  # Required parameters.
  project               = var.project
  service_account_email = var.service_account_email

  runmode = "production"

  region = var.region
  zone   = var.zone

  dns_managed_zone = var.dns_managed_zone
  top_domain       = var.top_domain

  # Size of the auto scaling group.
  autoscaling_min_size = 1
  autoscaling_max_size = 2

  # Adds Cloud Storage buckets for file uploads.
  enable_storage = true

  # Makes it possible to SSH into the individual server instances.
  enable_ssh = true

  database_version = "POSTGRES_15"

  # Password for the production database.
  database_password = var.DATABASE_PASSWORD_PRODUCTION

  # TODO switch on when going live
  database_deletion_protection = false

  # Database tier:
  #   https://cloud.google.com/sql/docs/postgres/instance-settings
  #
  # Available tiers:
  #   db-f1-micro (tiny, no SLA, for testing only)
  #   db-custom-1-3840
  #   db-custom-2-7680
  #   db-custom-4-15360
  #   db-custom-8-30720
  #   db-custom-16-61440
  #   db-custom-32-122880
  #   db-custom-64-245760
  #   db-custom-96-368640
  #
  # N.B.
  # - the first number is the number of cores, and the second number is
  #   the amount of memory per core (MB).
  # - There are also highmem tiers in addition to these.
  database_tier = "db-custom-1-3840"

  # Machine type for the API server instances -- default was "e2-micro".
  # https://cloud.google.com/compute/docs/general-purpose-machines#n1_machines
  machine_type = "n1-standard-1"

  # Adds Redis for caching and communication between servers.
  enable_redis = true

  redis_version = "REDIS_7_2"

  subdomain_web = "www"
  use_top_domain_for_web = true
}
```

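The local fork mentioned above essentially threads a cache-mode setting from this module call into the backend service defined inside the module. A rough sketch of that wiring, with illustrative variable names (the actual PR may differ):

```terraform
# Inside the module (e.g. load_balancer.tf) -- hypothetical variable name:
variable "cdn_cache_mode" {
  description = "Cache mode for the web backend's CDN policy"
  type        = string
  default     = "CACHE_ALL_STATIC"
}

resource "google_compute_backend_service" "web" {
  # ... existing arguments ...
  enable_cdn = true

  cdn_policy {
    cache_mode = var.cdn_cache_mode
  }
}

# In the root configuration, passed through the module call:
module "serverpod_production" {
  source = "github.com/serverpod/terraform-google-serverpod-cloud-engine"
  # ... existing arguments ...
  cdn_cache_mode = "USE_ORIGIN_HEADERS"
}
```
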
ggtisc commented 2 months ago

If you want to use compute instances, you cannot use google_compute_instance_group_manager. According to the documentation, you need to use google_compute_instance_group, as in this example:

data "google_compute_image" "ci_debian_18860" {
  family  = "debian-11"
  project = "debian-cloud"
}

resource "google_compute_instance" "ci_18860" {
  name         = "ci-18860"
  machine_type = "e2-medium"
  zone         = "us-central1-c"
  boot_disk {
    initialize_params {
      image = data.google_compute_image.ci_debian_18860.self_link
    }
  }

  network_interface {
    network = "default"
  }
}

resource "google_compute_instance_group" "cig_18860" {
  name      = "cig-18860"
  zone      = "us-central1-c"
  instances = [google_compute_instance.ci_18860.id]
  named_port {
    name = "http"
    port = "8080"
  }

  named_port {
    name = "https"
    port = "8443"
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "google_compute_https_health_check" "https_hc_18860" {
  name         = "https-hc-18860"
  request_path = "/health_check"
}

resource "google_compute_backend_service" "cbs_18860" {
  name      = "cbs-18860"
  port_name = "https"
  protocol  = "HTTPS"

  backend {
    group = google_compute_instance_group.cig_18860.id # use this instead of google_compute_instance_group_manager, since that is not allowed here
  }

  health_checks = [
    google_compute_https_health_check.https_hc_18860.id
  ]
}
```

After this you could add your configurations as you mentioned or add them at creation time:

data "google_compute_image" "ci_debian_18860" {
  family  = "debian-11"
  project = "debian-cloud"
}

resource "google_compute_instance" "ci_18860" {
  name         = "ci-18860"
  machine_type = "e2-medium"
  zone         = "us-central1-c"
  boot_disk {
    initialize_params {
      image = data.google_compute_image.ci_debian_18860.self_link
    }
  }

  network_interface {
    network = "default"
  }
}

resource "google_compute_instance_group" "cig_18860" {
  name      = "cig-18860"
  zone      = "us-central1-c"
  instances = [google_compute_instance.ci_18860.id]
  named_port {
    name = "http"
    port = "8080"
  }

  named_port {
    name = "https"
    port = "8443"
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "google_compute_https_health_check" "https_hc_18860" {
  name         = "https-hc-18860"
  request_path = "/health_check"
}

resource "google_compute_backend_service" "cbs_18860" {
  name      = "cbs-18860"
  port_name = "https"
  protocol  = "HTTPS"

  ##################################################### new configurations, can be included at creation time
  enable_cdn = true
  cdn_policy {
    cache_mode = "USE_ORIGIN_HEADERS"
    cache_key_policy {
      include_host = true
      include_protocol = true
      include_query_string = true
      include_http_headers = ["Cache-Control"]
    }
  }
  #####################################################

  backend {
    group = google_compute_instance_group.cig_18860.id

    ##################################################### new configurations, can be included at creation time
    balancing_mode  = "UTILIZATION"
    max_utilization = 1.0
    capacity_scaler = 1.0
    #####################################################
  }

  health_checks = [
    google_compute_https_health_check.https_hc_18860.id
  ]
}
```

lukehutch commented 2 months ago

/cc @Isakdl @Vlidholt please take a look at this

lukehutch commented 2 months ago

> If you want to use compute instances, you cannot use google_compute_instance_group_manager. According to the documentation, you need to use google_compute_instance_group, as in this example:

@ggtisc what are the consequences of using group_manager rather than group? A Serverpod developer reported "the scripts work for us".
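
For context, both resources can supply the fully-qualified instance group URL that backend.group expects; they are just referenced differently. A sketch with illustrative names, assuming the referenced groups and health check exist elsewhere:

```terraform
# 1. Unmanaged instance group: reference the resource directly.
resource "google_compute_backend_service" "via_unmanaged_group" {
  name          = "example-unmanaged"
  health_checks = [google_compute_health_check.example.id]

  backend {
    group = google_compute_instance_group.example.id
  }
}

# 2. Managed instance group: reference the group it creates through the
#    exported instance_group attribute, as in the original configuration.
resource "google_compute_backend_service" "via_mig" {
  name          = "example-managed"
  health_checks = [google_compute_health_check.example.id]

  backend {
    group = google_compute_instance_group_manager.example.instance_group
  }
}
```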

ggtisc commented 2 months ago

I couldn't say that there are consequences, since it isn't a question of advantages or disadvantages; rather, you must understand the use case and implement the logic according to your requirements.

As you can see in the Terraform Registry documentation:

For more information, I suggest you read the Google Cloud documentation and analyze your business model, since these are design decisions specific to your project and beyond the scope of this issue.