hashicorp / terraform-provider-google

Terraform Provider for Google Cloud Platform
https://registry.terraform.io/providers/hashicorp/google/latest/docs
Mozilla Public License 2.0

workbench instance does not support n2d-highcpu/highmem-ish machine family #17363

Closed liusha-H closed 8 months ago

liusha-H commented 8 months ago

Hi

I attempted to create two workbench instances: one using n2d-highmem-80 and one using n2d-highcpu-8. I'm sure these machine types can be found in the workbench hardware configuration in the UI, but when I created the following templates

for highmem

resource "google_compute_network" "my_network" {
  name = "wbi-test-default"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "my_subnetwork" {
  name   = "wbi-test-default"
  network = google_compute_network.my_network.id
  region = "us-central1"
  ip_cidr_range = "10.0.1.0/24"
}

resource "google_workbench_instance" "instance" {
  name = "workbench-instance"
  location = "us-central1-a"

  gce_setup {
    machine_type = "n2d-highmem-80"

    disable_public_ip = false

    service_accounts {
      email = "my@service-account.com"
    }

    boot_disk {
      disk_size_gb  = 100
      disk_type = "PD_SSD"
      disk_encryption = "GMEK"
    }

    data_disks {
      disk_size_gb  = 200
      disk_type = "PD_SSD"
      disk_encryption = "GMEK"
    }

    network_interfaces {
      network = google_compute_network.my_network.id
      subnet = google_compute_subnetwork.my_subnetwork.id
      nic_type = "GVNIC"
    }

    metadata = {
      terraform = "true"
    }

    enable_ip_forwarding = true

    tags = ["abc", "def"]

  }

  disable_proxy_access = true

  instance_owners = ["my@service-account.com"]

  labels = {
    k = "val"
  }

  desired_state = "ACTIVE"

}

and for highcpu

resource "google_compute_network" "my_network" {
  name = "wbi-test-default"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "my_subnetwork" {
  name   = "wbi-test-default"
  network = google_compute_network.my_network.id
  region = "us-central1"
  ip_cidr_range = "10.0.1.0/24"
}

resource "google_workbench_instance" "instance" {
  name = "workbench-instance"
  location = "us-central1-a"

  gce_setup {
    machine_type = "n2d-highcpu-8"

    disable_public_ip = false

    service_accounts {
      email = "my@service-account.com"
    }

    boot_disk {
      disk_size_gb  = 100
      disk_type = "PD_SSD"
      disk_encryption = "GMEK"
    }

    data_disks {
      disk_size_gb  = 300
      disk_type = "PD_SSD"
      disk_encryption = "GMEK"
    }

    network_interfaces {
      network = google_compute_network.my_network.id
      subnet = google_compute_subnetwork.my_subnetwork.id
      nic_type = "GVNIC"
    }

    metadata = {
      terraform = "true"
    }

    enable_ip_forwarding = true

    tags = ["abc", "def"]

  }

  disable_proxy_access = true

  instance_owners = ["my@service-account.com"]

  labels = {
    k = "val"
  }

  desired_state = "ACTIVE"
}

my terraform plan does not report any issues, but terraform apply returns the following error messages (a plan-time guard sketch is at the end of this comment):

Error creating Instance: googleapi: Error 400: machine family "n2d-highmem-80" not supported: invalid argument

and

Error creating Instance: googleapi: Error 400: machine family "n2d-highcpu-8" not supported: invalid argument

wondering if this is a bug or if I made a mistake in the workbench instance config? Also, will the workbench instance support other machine families in the future, like the user-managed instance does?

thanks
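
P.S. A plan-time guard would surface this at plan rather than apply. Here is a rough sketch; the allowed prefixes are only my assumption, not an authoritative list of what the Workbench API accepts (which is exactly what I am unsure about):

variable "machine_type" {
  type    = string
  # With this default, `terraform plan` fails immediately, which is the point:
  # the rejection happens before any API call is made.
  default = "n2d-highmem-80"

  validation {
    # Assumed allow-list of machine family prefixes; adjust once the supported
    # set for workbench instances is confirmed.
    # startswith() requires Terraform >= 1.3.
    condition     = anytrue([for p in ["e2-", "n1-", "n2-"] : startswith(var.machine_type, p)])
    error_message = "Machine type is outside the assumed allow-list for workbench instances."
  }
}

With machine_type = var.machine_type inside gce_setup, an unsupported value is rejected by terraform plan instead of failing during apply.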

c2thorn commented 8 months ago

From what I can see in your config, you have not made a mistake. I also verified that these machine types are available in the console.

From what I can tell looking internally, this is a known API-level limitation that is being worked on. However, I don't have a definitive ETA.

Going to close this since it is not due to Terraform. If you have follow-up questions, please reach out to GCP support.
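
In the meantime, if a different family works for your workload, pointing the same config at a machine family the API currently accepts is a possible stopgap. Below is a minimal sketch of just the changed block; whether the n2 equivalent is accepted for workbench instances in your zone is an assumption to verify, not something I can confirm here:

  gce_setup {
    # Stopgap: substitute an n2 equivalent for the rejected n2d type.
    # Availability of n2-highmem-80 for workbench instances in
    # us-central1-a is an assumption, not a confirmed fact.
    machine_type = "n2-highmem-80"

    # ...rest of the block unchanged...
  }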

github-actions[bot] commented 7 months ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.