GoogleCloudPlatform / cluster-toolkit

Cluster Toolkit is open-source software offered by Google Cloud that makes it easy for customers to deploy AI/ML and HPC environments on Google Cloud.

private_vpc_connection is not unique per cluster in slurm-sql module #2054

Closed ek-nag closed 5 months ago

ek-nag commented 9 months ago

Describe the bug

module.slurm-sql.google_service_networking_connection.private_vpc_connection is not unique per cluster. When two clusters are created in the same VPC and subnet, both of them rely on the same google_service_networking_connection. When one cluster is destroyed, the google_service_networking_connection gets removed, breaking slurm-sql operation for the remaining cluster.

resource "google_service_networking_connection" "private_vpc_connection" {
  provider = google-beta

  network                 = var.network_id
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = google_compute_global_address.private_ip_address[*].name
}

The easiest solution would be adding a unique name argument to the google_service_networking_connection resource, the same as we do for google_compute_global_address.private_ip_address.

resource "google_compute_global_address" "private_ip_address" {
  provider = google-beta

  name          = "slurm-cloudsql-private-ip-${random_id.resource_name_suffix.hex}"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 16
  network       = var.network_id
}

However, the google_service_networking_connection resource in the Terraform provider does not have a name argument.
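
One possible stopgap, sketched here rather than tested, would be to stop Terraform from tearing down the shared connection on destroy. Newer versions of the google provider expose a deletion_policy argument on google_service_networking_connection for this; note that abandoning the connection also means Terraform never cleans it up, and this does not make the connection itself per-cluster:

# Hypothetical workaround sketch, assuming a provider version that supports
# deletion_policy on this resource. "ABANDON" leaves the peering connection in
# place when one cluster's slurm-sql module is destroyed.
resource "google_service_networking_connection" "private_vpc_connection" {
  provider = google-beta

  network                 = var.network_id
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = google_compute_global_address.private_ip_address[*].name

  deletion_policy = "ABANDON"
}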

Steps to reproduce

Steps to reproduce the behavior:

  1. Deploy two clusters in the same VPC and subnet that both use the community/modules/database/slurm-cloudsql-federation module. For example:

    - source: community/modules/database/slurm-cloudsql-federation
      kind: terraform
      id: slurm-sql
      use: [hpc_network]
      settings:
        sql_instance_name: sql-cluster-b62b0a5b
        tier: "db-g1-small"

  2. Destroy one of the two clusters.
  3. Observe that the remaining cluster can no longer reach its Cloud SQL Slurm accounting database, because the shared google_service_networking_connection was removed.

Expected behavior

module.slurm-sql.google_service_networking_connection.private_vpc_connection should be unique per cluster, so that destroying one cluster does not affect the others.

Actual behavior

If two or more clusters reside in the same VPC and subnet, they all share the same module.slurm-sql.google_service_networking_connection.private_vpc_connection. When one of the clusters is deleted, that connection is also removed, breaking the remaining clusters' access to the Cloud SQL Slurm accounting database.

Version (ghpc --version)

ghpc version v1.26.1

Blueprint

If applicable, attach or paste the blueprint YAML used to produce the bug.

blueprint_name: cluster-b62b0a5b

vars:
  project_id: eimantask-personal-project
  deployment_name: cluster-b62b0a5b
  region: us-east1
  zone: us-east1-b
  enable_reconfigure: True
  enable_cleanup_compute: False
  enable_cleanup_subscriptions: True
  enable_bigquery_load: True
  instance_image_custom: True
  labels:
    created_by: liveupv3-server

deployment_groups:
- group: primary
  modules:
  - source: modules/network/pre-existing-vpc
    kind: terraform
    settings:
      network_name: rich-hedgehog-network
      subnetwork_name: rich-hedgehog-subnet-2
    id: hpc_network

  - source: modules/file-system/pre-existing-network-storage
    kind: terraform
    id: mount_num_0
    settings:
      server_ip: '$controller'
      remote_mount: /opt/cluster
      local_mount: /opt/cluster
      mount_options: defaults,nofail,nosuid
      fs_type: nfs

  - source: modules/file-system/pre-existing-network-storage
    kind: terraform
    id: mount_num_1
    settings:
      server_ip: '$controller'
      remote_mount: /home
      local_mount: /home
      mount_options: defaults,nofail,nosuid
      fs_type: nfs

  - source: community/modules/project/service-account
    kind: terraform
    id: hpc_service_account
    settings:
      project_id: eimantask-personal-project
      name: sa
      project_roles:
      - compute.instanceAdmin.v1
      - iam.serviceAccountUser
      - monitoring.metricWriter
      - logging.logWriter
      - storage.objectAdmin
      - pubsub.admin
      - compute.securityAdmin
      - iam.serviceAccountAdmin
      - resourcemanager.projectIamAdmin
      - compute.networkAdmin

  - source: community/modules/compute/schedmd-slurm-gcp-v5-partition
    kind: terraform
    id: partition_0
    use:
    - partition_0-group
    - hpc_network
    - mount_num_0
    - mount_num_1
    settings:
      partition_name: batch
      subnetwork_self_link: rich-hedgehog-subnet-2
      enable_placement: False
      exclusive: False
  - source: community/modules/compute/schedmd-slurm-gcp-v5-node-group
    id: partition_0-group
    use:
    - hpc_network
    - mount_num_0
    - mount_num_1
    settings:
      enable_smt: False
      machine_type: c2-standard-60
      node_count_dynamic_max: 4
      node_count_static: 0
      disk_size_gb: 50
      disk_type: pd-standard

  - source: community/modules/database/slurm-cloudsql-federation
    kind: terraform
    id: slurm-sql
    use: [hpc_network]
    settings:
      sql_instance_name: sql-cluster-b62b0a5b
      tier: "db-g1-small"

  - source: community/modules/scheduler/schedmd-slurm-gcp-v5-controller
    kind: terraform
    id: slurm_controller
    settings:
      cloud_parameters:
        resume_rate: 0
        resume_timeout: 500
        suspend_rate: 0
        suspend_timeout: 300
        no_comma_params: false
      machine_type: n2-standard-2
      disk_type: pd-standard
      disk_size_gb: 50

      service_account:
        email: $(hpc_service_account.service_account_email)
        scopes:
        - https://www.googleapis.com/auth/cloud-platform
        - https://www.googleapis.com/auth/monitoring.write
        - https://www.googleapis.com/auth/logging.write
        - https://www.googleapis.com/auth/devstorage.read_write
        - https://www.googleapis.com/auth/pubsub
      controller_startup_script: |
        #!/bin/bash
        echo "******************************************** CALLING CONTROLLER STARTUP"
        gsutil cp gs://liveupv3-storage-0894/clusters/3/bootstrap_controller.sh - | bash
      compute_startup_script: |
        #!/bin/bash
        gsutil cp gs://liveupv3-storage-0894/clusters/3/bootstrap_compute.sh - | bash
    use:
    - hpc_network
    - partition_0
    - mount_num_0
    - mount_num_1
    - slurm-sql

  - source: community/modules/scheduler/schedmd-slurm-gcp-v5-login
    kind: terraform
    id: slurm_login
    settings:
      num_instances: 1
      subnetwork_self_link: rich-hedgehog-subnet-2
      machine_type: n2-standard-2
      disk_type: pd-standard
      disk_size_gb: 50

      service_account:
        email: $(hpc_service_account.service_account_email)
        scopes:
        - https://www.googleapis.com/auth/cloud-platform
        - https://www.googleapis.com/auth/monitoring.write
        - https://www.googleapis.com/auth/logging.write
        - https://www.googleapis.com/auth/devstorage.read_write
      startup_script: |
        #!/bin/bash
        echo "******************************************** CALLING LOGIN STARTUP"
        gsutil cp gs://liveupv3-storage-0894/clusters/3/bootstrap_login.sh - | bash
    use:
    - slurm_controller
    - hpc_network
    - mount_num_0
    - mount_num_1

harshthakkar01 commented 9 months ago

Hi,

This seems to be a limitation with Terraform/GCP, and there might not be an easy way to make it unique. This will need more investigation into whether moving google_service_networking_connection out of the SQL module and into the VPC module would be a better approach. Since the SQL module is a community experimental module, we cannot make any specific commitment about a timeline.
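
For illustration, such a refactor might look roughly like the following: a VPC-level module that owns both the reserved range and the peering connection, created once per VPC and only consumed by each cluster. This is a sketch based on the resources quoted above, not the actual Toolkit module; the variable and resource names are assumptions.

# Sketch of a standalone "private service access" module owned by the VPC
# deployment group rather than by each slurm-cloudsql-federation instance.
resource "google_compute_global_address" "private_ip_address" {
  provider = google-beta

  name          = "global-psconnect-ip-${var.deployment_name}"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 16
  network       = var.network_id
}

resource "google_service_networking_connection" "private_vpc_connection" {
  provider = google-beta

  network                 = var.network_id
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.private_ip_address.name]
}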

github-actions[bot] commented 7 months ago

This issue is stale because it has been open for 30 days with no activity.

cboneti commented 6 months ago

Hi Eimantas,

In PR #2397 I propose decoupling the configuration of Private Service Access from the Cloud SQL creation. This would allow you to configure it once (for the lifetime of the VPC) and then use it across multiple clusters in that VPC, which I believe addresses your key concern here.

With this change, the following blueprint is valid (as an example):

blueprint_name: two-clusters

vars:
  project_id: hpc-toolkit-demo
  deployment_name: clstr2
  region: us-east1
  zone: us-east1-b
  enable_reconfigure: True
  enable_cleanup_compute: False
  enable_cleanup_subscriptions: True
  enable_bigquery_load: True
  instance_image_custom: True

deployment_groups:
- group: net
  modules:
  - source: modules/network/vpc
    id: hpc_network

  - source: ./community/modules/network/private-service-access
    id: ps-connect
    use: [ hpc_network ]

- group: cluster1
  modules:
  - source: community/modules/compute/schedmd-slurm-gcp-v5-partition
    kind: terraform
    id: c1partition_0
    use:
    - c1partition_0-group
    - hpc_network
    settings:
      partition_name: c1batch
      enable_placement: False
      exclusive: False

  - source: community/modules/compute/schedmd-slurm-gcp-v5-node-group
    id: c1partition_0-group
    settings:
      enable_smt: False
      machine_type: c2-standard-60
      node_count_dynamic_max: 4
      node_count_static: 0
      disk_size_gb: 50
      disk_type: pd-standard

  - source: ./community/modules/database/slurm-cloudsql-federation
    kind: terraform
    id: c1slurm-sql
    use: [hpc_network, ps-connect]
    settings:
      sql_instance_name: sql-cluster-1
      tier: "db-g1-small"

  - source: community/modules/scheduler/schedmd-slurm-gcp-v5-controller
    kind: terraform
    id: c1slurm_controller
    settings:
      cloud_parameters:
        resume_rate: 0
        resume_timeout: 500
        suspend_rate: 0
        suspend_timeout: 300
        no_comma_params: false
      machine_type: n2-standard-2
      disk_type: pd-standard
      disk_size_gb: 50
      slurm_cluster_name: cluster1
    use:
    - hpc_network
    - c1partition_0
    - c1slurm-sql

  - source: community/modules/scheduler/schedmd-slurm-gcp-v5-login
    kind: terraform
    id: c1slurm_login
    use: [c1slurm_controller, hpc_network]
    settings:
      num_instances: 1
      machine_type: n2-standard-2
      disk_type: pd-standard
      disk_size_gb: 50

- group: cluster2
  modules:
  - source: community/modules/compute/schedmd-slurm-gcp-v5-partition
    kind: terraform
    id: c2partition_0
    use: [c2partition_0-group, hpc_network]
    settings:
      partition_name: c2batch
      enable_placement: False
      exclusive: False

  - source: community/modules/compute/schedmd-slurm-gcp-v5-node-group
    id: c2partition_0-group
    settings:
      enable_smt: False
      machine_type: c2-standard-60
      node_count_dynamic_max: 4
      node_count_static: 0
      disk_size_gb: 50
      disk_type: pd-standard

  - source: ./community/modules/database/slurm-cloudsql-federation
    kind: terraform
    id: c2slurm-sql
    use: [hpc_network, ps-connect]
    settings:
      sql_instance_name: sql-cluster-2
      tier: "db-g1-small"

  - source: community/modules/scheduler/schedmd-slurm-gcp-v5-controller
    kind: terraform
    id: c2slurm_controller
    use: [hpc_network, c2partition_0, c2slurm-sql]
    settings:
      cloud_parameters:
        resume_rate: 0
        resume_timeout: 500
        suspend_rate: 0
        suspend_timeout: 300
        no_comma_params: false
      machine_type: n2-standard-2
      disk_type: pd-standard
      disk_size_gb: 50
      slurm_cluster_name: cluster2

  - source: community/modules/scheduler/schedmd-slurm-gcp-v5-login
    kind: terraform
    id: c2slurm_login
    use: [c2slurm_controller, hpc_network]
    settings:
      num_instances: 1
      machine_type: n2-standard-2
      disk_type: pd-standard
      disk_size_gb: 50

Please note that the CloudSQL Federation module does not really need to receive ps-connect; the use reference only serves to inform the Terraform dependency graph and prevents the Cloud SQL instance from being created before Private Service Access is configured.
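
In plain Terraform terms, the effect of that use entry is roughly an explicit ordering dependency between the two modules. The sketch below is illustrative only; the module names, input names, and paths are assumptions, not the code that ghpc actually generates:

# Illustrative only: `use: [hpc_network, ps-connect]` effectively makes the
# Cloud SQL module wait for the private-service-access module, similar to an
# explicit depends_on between module calls.
module "c1slurm_sql" {
  source = "./community/modules/database/slurm-cloudsql-federation"

  network_id        = module.hpc_network.network_id
  sql_instance_name = "sql-cluster-1"
  tier              = "db-g1-small"

  depends_on = [module.ps_connect]
}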

I would invite you to try this out and tell me if this fixes the problem.

cboneti commented 5 months ago

Folks, this has now been merged into develop. I will close this for now.