hashicorp / terraform-provider-google

Terraform Provider for Google Cloud Platform
https://registry.terraform.io/providers/hashicorp/google/latest/docs
Mozilla Public License 2.0

GKE dataproc cluster | Workload Identity | getAccessToken permission error #18522

Open · Prudhvi0717 opened this issue 1 week ago

Prudhvi0717 commented 1 week ago

@edwardmedia, regarding your previous reply in issue #13714.

Here is the code you posted for reference:

data "google_project" "project" {
  project_id = "myproject"
}

resource "google_container_cluster" "primary" {
  name     = "issue13714-gke"
  location = "us-central1-a"

  initial_node_count = 1

  workload_identity_config {
    workload_pool = "${data.google_project.project.project_id}.svc.id.goog"
  }
}

resource "google_project_iam_binding" "workloadidentity" {
  project = "myproject"
  role    = "roles/iam.workloadIdentityUser"

  members = [
    "serviceAccount:${data.google_project.project.project_id}.svc.id.goog[issue13714-dproc/agent]",
    "serviceAccount:${data.google_project.project.project_id}.svc.id.goog[issue13714-dproc/spark-driver]",
    "serviceAccount:${data.google_project.project.project_id}.svc.id.goog[issue13714-dproc/spark-executor]",
  ]
}

resource "google_dataproc_cluster" "virtual_cluster" {
    depends_on = [
      google_project_iam_binding.workloadidentity
    ]

    name    = "issue13714-dproc"
    region  = "us-central1"

    virtual_cluster_config {
      kubernetes_cluster_config {
        kubernetes_namespace = "issue13714-dproc"
        kubernetes_software_config {
          component_version = {
            "SPARK": "3.1-dataproc-7",
          }
        }
        gke_cluster_config {
          gke_cluster_target = google_container_cluster.primary.id
          node_pool_target {
            node_pool = "issue13714-gke-np"
            roles = [
              "DEFAULT"
            ]
          }
        } 
      }
    }
  }

Originally posted by @edwardmedia in https://github.com/hashicorp/terraform-provider-google/issues/13714#issuecomment-1435760962

Prudhvi0717 commented 1 week ago

Hey @edwardmedia, why do we need to grant a project-level IAM binding to the workload identities?

Why can't we just grant the necessary access to a service account and create an IAM binding on that service account, like this:

locals {
  ksas                   = ["spark-executor", "spark-driver", "agent"]
  workload_identity_role = "roles/iam.workloadIdentityUser"
  workload_member        = "serviceAccount:${var.project}.svc.id.goog"
}

/* Create the Workload Identity mapping between the GCP service account and the KSAs.
By default, Dataproc uses the Compute Engine default service account. */
resource "google_service_account_iam_member" "gcp_sa_iam_member" {
  count              = length(local.ksas)
  service_account_id = var.service_account_id
  role               = local.workload_identity_role
  member             = "${local.workload_member}[${var.cluster_name}/${local.ksas[count.index]}]"
}

resource "google_dataproc_cluster" "cluster" {
  name                          = var.cluster_name
  region                        = var.region
  graceful_decommission_timeout = "120s"

  virtual_cluster_config {

    staging_bucket = var.staging_bucket

    kubernetes_cluster_config {
      kubernetes_namespace = var.cluster_name

      kubernetes_software_config {
        component_version = {
          "SPARK" : var.spark_version
        }
      }

      gke_cluster_config {
        gke_cluster_target = var.kube_cluster_id

        node_pool_target {
          node_pool = var.default_node_pool.node_pool_name
          roles     = var.default_node_pool.roles

          dynamic "node_pool_config" {
            for_each = var.default_node_pool.reuse_existing ? [] : [1]

            content {
              locations = var.node_locations

              autoscaling {
                min_node_count = var.default_node_pool.min_node_count
                max_node_count = var.default_node_pool.max_node_count
              }

              config {
                machine_type    = var.default_node_pool.machine_type
                preemptible     = var.default_node_pool.preemptible
                local_ssd_count = var.default_node_pool.local_ssd_count
              }
            }
          }
        }

        node_pool_target {
          node_pool = var.worker_node_pool.node_pool_name
          roles     = var.worker_node_pool.roles

          dynamic "node_pool_config" {
            for_each = var.worker_node_pool.reuse_existing ? [] : [1]

            content {
              locations = var.node_locations

              autoscaling {
                min_node_count = var.worker_node_pool.min_node_count
                max_node_count = var.worker_node_pool.max_node_count
              }

              config {
                machine_type = var.worker_node_pool.machine_type
                preemptible  = var.worker_node_pool.preemptible
                # spot            = var.worker_node_pool.preemptible
                local_ssd_count = var.worker_node_pool.local_ssd_count
              }
            }
          }
        }
      }
    }
  }
}
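
For completeness, the module-style snippet above references input variables that are not shown. Below is a minimal sketch of their declarations, inferred purely from how they are used above; the types and the node-pool object fields are assumptions, not part of the original comment.

# Sketch of the variable shapes the snippet above assumes. Variable names come from
# the snippet; the types and object fields are inferred, not confirmed.
variable "project" { type = string }
variable "region" { type = string }
variable "cluster_name" { type = string }
variable "service_account_id" { type = string } # e.g. projects/PROJECT/serviceAccounts/NAME@PROJECT.iam.gserviceaccount.com
variable "staging_bucket" { type = string }
variable "spark_version" { type = string }
variable "kube_cluster_id" { type = string }
variable "node_locations" { type = list(string) }

variable "default_node_pool" {
  type = object({
    node_pool_name  = string
    roles           = list(string)
    reuse_existing  = bool
    min_node_count  = number
    max_node_count  = number
    machine_type    = string
    preemptible     = bool
    local_ssd_count = number
  })
}

variable "worker_node_pool" {
  type = object({
    node_pool_name  = string
    roles           = list(string)
    reuse_existing  = bool
    min_node_count  = number
    max_node_count  = number
    machine_type    = string
    preemptible     = bool
    local_ssd_count = number
  })
}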

Prudhvi0717 commented 1 week ago

If I attach the Workload Identity bindings to a service account other than the Compute Engine default service account, one that has all the required permissions, I get the following error:

{"severity":"error","ts":"2024-06-23T08:48:04.915Z","logger":"setup","caller":"log/deleg.go:144","message":"could not initialize control client",

"error":"registering agent: registering agent: rpc error: code = Unauthenticated desc = transport: compute: Received 403 `Unable to generate access token; IAM returned 403 Forbidden: Permission 'iam.serviceAccounts.getAccessToken' denied on resource (or it may not exist).

This error could be caused by a missing IAM policy binding on the target IAM service account.\nFor more information, refer to the Workload Identity documentation:

https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#authenticating_to\n\n`","stacktrace":"sigs.k8s.io/controller-runtime/pkg/log.(*DelegatingLogger).Error\n\tsigs.k8s.io/controller-runtime@v0.9.2/pkg/log/deleg.go:144\nmain.main\n\tdataproc.googleapis.com/dpk8s/agent/cmd/agent/agent.go:35\nruntime.main\n\truntime/proc.go:250"}
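
For reference, the binding that the error message and the linked Workload Identity documentation point to would look roughly like the sketch below if the Dataproc agent is still expected to impersonate the Compute Engine default service account. This is only an illustration, not a confirmed fix: the data source, the resource name, and the NAMESPACE placeholder are assumptions; the project ID and KSA names are taken from the earlier example.

# Sketch only: lets the Dataproc-on-GKE KSAs (agent, spark-driver, spark-executor)
# impersonate the Compute Engine default service account
# (PROJECT_NUMBER-compute@developer.gserviceaccount.com). NAMESPACE is a placeholder.
data "google_project" "project" {
  project_id = "myproject"
}

resource "google_service_account_iam_member" "default_sa_workload_identity" {
  for_each = toset(["agent", "spark-driver", "spark-executor"])

  service_account_id = "projects/${data.google_project.project.project_id}/serviceAccounts/${data.google_project.project.number}-compute@developer.gserviceaccount.com"
  role               = "roles/iam.workloadIdentityUser"
  member             = "serviceAccount:${data.google_project.project.project_id}.svc.id.goog[NAMESPACE/${each.key}]"
}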

ggtisc commented 1 week ago

Hi @Prudhvi0717!

Please answer the following questions:

  1. Are you facing the same issue? If not, please share your own description.
  2. Do you have the same code? If not, please share your own code.
  3. Are you getting the same output? If not, please share your own output.
  4. What steps did you follow to trigger this issue?
  5. Which Terraform version and Google provider version are you using? Please share both, and specify whether you are using google-beta (see the sketch after this list for where these are usually pinned).
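
Regarding question 5, here is a minimal sketch of where those versions are usually pinned; the constraint values are placeholders, not taken from this issue.

# Sketch only: typical version pinning block; the constraints shown are examples.
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">= 5.30.0"
    }
  }
}

Running terraform version in the initialized working directory also prints the exact CLI and provider versions in use.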