databricks / terraform-provider-databricks

Databricks Terraform Provider
https://registry.terraform.io/providers/databricks/databricks/latest

[ISSUE] Issue with `databricks_group` data #2058

Open niraj8241 opened 1 year ago

niraj8241 commented 1 year ago

Configuration

Source Module

data "databricks_group" "workspace_groups" {
  depends_on   = [databricks_service_principal.sp]
  for_each     = toset(var.workspace_groups)
  display_name = each.value
}

data "databricks_group" "global_groups" {
  depends_on   = [databricks_service_principal.sp]
  provider     = databricks.mws
  for_each     = toset(var.global_groups)
  display_name = each.value
}

#---------------------------
# CREATE SERVICE PRINCIPAL
#---------------------------
resource "databricks_service_principal" "sp" {
  display_name               = "niraj_test1"
  allow_cluster_create       = true
  allow_instance_pool_create = false
  databricks_sql_access      = true
}

#------------------------------------------
# ADD SERVICE PRINCIPAL TO WORKSPACE GROUP
#------------------------------------------
resource "databricks_group_member" "workspace_group_members" {
  for_each  = data.databricks_group.workspace_groups
  group_id  = each.value.id # This expects the group to exist before we add.
  member_id = databricks_service_principal.sp.id
}

#-----------------------------------------------
# ADD SERVICE PRINCIPAL TO ACCOUNT/GLOBAL GROUP
#-----------------------------------------------
resource "databricks_group_member" "global_group_members" {
  for_each  = data.databricks_group.global_groups
  provider  = databricks.mws
  group_id  = each.value.id
  member_id = databricks_service_principal.sp.id
}

Child Module

resource "databricks_group" "global_group" {
  provider                   = databricks.mws
  display_name               = "runway-test-dev"
  allow_cluster_create       = true
  allow_instance_pool_create = false
  databricks_sql_access      = true
}

resource "databricks_group" "workspace_group" {
  display_name               = "runway-test-dev"
  allow_cluster_create       = true
  allow_instance_pool_create = false
  databricks_sql_access      = true
}

module "test" {
  source           = "../tf_test"
  workspace_groups = ["runway-test-dev"]
  global_groups    = ["runway-test-dev"]
}

Provider Configuration

terraform {
  required_providers {
    databricks = {
      source  = "databricks/databricks"
      version = ">=1.9"
    }
  }
}

provider "databricks" {
  alias      = "mws"
  host       = "https://accounts.cloud.databricks.com"
  account_id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}

Expected Behavior

Actual Behavior

![image](https://user-images.githubusercontent.com/13007542/222309951-91161b15-e8e4-48f3-88bc-b4cf54adba91.png)

Steps to Reproduce

1. `terraform apply`
2. `terraform destroy`

Terraform and provider versions

Terraform v1.2.7
on linux_amd64

Debug Output

Important Factoids

When I run `terraform destroy` with `-refresh=false`, the destroy works fine.
nkvuong commented 1 year ago

This is a hidden dependency issue. You have not specified any dependency between `databricks_group.global_group` / `databricks_group.workspace_group` and `module.test`, so Terraform destroys those resources in parallel. Since the group is deleted first, deleting the group membership fails.
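
A minimal sketch of one possible fix, assuming Terraform 0.13+ (which allows `depends_on` on module blocks; the reporter is on 1.2.7): declare the dependency on the module call, so the memberships inside the module are created after, and destroyed before, the groups.

module "test" {
  source           = "../tf_test"
  workspace_groups = ["runway-test-dev"]
  global_groups    = ["runway-test-dev"]

  # Explicit dependency: the module's group memberships are now
  # destroyed before the groups themselves are destroyed.
  depends_on = [
    databricks_group.workspace_group,
    databricks_group.global_group,
  ]
}

Alternatively, referencing the resource attributes directly (e.g. `workspace_groups = [databricks_group.workspace_group.display_name]`) gives Terraform the same ordering information implicitly, without `depends_on`.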