databricks / terraform-provider-databricks

Databricks Terraform Provider
https://registry.terraform.io/providers/databricks/databricks/latest

[ISSUE] Issue with `databricks_secret_scope` resource when databricks workspace is destroyed/recreated due to change in machine learning workspace id #2233

Open satyakrish opened 1 year ago

satyakrish commented 1 year ago

Configuration


```hcl
terraform {
  required_providers {
    databricks = {
      source = "databricks/databricks"
    }
  }
}

provider "databricks" {
  host                        = module.databricks.workspace_url
  azure_workspace_resource_id = module.databricks.id
}

module "mlw" {
  source = "git::https://xxx"
  depends_on = [
  ]
  name                  = var.name
  namespace             = local.namespace
  resource_group_name   = azurerm_resource_group.mlw_rg.name
  location              = var.location
  tags                  = var.tags
  container_registry_id = azurerm_container_registry.acr.id
  diagnostic_settings   = var.diagnostic_settings
  key_vault_id          = module.keyvault.id

  workspace = {
    application_insights_id       = azurerm_application_insights.appi.id
    high_business_impact          = true
    public_network_access_enabled = true
    identities = [
      {
        type = "SystemAssigned"
      }
    ]
    role_based_access = local.role_based_access
  }

  network = {
    vnet_id            = module.dmz_vnet.vnet.id
    training_subnet_id = module.dmz_vnet.subnets.training.id
    plink_subnet_id    = module.dmz_vnet.subnets.plink.id
  }

  private_dns_zone = {
    resource_group_name = var.private_dns_zone.resource_group_name
  }

  storage_account = {
    id                   = module.storage.id
    primary_access_key   = module.storage.primary_access_key
    storage_container_id = module.storage.container["blob-datastore"].resource_manager_id
  }
}

module "databricks" {
  depends_on          = [module.dmz_vnet]
  source              = "git::https://xxx"
  name                = var.databricks.name
  namespace           = var.namespace
  separator           = ""
  resource_group_name = azurerm_resource_group.mlw_rg.name
  location            = azurerm_resource_group.mlw_rg.location
  tags                = var.tags
  sku                 = "premium"

  custom_parameters = {
    machine_learning_workspace_id                        = module.mlw.id
    no_public_ip                                         = var.databricks.no_public_ip
    private_subnet_name                                  = module.dmz_vnet.subnets.dbricks_container.name
    private_subnet_network_security_group_association_id = module.dmz_vnet.subnets.dbricks_container.id
    public_subnet_name                                   = module.dmz_vnet.subnets.dbricks_host.name
    public_subnet_network_security_group_association_id  = module.dmz_vnet.subnets.dbricks_host.id
    virtual_network_id                                   = module.dmz_vnet.vnet.id
  }
}

resource "databricks_secret_scope" "ml" {
  depends_on               = [module.databricks, module.mlw]
  name                     = var.databricks.secret_scope_name
  initial_manage_principal = "users"
}
```

Expected Behavior

1. On running `terraform apply` for the first time, all the required resources are provisioned successfully (machine learning workspace, databricks workspace, databricks secret scope).

2. I then wanted to change the name of the machine learning workspace resource. This causes a destroy and recreate of the machine learning workspace resource, and also of the databricks workspace resource.

3. Expectation: since the databricks secret scope resource is dependent on the databricks workspace, it should also be destroyed and recreated. So after changing the machine learning workspace name, `terraform plan` should succeed and show that the following resources will be destroyed and recreated:

   a. machine learning workspace
   b. databricks workspace
   c. databricks secret scope

Actual Behavior

After changing the machine learning workspace name, `terraform plan` fails with:

```
│ Error: cannot read secret scope: default auth: cannot configure default credentials
│
│   with databricks_secret_scope.ml,
│   on main.tf line 190, in resource "databricks_secret_scope" "ml":
│  190: resource "databricks_secret_scope" "ml" {
```

Presumably the provider cannot authenticate at plan time: once the workspace is planned for replacement, `module.databricks.workspace_url` and `module.databricks.id` become unknown, so the provider falls back to (and fails) default credential resolution while trying to refresh the existing secret scope.

Steps to Reproduce

satyakrish commented 1 year ago

Can somebody look into this one? It is really affecting our deployment, and we need a fix or workaround for this.

shinhf commented 1 year ago

I have a similar issue. My workaround was destroying the secret scope manually (using `terraform destroy`) and then recreating it with the new branch.
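A declarative way to express the intended "recreate the scope with the workspace" dependency is `replace_triggered_by` (a sketch, untested against this configuration; `replace_triggered_by` requires Terraform >= 1.2 and only accepts managed resources, so the module output is wrapped in a `terraform_data` resource, which requires Terraform >= 1.4):

```hcl
# Track the workspace id in a managed resource so it can be referenced
# from replace_triggered_by (module outputs are not allowed there).
resource "terraform_data" "databricks_workspace" {
  input = module.databricks.id
}

resource "databricks_secret_scope" "ml" {
  name                     = var.databricks.secret_scope_name
  initial_manage_principal = "users"

  lifecycle {
    # Plan a replacement of the scope whenever the workspace id changes,
    # i.e. whenever the workspace itself is destroyed and recreated.
    replace_triggered_by = [terraform_data.databricks_workspace]
  }
}
```

Note this makes the replacement intent explicit in the plan, but it may not by itself avoid the error above, since the provider still tries to read the old scope during refresh; combining it with the manual targeted destroy may still be necessary.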