databricks / terraform-provider-databricks

Databricks Terraform Provider
https://registry.terraform.io/providers/databricks/databricks/latest

[ISSUE] Recreating VPC for workspace fails on apply #732

Closed: steve148 closed this issue 3 years ago

steve148 commented 3 years ago

Terraform Version

Terraform version 0.14.0, provider version 0.34.0

Affected Resource(s)

  * databricks_mws_networks
  * databricks_mws_workspaces

Environment variable names

n/a

Terraform Configuration Files

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.70.0"

  name = local.prefix
  cidr = var.cidr_block
  azs  = data.aws_availability_zones.available.names
  tags = var.tags

  enable_dns_hostnames = true
  enable_nat_gateway   = true
  create_igw           = true

  public_subnets  = var.public_subnets
  private_subnets = var.private_subnets

  default_security_group_egress = [{
    cidr_blocks = "0.0.0.0/0"
  }]

  default_security_group_ingress = [{
    description = "Allow all internal TCP and UDP"
    self        = true
  }]
}

# Register VPC and its components with databricks.
resource "databricks_mws_networks" "this" {
  provider           = databricks.mws
  account_id         = var.databricks_account_id
  network_name       = "${local.prefix}-network"
  security_group_ids = [module.vpc.default_security_group_id]
  subnet_ids         = module.vpc.private_subnets
  vpc_id             = module.vpc.vpc_id
}

resource "databricks_mws_workspaces" "this" {
  provider   = databricks.mws
  account_id = var.databricks_account_id
  aws_region = var.region

  # Name the workspace and its deploy
  workspace_name  = local.prefix
  deployment_name = local.prefix

  credentials_id                           = databricks_mws_credentials.this.credentials_id
  storage_configuration_id                 = databricks_mws_storage_configurations.this.storage_configuration_id
  network_id                               = databricks_mws_networks.this.network_id
  managed_services_customer_managed_key_id = databricks_mws_customer_managed_keys.this.customer_managed_key_id
}

Debug Output

n/a

Panic Output

n/a

Expected Behavior

The end goal was to change the CIDR block for the VPC. The plan showed the following for the Databricks-related resources (minus the specific IDs).

  # databricks_mws_networks.this must be replaced
-/+ resource "databricks_mws_networks" "this" {
      ~ creation_time      = 1616179359287 -> (known after apply)
      ~ id                 = "4/c" -> (known after apply)
      ~ network_id         = "C" -> (known after apply)
      ~ security_group_ids = [
          - "sg-0",
        ] -> (known after apply) # forces replacement
      ~ subnet_ids         = [
          - "subnet-a",
          - "subnet-b",
          - "subnet-c
          - "subnet-d
          - "subnet-e
          - "subnet-F",
        ] -> (known after apply) # forces replacement
      ~ vpc_id             = "vpc-0" -> (known after apply) # forces replacement
      ~ vpc_status         = "VALID" -> (known after apply)
      ~ workspace_id       = 3 -> (known after apply)
        # (2 unchanged attributes hidden)

      + error_messages {
          + error_message = (known after apply)
          + error_type    = (known after apply)
        }

      + vpc_endpoints {
          + dataplane_relay = (known after apply)
          + rest_api        = (known after apply)
        }
    }

  # databricks_mws_workspaces.this will be updated in-place
  ~ resource "databricks_mws_workspaces" "this" {
        id                                       = "4/3"
      ~ network_id                               = "c" -> (known after apply)
        # (14 unchanged attributes hidden)
    }

Ideally, the new VPC and its sub-components would have been created first, registered with the Databricks workspace as the new network configuration, and then the old VPC and sub-components would have been cleaned up.
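
For reference, the kind of change that produced this plan would look roughly like the following (the CIDR and subnet values here are hypothetical, not taken from the issue). Changing the VPC CIDR forces the AWS module to replace the VPC, its subnets, and the default security group, which in turn forces databricks_mws_networks to be replaced.

# terraform.tfvars -- hypothetical replacement values; a new VPC CIDR means
# new subnet and security group IDs, all marked "forces replacement" above.
cidr_block      = "10.1.0.0/16"
public_subnets  = ["10.1.0.0/22", "10.1.4.0/22", "10.1.8.0/22"]
private_subnets = ["10.1.12.0/22", "10.1.16.0/22", "10.1.20.0/22"]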

Actual Behavior

When applying the plan, it failed with the following error.

Error: INVALID_STATE: Unable to delete, Network is being used by active workspace 3612852022183645

I later found out this is because a workspace can have its network configuration updated but not deleted (while the workspace is active).

Steps to Reproduce

  1. Change the CIDR block for the VPC (var.cidr_block) in the configuration above.
  2. terraform apply

Important Factoids

nfx commented 3 years ago

@steve148 Please do the workspace update in multiple steps. E.g. create a second network, commit, apply. Then, in the next commit/apply, switch the workspace to the new network. Then, in the third commit/apply, remove the older network.

This is currently a limitation of the platform. Would you also be able to PR in the documentation bit about it? :)
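
A minimal sketch of that multi-step flow, assuming a second VPC module (here called module.new_vpc) already describes the replacement VPC; the resource name "new", the module name, and the "-v2" suffix are illustrative, not taken from the issue.

# Step 1 (first commit/apply): register the new VPC as a second network
# configuration alongside the existing one.
resource "databricks_mws_networks" "new" {
  provider           = databricks.mws
  account_id         = var.databricks_account_id
  network_name       = "${local.prefix}-network-v2"
  security_group_ids = [module.new_vpc.default_security_group_id]
  subnet_ids         = module.new_vpc.private_subnets
  vpc_id             = module.new_vpc.vpc_id
}

# Step 2 (second commit/apply): point the workspace at the new network
# registration; all other workspace arguments stay as they were.
resource "databricks_mws_workspaces" "this" {
  provider   = databricks.mws
  account_id = var.databricks_account_id
  aws_region = var.region

  workspace_name  = local.prefix
  deployment_name = local.prefix

  credentials_id                           = databricks_mws_credentials.this.credentials_id
  storage_configuration_id                 = databricks_mws_storage_configurations.this.storage_configuration_id
  network_id                               = databricks_mws_networks.new.network_id
  managed_services_customer_managed_key_id = databricks_mws_customer_managed_keys.this.customer_managed_key_id
}

# Step 3 (third commit/apply): delete the old databricks_mws_networks "this"
# resource from the configuration, now that the workspace no longer uses it.

Because the workspace keeps running while its network_id is switched, the old network configuration is only removed once nothing references it, which should avoid the INVALID_STATE error above.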

nfx commented 3 years ago

There's also a somewhat related bug, #649.

nfx commented 3 years ago

@steve148 I've raised this issue internally. Meanwhile, if you want to do an update, please test out the update changes from #734, as the current update is broken.