Azure-Samples / databricks-observability

OpenTelemetry Demo with Azure Databricks and Azure Monitor

Running terraform apply twice required for successful deployment #5

Open mariekekortsmit opened 10 months ago

mariekekortsmit commented 10 months ago

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Setting up the resources from scratch results in errors during the first terraform apply:

az login
terraform init
terraform plan
terraform apply

Any log messages given by the failure

makortsm@DESKTOP-KJ383PJ:~/databricks-observability/$ terraform apply
module.keyvault.data.azurerm_client_config.current: Reading...
module.keyvault.data.azurerm_client_config.current: Read complete after 0s [id=Y2xpZW50Q29uZmlncy9jbGllbnRJZD0wNGIwNzc5NS04ZGRiLTQ2MWEtYmJlZS0wMmY5ZTFiZjdiNDY7b2JqZWN0SWQ9MDFhNDlkMWYtYTQxNi00ODgxLTk1NzUtNWE3ODNlNzk4Nzc3O3N1YnNjcmlwdGlvbklkPWM5MGE4ZWM1LWE5YTYtNGFjYi05NzFkLTY2NzZmM2M3ZThkMjt0ZW5hbnRJZD0xNmIzYzAxMy1kMzAwLTQ2OGQtYWM2NC03ZWRhMDgyMGI2ZDM=]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  # random_id.storage_account will be created
  + resource "random_id" "storage_account" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 8
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

  # module.app-insights.azurerm_application_insights.default will be created
  + resource "azurerm_application_insights" "default" {
      + app_id                                = (known after apply)
      + application_type                      = "other"
      + connection_string                     = (sensitive value)
      + daily_data_cap_in_gb                  = (known after apply)
      + daily_data_cap_notifications_disabled = (known after apply)
      + disable_ip_masking                    = false
      + force_customer_storage_for_profiler   = false
      + id                                    = (known after apply)
      + instrumentation_key                   = (sensitive value)
      + internet_ingestion_enabled            = true
      + internet_query_enabled                = true
      + local_authentication_disabled         = false
      + location                              = "eastus"
      + name                                  = (known after apply)
      + resource_group_name                   = "rg-dbobs"
      + retention_in_days                     = 90
      + sampling_percentage                   = 100
      + workspace_id                          = (known after apply)
    }

  # module.app-insights.azurerm_log_analytics_workspace.default will be created
  + resource "azurerm_log_analytics_workspace" "default" {
      + allow_resource_only_permissions    = true
      + daily_quota_gb                     = -1
      + id                                 = (known after apply)
      + internet_ingestion_enabled         = true
      + internet_query_enabled             = true
      + local_authentication_disabled      = false
      + location                           = "eastus"
      + name                               = (known after apply)
      + primary_shared_key                 = (sensitive value)
      + reservation_capacity_in_gb_per_day = (known after apply)
      + resource_group_name                = "rg-dbobs"
      + retention_in_days                  = 30
      + secondary_shared_key               = (sensitive value)
      + sku                                = "PerGB2018"
      + workspace_id                       = (known after apply)
    }

  # module.databricks.data.azurerm_key_vault_secret.db-pw will be read during apply
  # (config refers to values not yet known)
 <= data "azurerm_key_vault_secret" "db-pw" {
      + content_type            = (known after apply)
      + expiration_date         = (known after apply)
      + id                      = (known after apply)
      + key_vault_id            = (known after apply)
      + name                    = "sql-server-password"
      + not_before_date         = (known after apply)
      + resource_id             = (known after apply)
      + resource_versionless_id = (known after apply)
      + tags                    = (known after apply)
      + value                   = (sensitive value)
      + versionless_id          = (known after apply)
    }

  # module.databricks.data.databricks_spark_version.latest_lts will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "databricks_spark_version" "latest_lts" {
      + id                = (known after apply)
      + long_term_support = true
    }

  # module.databricks.azurerm_databricks_workspace.adb will be created
  + resource "azurerm_databricks_workspace" "adb" {
      + customer_managed_key_enabled          = false
      + disk_encryption_set_id                = (known after apply)
      + id                                    = (known after apply)
      + infrastructure_encryption_enabled     = false
      + location                              = "eastus"
      + managed_disk_identity                 = (known after apply)
      + managed_resource_group_id             = (known after apply)
      + managed_resource_group_name           = (known after apply)
      + name                                  = (known after apply)
      + network_security_group_rules_required = (known after apply)
      + public_network_access_enabled         = true
      + resource_group_name                   = "rg-dbobs"
      + sku                                   = "standard"
      + storage_account_identity              = (known after apply)
      + workspace_id                          = (known after apply)
      + workspace_url                         = (known after apply)
    }

  # module.databricks.azurerm_storage_account.dbstorage will be created
  + resource "azurerm_storage_account" "dbstorage" {
      + access_tier                       = (known after apply)
      + account_kind                      = "StorageV2"
      + account_replication_type          = "LRS"
      + account_tier                      = "Standard"
      + allow_nested_items_to_be_public   = true
      + cross_tenant_replication_enabled  = true
      + default_to_oauth_authentication   = false
      + enable_https_traffic_only         = true
      + id                                = (known after apply)
      + infrastructure_encryption_enabled = false
      + is_hns_enabled                    = false
      + large_file_share_enabled          = (known after apply)
      + location                          = "eastus"
      + min_tls_version                   = "TLS1_2"
      + name                              = (known after apply)
      + nfsv3_enabled                     = false
      + primary_access_key                = (sensitive value)
      + primary_blob_connection_string    = (sensitive value)
      + primary_blob_endpoint             = (known after apply)
      + primary_blob_host                 = (known after apply)
      + primary_connection_string         = (sensitive value)
      + primary_dfs_endpoint              = (known after apply)
      + primary_dfs_host                  = (known after apply)
      + primary_file_endpoint             = (known after apply)
      + primary_file_host                 = (known after apply)
      + primary_location                  = (known after apply)
      + primary_queue_endpoint            = (known after apply)
      + primary_queue_host                = (known after apply)
      + primary_table_endpoint            = (known after apply)
      + primary_table_host                = (known after apply)
      + primary_web_endpoint              = (known after apply)
      + primary_web_host                  = (known after apply)
      + public_network_access_enabled     = true
      + queue_encryption_key_type         = "Service"
      + resource_group_name               = "rg-dbobs"
      + secondary_access_key              = (sensitive value)
      + secondary_blob_connection_string  = (sensitive value)
      + secondary_blob_endpoint           = (known after apply)
      + secondary_blob_host               = (known after apply)
      + secondary_connection_string       = (sensitive value)
      + secondary_dfs_endpoint            = (known after apply)
      + secondary_dfs_host                = (known after apply)
      + secondary_file_endpoint           = (known after apply)
      + secondary_file_host               = (known after apply)
      + secondary_location                = (known after apply)
      + secondary_queue_endpoint          = (known after apply)
      + secondary_queue_host              = (known after apply)
      + secondary_table_endpoint          = (known after apply)
      + secondary_table_host              = (known after apply)
      + secondary_web_endpoint            = (known after apply)
      + secondary_web_host                = (known after apply)
      + sftp_enabled                      = false
      + shared_access_key_enabled         = true
      + table_encryption_key_type         = "Service"
    }

  # module.databricks.azurerm_storage_container.dbstorage will be created
  + resource "azurerm_storage_container" "dbstorage" {
      + container_access_type   = "private"
      + has_immutability_policy = (known after apply)
      + has_legal_hold          = (known after apply)
      + id                      = (known after apply)
      + metadata                = (known after apply)
      + name                    = "data"
      + resource_manager_id     = (known after apply)
      + storage_account_name    = (known after apply)
    }

  # module.databricks.databricks_cluster.default will be created
  + resource "databricks_cluster" "default" {
      + autotermination_minutes      = 20
      + cluster_id                   = (known after apply)
      + cluster_name                 = "demo-cluster"
      + default_tags                 = (known after apply)
      + driver_instance_pool_id      = (known after apply)
      + driver_node_type_id          = (known after apply)
      + enable_elastic_disk          = (known after apply)
      + enable_local_disk_encryption = (known after apply)
      + id                           = (known after apply)
      + node_type_id                 = "Standard_DS3_v2"
      + num_workers                  = 0
      + spark_conf                   = (known after apply)
      + spark_env_vars               = (known after apply)
      + spark_version                = (known after apply)
      + state                        = (known after apply)
      + url                          = (known after apply)

      + autoscale {
          + max_workers = 3
          + min_workers = 2
        }

      + cluster_log_conf {
          + dbfs {
              + destination = "dbfs:/cluster-logs"
            }
        }

      + init_scripts {
          + dbfs {
              + destination = (known after apply)
            }
        }
    }

  # module.databricks.databricks_dbfs_file.agent will be created
  + resource "databricks_dbfs_file" "agent" {
      + dbfs_path = (known after apply)
      + file_size = (known after apply)
      + id        = (known after apply)
      + md5       = "different"
      + path      = "/observability/applicationinsights-agent.jar"
      + source    = "applicationinsights-agent.jar"
    }

  # module.databricks.databricks_dbfs_file.applicationinsights-driver-json will be created
  + resource "databricks_dbfs_file" "applicationinsights-driver-json" {
      + dbfs_path = (known after apply)
      + file_size = (known after apply)
      + id        = (known after apply)
      + md5       = "different"
      + path      = "/observability/applicationinsights-driver.json"
      + source    = "modules/databricks/applicationinsights-driver.json"
    }

  # module.databricks.databricks_dbfs_file.applicationinsights-executor-json will be created
  + resource "databricks_dbfs_file" "applicationinsights-executor-json" {
      + dbfs_path = (known after apply)
      + file_size = (known after apply)
      + id        = (known after apply)
      + md5       = "different"
      + path      = "/observability/applicationinsights-executor.json"
      + source    = "modules/databricks/applicationinsights-executor.json"
    }

  # module.databricks.databricks_dbfs_file.init-observability will be created
  + resource "databricks_dbfs_file" "init-observability" {
      + dbfs_path = (known after apply)
      + file_size = (known after apply)
      + id        = (known after apply)
      + md5       = "different"
      + path      = "/observability/init-observability.sh"
      + source    = "modules/databricks/init-observability.sh"
    }

  # module.databricks.databricks_dbfs_file.log4j2-properties will be created
  + resource "databricks_dbfs_file" "log4j2-properties" {
      + dbfs_path = (known after apply)
      + file_size = (known after apply)
      + id        = (known after apply)
      + md5       = "different"
      + path      = "/observability/log4j2.properties"
      + source    = "modules/databricks/log4j2.properties"
    }

  # module.databricks.databricks_library.opentelemetry will be created
  + resource "databricks_library" "opentelemetry" {
      + cluster_id = (known after apply)
      + id         = (known after apply)

      + pypi {
          + package = "azure-monitor-opentelemetry~=1.0.0"
        }
    }

  # module.databricks.databricks_notebook.sample-telemetry-notebook will be created
  + resource "databricks_notebook" "sample-telemetry-notebook" {
      + format      = "SOURCE"
      + id          = (known after apply)
      + md5         = "different"
      + object_id   = (known after apply)
      + object_type = (known after apply)
      + path        = "/Shared/sample-telemetry-notebook"
      + source      = "modules/databricks/notebooks/sample-telemetry-notebook.py"
      + url         = (known after apply)
    }

  # module.databricks.databricks_notebook.telemetry-helper will be created
  + resource "databricks_notebook" "telemetry-helper" {
      + format      = "SOURCE"
      + id          = (known after apply)
      + md5         = "different"
      + object_id   = (known after apply)
      + object_type = (known after apply)
      + path        = "/Shared/telemetry-helper"
      + source      = "modules/databricks/notebooks/telemetry-helper.py"
      + url         = (known after apply)
    }

  # module.databricks.databricks_secret.app-insights-connection-string will be created
  + resource "databricks_secret" "app-insights-connection-string" {
      + config_reference       = (known after apply)
      + id                     = (known after apply)
      + key                    = "app-insights-connection-string"
      + last_updated_timestamp = (known after apply)
      + scope                  = "terraform-demo-scope"
      + string_value           = (sensitive value)
    }

  # module.databricks.databricks_secret.metastore-password will be created
  + resource "databricks_secret" "metastore-password" {
      + config_reference       = (known after apply)
      + id                     = (known after apply)
      + key                    = "metastore-password"
      + last_updated_timestamp = (known after apply)
      + scope                  = "terraform-demo-scope"
      + string_value           = (sensitive value)
    }

  # module.databricks.databricks_secret.storage-key will be created
  + resource "databricks_secret" "storage-key" {
      + config_reference       = (known after apply)
      + id                     = (known after apply)
      + key                    = (known after apply)
      + last_updated_timestamp = (known after apply)
      + scope                  = "terraform-demo-scope"
      + string_value           = (sensitive value)
    }

  # module.databricks.databricks_secret_scope.default will be created
  + resource "databricks_secret_scope" "default" {
      + backend_type             = (known after apply)
      + id                       = (known after apply)
      + initial_manage_principal = "users"
      + name                     = "terraform-demo-scope"
    }

  # module.keyvault.azurerm_key_vault.adb_kv will be created
  + resource "azurerm_key_vault" "adb_kv" {
      + access_policy                 = [
          + {
              + object_id          = "01a49d1f-a416-4881-9575-5a783e798777"
              + secret_permissions = [
                  + "Get",
                  + "List",
                  + "Set",
                  + "Delete",
                  + "Recover",
                  + "Purge",
                ]
              + tenant_id          = "16b3c013-d300-468d-ac64-7eda0820b6d3"
            },
        ]
      + id                            = (known after apply)
      + location                      = "eastus"
      + name                          = (known after apply)
      + public_network_access_enabled = true
      + purge_protection_enabled      = false
      + resource_group_name           = "rg-dbobs"
      + sku_name                      = "standard"
      + soft_delete_retention_days    = 7
      + tenant_id                     = "16b3c013-d300-468d-ac64-7eda0820b6d3"
      + vault_uri                     = (known after apply)
    }

  # module.rg.azurerm_resource_group.rg will be created
  + resource "azurerm_resource_group" "rg" {
      + id       = (known after apply)
      + location = "eastus"
      + name     = "rg-dbobs"
    }

  # module.sql-database.azurerm_key_vault_secret.db_pw will be created
  + resource "azurerm_key_vault_secret" "db_pw" {
      + id                      = (known after apply)
      + key_vault_id            = (known after apply)
      + name                    = "sql-server-password"
      + resource_id             = (known after apply)
      + resource_versionless_id = (known after apply)
      + value                   = (sensitive value)
      + version                 = (known after apply)
      + versionless_id          = (known after apply)
    }

  # module.sql-database.azurerm_mssql_database.sql-db will be created
  + resource "azurerm_mssql_database" "sql-db" {
      + auto_pause_delay_in_minutes         = (known after apply)
      + collation                           = (known after apply)
      + create_mode                         = "Default"
      + creation_source_database_id         = (known after apply)
      + geo_backup_enabled                  = true
      + id                                  = (known after apply)
      + ledger_enabled                      = (known after apply)
      + license_type                        = (known after apply)
      + maintenance_configuration_name      = (known after apply)
      + max_size_gb                         = (known after apply)
      + min_capacity                        = (known after apply)
      + name                                = "metastoredb"
      + read_replica_count                  = (known after apply)
      + read_scale                          = (known after apply)
      + restore_point_in_time               = (known after apply)
      + sample_name                         = (known after apply)
      + server_id                           = (known after apply)
      + sku_name                            = "Basic"
      + storage_account_type                = "Geo"
      + transparent_data_encryption_enabled = true
      + zone_redundant                      = (known after apply)
    }

  # module.sql-database.azurerm_mssql_firewall_rule.azure-services will be created
  + resource "azurerm_mssql_firewall_rule" "azure-services" {
      + end_ip_address   = "0.0.0.0"
      + id               = (known after apply)
      + name             = "Allow"
      + server_id        = (known after apply)
      + start_ip_address = "0.0.0.0"
    }

  # module.sql-database.azurerm_mssql_server.sql-server will be created
  + resource "azurerm_mssql_server" "sql-server" {
      + administrator_login                  = (known after apply)
      + administrator_login_password         = (sensitive value)
      + connection_policy                    = "Default"
      + fully_qualified_domain_name          = (known after apply)
      + id                                   = (known after apply)
      + location                             = "eastus"
      + minimum_tls_version                  = "1.2"
      + name                                 = (known after apply)
      + outbound_network_restriction_enabled = false
      + primary_user_assigned_identity_id    = (known after apply)
      + public_network_access_enabled        = true
      + resource_group_name                  = "rg-dbobs"
      + restorable_dropped_database_ids      = (known after apply)
      + version                              = "12.0"
    }

  # module.sql-database.random_id.username will be created
  + resource "random_id" "username" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 6
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

  # module.sql-database.random_password.password will be created
  + resource "random_password" "password" {
      + bcrypt_hash = (sensitive value)
      + id          = (known after apply)
      + length      = 30
      + lower       = true
      + min_lower   = 0
      + min_numeric = 0
      + min_special = 0
      + min_upper   = 0
      + number      = true
      + numeric     = true
      + result      = (sensitive value)
      + special     = true
      + upper       = true
    }

  # module.databricks.module.periodic-job.databricks_job.main will be created
  + resource "databricks_job" "main" {
      + always_running      = false
      + format              = (known after apply)
      + id                  = (known after apply)
      + max_concurrent_runs = 1
      + name                = "Periodic job"
      + url                 = (known after apply)

      + schedule {
          + pause_status           = (known after apply)
          + quartz_cron_expression = "0 * * * * ?"
          + timezone_id            = "UTC"
        }

      + task {
          + existing_cluster_id = (known after apply)
          + retry_on_timeout    = (known after apply)
          + task_key            = "a"

          + notebook_task {
              + notebook_path = "/Shared/sample-notebook"
            }
        }
    }

  # module.databricks.module.periodic-job.databricks_notebook.main will be created
  + resource "databricks_notebook" "main" {
      + format      = "SOURCE"
      + id          = (known after apply)
      + md5         = "different"
      + object_id   = (known after apply)
      + object_type = (known after apply)
      + path        = "/Shared/sample-notebook"
      + source      = "modules/databricks/notebook-job/../notebooks/sample-notebook.py"
      + url         = (known after apply)
    }

  # module.databricks.module.streaming-job.databricks_job.main will be created
  + resource "databricks_job" "main" {
      + always_running      = false
      + format              = (known after apply)
      + id                  = (known after apply)
      + max_concurrent_runs = 1
      + name                = "Streaming job"
      + url                 = (known after apply)

      + schedule {
          + pause_status           = (known after apply)
          + quartz_cron_expression = "0 * * * * ?"
          + timezone_id            = "UTC"
        }

      + task {
          + existing_cluster_id = (known after apply)
          + retry_on_timeout    = (known after apply)
          + task_key            = "a"

          + notebook_task {
              + notebook_path = "/Shared/sample-streaming-notebook"
            }
        }
    }

  # module.databricks.module.streaming-job.databricks_notebook.main will be created
  + resource "databricks_notebook" "main" {
      + format      = "SOURCE"
      + id          = (known after apply)
      + md5         = "different"
      + object_id   = (known after apply)
      + object_type = (known after apply)
      + path        = "/Shared/sample-streaming-notebook"
      + source      = "modules/databricks/notebook-job/../notebooks/sample-streaming-notebook.py"
      + url         = (known after apply)
    }

  # module.databricks.module.telemetry-job.databricks_job.main will be created
  + resource "databricks_job" "main" {
      + always_running      = false
      + format              = (known after apply)
      + id                  = (known after apply)
      + max_concurrent_runs = 1
      + name                = "Telemetry job"
      + url                 = (known after apply)

      + schedule {
          + pause_status           = (known after apply)
          + quartz_cron_expression = "0 * * * * ?"
          + timezone_id            = "UTC"
        }

      + task {
          + existing_cluster_id = (known after apply)
          + retry_on_timeout    = (known after apply)
          + task_key            = "a"

          + notebook_task {
              + notebook_path = "/Shared/sample-telemetry-caller"
            }
        }
    }

  # module.databricks.module.telemetry-job.databricks_notebook.main will be created
  + resource "databricks_notebook" "main" {
      + format      = "SOURCE"
      + id          = (known after apply)
      + md5         = "different"
      + object_id   = (known after apply)
      + object_type = (known after apply)
      + path        = "/Shared/sample-telemetry-caller"
      + source      = "modules/databricks/notebook-job/../notebooks/sample-telemetry-caller.py"
      + url         = (known after apply)
    }

Plan: 33 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + resource_group_name = "rg-dbobs"

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

random_id.storage_account: Creating...
module.sql-database.random_id.username: Creating...
module.sql-database.random_id.username: Creation complete after 0s [id=NOZB1Nf5]
module.sql-database.random_password.password: Creating...
random_id.storage_account: Creation complete after 0s [id=vcC4BURp53Q]
module.sql-database.random_password.password: Creation complete after 0s [id=none]
module.rg.azurerm_resource_group.rg: Creating...
module.rg.azurerm_resource_group.rg: Creation complete after 2s [id=/subscriptions/c90a8ec5-a9a6-4acb-971d-6676f3c7e8d2/resourceGroups/rg-dbobs]
module.keyvault.azurerm_key_vault.adb_kv: Creating...
module.app-insights.azurerm_log_analytics_workspace.default: Creating...
module.sql-database.azurerm_mssql_server.sql-server: Creating...
module.databricks.azurerm_databricks_workspace.adb: Creating...
module.databricks.azurerm_storage_account.dbstorage: Creating...
module.keyvault.azurerm_key_vault.adb_kv: Still creating... [10s elapsed]
module.sql-database.azurerm_mssql_server.sql-server: Still creating... [10s elapsed]
module.databricks.azurerm_databricks_workspace.adb: Still creating... [10s elapsed]
module.app-insights.azurerm_log_analytics_workspace.default: Still creating... [10s elapsed]
module.databricks.azurerm_storage_account.dbstorage: Still creating... [10s elapsed]
module.keyvault.azurerm_key_vault.adb_kv: Still creating... [20s elapsed]
module.databricks.azurerm_databricks_workspace.adb: Still creating... [20s elapsed]
module.app-insights.azurerm_log_analytics_workspace.default: Still creating... [20s elapsed]
module.sql-database.azurerm_mssql_server.sql-server: Still creating... [20s elapsed]
module.databricks.azurerm_storage_account.dbstorage: Still creating... [20s elapsed]
module.databricks.azurerm_storage_account.dbstorage: Creation complete after 26s [id=/subscriptions/c90a8ec5-a9a6-4acb-971d-6676f3c7e8d2/resourceGroups/rg-dbobs/providers/Microsoft.Storage/storageAccounts/stdbobsbdc0b8054469e774]
module.databricks.azurerm_storage_container.dbstorage: Creating...
module.databricks.azurerm_storage_container.dbstorage: Creation complete after 0s [id=https://stdbobsbdc0b8054469e774.blob.core.windows.net/data]
module.keyvault.azurerm_key_vault.adb_kv: Still creating... [30s elapsed]
module.sql-database.azurerm_mssql_server.sql-server: Still creating... [30s elapsed]
module.databricks.azurerm_databricks_workspace.adb: Still creating... [30s elapsed]
module.app-insights.azurerm_log_analytics_workspace.default: Still creating... [30s elapsed]
module.app-insights.azurerm_log_analytics_workspace.default: Creation complete after 36s [id=/subscriptions/c90a8ec5-a9a6-4acb-971d-6676f3c7e8d2/resourceGroups/rg-dbobs/providers/Microsoft.OperationalInsights/workspaces/la-dbobs-bdc0b8054469e774]
module.app-insights.azurerm_application_insights.default: Creating...
module.keyvault.azurerm_key_vault.adb_kv: Still creating... [40s elapsed]
module.sql-database.azurerm_mssql_server.sql-server: Still creating... [40s elapsed]
module.databricks.azurerm_databricks_workspace.adb: Still creating... [40s elapsed]
module.app-insights.azurerm_application_insights.default: Still creating... [10s elapsed]
module.app-insights.azurerm_application_insights.default: Creation complete after 13s [id=/subscriptions/c90a8ec5-a9a6-4acb-971d-6676f3c7e8d2/resourceGroups/rg-dbobs/providers/Microsoft.Insights/components/appi-dbobs-bdc0b8054469e774]
module.keyvault.azurerm_key_vault.adb_kv: Still creating... [50s elapsed]
module.sql-database.azurerm_mssql_server.sql-server: Still creating... [50s elapsed]
module.databricks.azurerm_databricks_workspace.adb: Still creating... [50s elapsed]
module.keyvault.azurerm_key_vault.adb_kv: Still creating... [1m0s elapsed]
module.databricks.azurerm_databricks_workspace.adb: Still creating... [1m0s elapsed]
module.sql-database.azurerm_mssql_server.sql-server: Still creating... [1m0s elapsed]
module.keyvault.azurerm_key_vault.adb_kv: Still creating... [1m10s elapsed]
module.sql-database.azurerm_mssql_server.sql-server: Still creating... [1m10s elapsed]
module.databricks.azurerm_databricks_workspace.adb: Still creating... [1m10s elapsed]
module.sql-database.azurerm_mssql_server.sql-server: Creation complete after 1m16s [id=/subscriptions/c90a8ec5-a9a6-4acb-971d-6676f3c7e8d2/resourceGroups/rg-dbobs/providers/Microsoft.Sql/servers/sqlserver-dbobs-bdc0b8054469e774]
module.sql-database.azurerm_mssql_firewall_rule.azure-services: Creating...
module.sql-database.azurerm_mssql_database.sql-db: Creating...
module.sql-database.azurerm_mssql_firewall_rule.azure-services: Creation complete after 2s [id=/subscriptions/c90a8ec5-a9a6-4acb-971d-6676f3c7e8d2/resourceGroups/rg-dbobs/providers/Microsoft.Sql/servers/sqlserver-dbobs-bdc0b8054469e774/firewallRules/Allow]
module.keyvault.azurerm_key_vault.adb_kv: Still creating... [1m20s elapsed]
module.databricks.azurerm_databricks_workspace.adb: Still creating... [1m20s elapsed]
module.sql-database.azurerm_mssql_database.sql-db: Still creating... [10s elapsed]
module.keyvault.azurerm_key_vault.adb_kv: Still creating... [1m30s elapsed]
module.databricks.azurerm_databricks_workspace.adb: Still creating... [1m30s elapsed]
module.sql-database.azurerm_mssql_database.sql-db: Still creating... [20s elapsed]
module.keyvault.azurerm_key_vault.adb_kv: Still creating... [1m40s elapsed]
module.databricks.azurerm_databricks_workspace.adb: Still creating... [1m40s elapsed]
module.sql-database.azurerm_mssql_database.sql-db: Still creating... [30s elapsed]
module.keyvault.azurerm_key_vault.adb_kv: Still creating... [1m50s elapsed]
module.databricks.azurerm_databricks_workspace.adb: Still creating... [1m50s elapsed]
module.sql-database.azurerm_mssql_database.sql-db: Still creating... [40s elapsed]
module.keyvault.azurerm_key_vault.adb_kv: Still creating... [2m0s elapsed]
module.databricks.azurerm_databricks_workspace.adb: Still creating... [2m0s elapsed]
module.sql-database.azurerm_mssql_database.sql-db: Still creating... [50s elapsed]
module.databricks.azurerm_databricks_workspace.adb: Creation complete after 2m9s [id=/subscriptions/c90a8ec5-a9a6-4acb-971d-6676f3c7e8d2/resourceGroups/rg-dbobs/providers/Microsoft.Databricks/workspaces/adb-dbobs-bdc0b8054469e774]
module.databricks.data.databricks_spark_version.latest_lts: Reading...
module.databricks.module.streaming-job.databricks_notebook.main: Creating...
module.databricks.databricks_dbfs_file.applicationinsights-executor-json: Creating...
module.databricks.databricks_secret_scope.default: Creating...
module.databricks.module.periodic-job.databricks_notebook.main: Creating...
module.databricks.databricks_dbfs_file.log4j2-properties: Creating...
module.databricks.databricks_notebook.sample-telemetry-notebook: Creating...
module.databricks.databricks_dbfs_file.init-observability: Creating...
module.keyvault.azurerm_key_vault.adb_kv: Still creating... [2m10s elapsed]
module.databricks.databricks_dbfs_file.applicationinsights-driver-json: Creating...
module.databricks.databricks_notebook.telemetry-helper: Creating...
module.databricks.databricks_dbfs_file.agent: Creating...
module.sql-database.azurerm_mssql_database.sql-db: Still creating... [1m0s elapsed]
module.databricks.data.databricks_spark_version.latest_lts: Still reading... [10s elapsed]
module.keyvault.azurerm_key_vault.adb_kv: Still creating... [2m20s elapsed]
module.sql-database.azurerm_mssql_database.sql-db: Still creating... [1m10s elapsed]
module.databricks.data.databricks_spark_version.latest_lts: Still reading... [20s elapsed]
module.keyvault.azurerm_key_vault.adb_kv: Still creating... [2m30s elapsed]
module.sql-database.azurerm_mssql_database.sql-db: Still creating... [1m20s elapsed]
module.databricks.data.databricks_spark_version.latest_lts: Still reading... [30s elapsed]
module.keyvault.azurerm_key_vault.adb_kv: Still creating... [2m40s elapsed]
module.keyvault.azurerm_key_vault.adb_kv: Creation complete after 2m44s [id=/subscriptions/c90a8ec5-a9a6-4acb-971d-6676f3c7e8d2/resourceGroups/rg-dbobs/providers/Microsoft.KeyVault/vaults/kvdbobsbdc0b8054469e774]
module.sql-database.azurerm_key_vault_secret.db_pw: Creating...
module.sql-database.azurerm_key_vault_secret.db_pw: Creation complete after 1s [id=https://kvdbobsbdc0b8054469e774.vault.azure.net/secrets/sql-server-password/d977d8ccedef45319ea5a93300f3aca4]
module.databricks.data.azurerm_key_vault_secret.db-pw: Reading...
module.databricks.data.azurerm_key_vault_secret.db-pw: Read complete after 0s [id=https://kvdbobsbdc0b8054469e774.vault.azure.net/secrets/sql-server-password/d977d8ccedef45319ea5a93300f3aca4]
module.sql-database.azurerm_mssql_database.sql-db: Still creating... [1m30s elapsed]
module.databricks.data.databricks_spark_version.latest_lts: Still reading... [40s elapsed]
module.sql-database.azurerm_mssql_database.sql-db: Still creating... [1m40s elapsed]
module.databricks.data.databricks_spark_version.latest_lts: Still reading... [50s elapsed]
module.sql-database.azurerm_mssql_database.sql-db: Still creating... [1m50s elapsed]
module.databricks.data.databricks_spark_version.latest_lts: Still reading... [1m0s elapsed]
module.sql-database.azurerm_mssql_database.sql-db: Still creating... [2m0s elapsed]
module.databricks.data.databricks_spark_version.latest_lts: Still reading... [1m10s elapsed]
module.sql-database.azurerm_mssql_database.sql-db: Still creating... [2m10s elapsed]
module.databricks.data.databricks_spark_version.latest_lts: Still reading... [1m20s elapsed]
module.sql-database.azurerm_mssql_database.sql-db: Creation complete after 2m16s [id=/subscriptions/c90a8ec5-a9a6-4acb-971d-6676f3c7e8d2/resourceGroups/rg-dbobs/providers/Microsoft.Sql/servers/sqlserver-dbobs-bdc0b8054469e774/databases/metastoredb]
╷
│ Error: User not authorized
│ 
│   with module.databricks.data.databricks_spark_version.latest_lts,
│   on modules/databricks/main.tf line 9, in data "databricks_spark_version" "latest_lts":
│    9: data "databricks_spark_version" "latest_lts" {
│ 
╵
╷
│ Error: cannot create secret scope: User not authorized
│ 
│   with module.databricks.databricks_secret_scope.default,
│   on modules/databricks/main.tf line 41, in resource "databricks_secret_scope" "default":
│   41: resource "databricks_secret_scope" "default" {
│ 
╵
╷
│ Error: cannot create dbfs file: cannot create handle: User not authorized
│ 
│   with module.databricks.databricks_dbfs_file.log4j2-properties,
│   on modules/databricks/main.tf line 64, in resource "databricks_dbfs_file" "log4j2-properties":
│   64: resource "databricks_dbfs_file" "log4j2-properties" {
│ 
╵
╷
│ Error: cannot create dbfs file: cannot create handle: User not authorized
│ 
│   with module.databricks.databricks_dbfs_file.agent,
│   on modules/databricks/main.tf line 69, in resource "databricks_dbfs_file" "agent":
│   69: resource "databricks_dbfs_file" "agent" {
│ 
╵
╷
│ Error: cannot create dbfs file: cannot create handle: User not authorized
│ 
│   with module.databricks.databricks_dbfs_file.init-observability,
│   on modules/databricks/main.tf line 78, in resource "databricks_dbfs_file" "init-observability":
│   78: resource "databricks_dbfs_file" "init-observability" {
│ 
╵
╷
│ Error: cannot create dbfs file: cannot create handle: User not authorized
│ 
│   with module.databricks.databricks_dbfs_file.applicationinsights-driver-json,
│   on modules/databricks/main.tf line 83, in resource "databricks_dbfs_file" "applicationinsights-driver-json":
│   83: resource "databricks_dbfs_file" "applicationinsights-driver-json" {
│ 
╵
╷
│ Error: cannot create dbfs file: cannot create handle: User not authorized
│ 
│   with module.databricks.databricks_dbfs_file.applicationinsights-executor-json,
│   on modules/databricks/main.tf line 88, in resource "databricks_dbfs_file" "applicationinsights-executor-json":
│   88: resource "databricks_dbfs_file" "applicationinsights-executor-json" {
│ 
╵
╷
│ Error: cannot create notebook: User not authorized
│ 
│   with module.databricks.databricks_notebook.sample-telemetry-notebook,
│   on modules/databricks/main.tf line 185, in resource "databricks_notebook" "sample-telemetry-notebook":
│  185: resource "databricks_notebook" "sample-telemetry-notebook" {
│ 
╵
╷
│ Error: cannot create notebook: User not authorized
│ 
│   with module.databricks.databricks_notebook.telemetry-helper,
│   on modules/databricks/main.tf line 190, in resource "databricks_notebook" "telemetry-helper":
│  190: resource "databricks_notebook" "telemetry-helper" {
│ 
╵
╷
│ Error: cannot create notebook: User not authorized
│ 
│   with module.databricks.module.periodic-job.databricks_notebook.main,
│   on modules/databricks/notebook-job/main.tf line 9, in resource "databricks_notebook" "main":
│    9: resource "databricks_notebook" "main" {
│ 
╵
╷
│ Error: cannot create notebook: User not authorized
│ 
│   with module.databricks.module.streaming-job.databricks_notebook.main,
│   on modules/databricks/notebook-job/main.tf line 9, in resource "databricks_notebook" "main":
│    9: resource "databricks_notebook" "main" {
│ 
╵

Expected/desired behavior

The desired behavior is a successful deployment on the first run, without these failures. Since running terraform apply once more does result in a successful deployment, consider adding this to a 'known issues' section.

OS and Version?

WSL Ubuntu-20.04

Versions

Mention any other details that might be useful

triplebeta commented 10 months ago

I hit exactly the same problem on the first run, but for me a second run does not fix it. If I retry multiple times, I keep getting this error:

...
Plan: 14 to add, 0 to change, 0 to destroy.

╷
│ Error: User not authorized
│ 
│   with module.databricks.data.databricks_spark_version.latest_lts,
│   on modules\databricks\main.tf line 9, in data "databricks_spark_version" "latest_lts":
│    9: data "databricks_spark_version" "latest_lts" {
│ 
╵

I'm really looking forward to trying out this solution because it seems to be exactly what I was looking for. However, since I'm not familiar with Terraform, I'm kind of stuck here.

mariekekortsmit commented 10 months ago

@triplebeta, did you try waiting a bit before running it a second time? It might be a race condition, because we're deploying Databricks and authenticating to it in the same Terraform run.
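
For context, the racy pattern usually looks like the sketch below (illustrative only; not necessarily this repo's exact provider block). The Databricks provider is configured from attributes of a workspace created in the same apply, so the provider's first API calls can land before the workspace has finished provisioning:

```hcl
# Sketch of the pattern that can race (resource names are hypothetical):
# the provider's host and credentials come from a resource that the same
# apply is still creating, so early Databricks API calls may be rejected
# with "User not authorized" until provisioning settles.
provider "databricks" {
  host                        = azurerm_databricks_workspace.adb.workspace_url
  azure_workspace_resource_id = azurerm_databricks_workspace.adb.id
}
```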

triplebeta commented 10 months ago

Tried to run it 8 times, up to 20 minutes after the initial run. Same result each time.

I set up Terraform using the instructions from here, so it uses a service principal with the Contributor role. Some resources (Databricks, Key Vault, SQL, storage, etc.) do show up properly in the resource group.

I enabled Terraform logging, and this is what it shows:

2023-11-28T13:58:17.201+0100 [WARN] Provider "registry.terraform.io/hashicorp/azurerm" produced an invalid plan for module.sql-database.azurerm_key_vault_secret.db_pw, but we are tolerating it because it is using the legacy plugin SDK. The following problems may be the cause of any confusing errors from downstream operations:

I hope that provides some clues for next steps. Is there anything else I should try?

mariekekortsmit commented 10 months ago

@algattik any thoughts that you have that could potentially help @triplebeta out?

algattik commented 10 months ago

We should try splitting the deployment into two Terraform deployments (Azure resources and Databricks resources) and see if that solves the problem. Any takers?
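
One way to prototype that split, sketched below under assumptions (local state backend, hypothetical azure/ and databricks/ directory names): have the first root module export the workspace coordinates as outputs, and have the second read them via terraform_remote_state:

```hcl
# azure/outputs.tf — first deployment exports the workspace coordinates
output "workspace_url" {
  value = azurerm_databricks_workspace.adb.workspace_url
}
output "workspace_resource_id" {
  value = azurerm_databricks_workspace.adb.id
}

# databricks/main.tf — second deployment consumes them
data "terraform_remote_state" "azure" {
  backend = "local"
  config = {
    path = "../azure/terraform.tfstate"
  }
}

provider "databricks" {
  host                        = data.terraform_remote_state.azure.outputs.workspace_url
  azure_workspace_resource_id = data.terraform_remote_state.azure.outputs.workspace_resource_id
}
```

This keeps the Databricks provider out of the apply that creates the workspace, at the cost of managing two states and passing values between them.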

triplebeta commented 10 months ago

I want to give it a go; it's also an opportunity to learn some Terraform. But before investing the time to split the files, I'd like to be more sure that it will solve the issue. I wonder which resources cause the problem and which account fails to authorize; understanding that might help find more solutions.

Assuming it's a race condition, could it help to add a sleep somewhere? That might be more convenient, secure, and straightforward than having to split files, manage two states, and pass variables.
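
A minimal sketch of the sleep idea, assuming the hashicorp/time provider is added to required_providers (the 120s duration and the dependency wiring are illustrative, not tested against this repo):

```hcl
# Delay Databricks-side resources until some time after the workspace exists.
resource "time_sleep" "wait_for_workspace" {
  depends_on      = [azurerm_databricks_workspace.adb]
  create_duration = "120s" # arbitrary grace period; tune as needed
}

# Gate the first Databricks resource on the delay so its API calls happen
# after the workspace has had time to finish provisioning.
resource "databricks_secret_scope" "default" {
  name       = "terraform-demo-scope"
  depends_on = [time_sleep.wait_for_workspace]
}
```

Note this only delays resources; if the provider itself fails while authenticating, a delay on resources may not be enough.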

triplebeta commented 10 months ago

UPDATE: Today I tried deploying the solution to a sandbox environment in our company, and it worked flawlessly on the very first run. I don't understand why, but now I can start testing it and showing it to colleagues. For me there is no longer any need to make changes.

If we choose to use this for real, we'll have to incorporate some parts into our own Terraform scripts anyway, to be compliant with the policies in our subscription.

Thanks for all your work setting up this sample, much appreciated!