databricks / terraform-provider-databricks

Databricks Terraform Provider
https://registry.terraform.io/providers/databricks/databricks/latest

[ISSUE] Issue with `databricks_mws_*` resource #2377

Closed anthonybench closed 1 year ago

anthonybench commented 1 year ago

Configuration

Followed your instructions outlined here

Expected Behavior

Expected workspace resources to be provisioned.

Actual Behavior

```txt
Error: cannot create mws storage configurations: default auth: oauth-m2m: oidc: parse .well-known: invalid character '<' looking for beginning of value. Config: host=https://accounts.cloud.databricks.com, client_id=, client_secret=
```

☝️ This happens for `databricks_mws_networks`, `databricks_mws_storage_configurations`, and `databricks_mws_credentials`, and hence the workspace itself can't be provisioned.

I keep looking through the guides and examples and believe I'm following everything to the letter. Other AWS resources provision just fine; the provider just doesn't seem to be able to authenticate to the account console to provision the mws resources.
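The shape of that failure can be illustrated in isolation (a sketch, not the provider's actual code path): the leading `'<'` in the error is the first byte of an HTML page, i.e. the account console returned a login/error page where the provider expected the JSON OIDC discovery document, and the JSON decoder bailed on the first character.

```shell
# Reproduce the symptom with any JSON parser fed HTML instead of JSON.
# (Python's error wording differs from the provider's Go decoder, but the
# failure mode is the same: HTML where JSON was expected.)
printf '<html><body>login page</body></html>' > /tmp/not_json
python3 -c "import json; json.load(open('/tmp/not_json'))" 2>&1 | tail -n 1
```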

Steps to Reproduce

Followed the walkthrough steps and pushed to GitLab for the CI/CD job to run.

Terraform and provider versions

```hcl
terraform {
  required_providers {
    databricks = {
      source = "databricks/databricks"
    }
    # aws = {
    #   source  = "hashicorp/aws"
    #   version = ">= 3.63.0"
    #   region  = var.region
    # }
  }
}

provider "aws" {
  region = var.region
}

provider "databricks" {
  alias         = "mws"
  host          = "https://accounts.cloud.databricks.com"
  client_id     = var.databricks_client_id
  client_secret = var.databricks_client_secret
}
```

Debug Output

```txt
2023-06-05T22:19:40.427Z [INFO] ReferenceTransformer: reference not found: "databricks_mws_networks.this#destroy"
2023-06-05T22:19:40.427Z [INFO] ReferenceTransformer: reference not found: "databricks_mws_credentials.this#destroy"
2023-06-05T22:19:40.427Z [INFO] ReferenceTransformer: reference not found: "databricks_mws_storage_configurations.this#destroy"
2023-06-05T22:19:42.386Z [DEBUG] module.atlas.module.databricks_workspace.databricks_mws_credentials.this: applying the planned Create change
2023-06-05T22:19:42.387Z [DEBUG] module.atlas.module.databricks_workspace.databricks_mws_storage_configurations.this: applying the planned Create change
2023-06-05T22:19:42.389Z [INFO] Starting apply for module.atlas.module.databricks_workspace.databricks_mws_networks.this
2023-06-05T22:19:42.390Z [DEBUG] module.atlas.module.databricks_workspace.databricks_mws_networks.this: applying the planned Create change
2023-06-05T22:19:42.390Z [DEBUG] provider.terraform-provider-databricks_v1.18.0: setting computed for "vpc_endpoints" from ComputedKeys: timestamp=2023-06-05T22:19:42.390Z
2023-06-05T22:19:42.390Z [DEBUG] provider.terraform-provider-databricks_v1.18.0: setting computed for "error_messages" from ComputedKeys: timestamp=2023-06-05T22:19:42.390Z
2023-06-05T22:19:42.889Z [ERROR] provider.terraform-provider-databricks_v1.18.0: Response contains error diagnostic: tf_proto_version=5.3 tf_provider_addr=registry.terraform.io/databricks/databricks tf_resource_type=databricks_mws_credentials @module=sdk.proto diagnostic_summary="cannot create mws credentials: default auth: oauth-m2m: oidc: parse .well-known: invalid character '<' looking for beginning of value. Config: host=https://accounts.cloud.databricks.com/, client_id=, client_secret=" tf_req_id=456812e2-3ca3-ad52-44e3-e36007a872e5 tf_rpc=ApplyResourceChange @caller=/home/runner/work/terraform-provider-databricks/terraform-provider-databricks/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/diag/diagnostics.go:55 diagnostic_detail= diagnostic_severity=ERROR timestamp=2023-06-05T22:19:42.889Z
2023-06-05T22:19:43.254Z [DEBUG] DELETE https://gitlab.com/api/v4/projects/45980683/terraform/state/dev/lock
2023-06-05T22:19:43.458Z [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2023-06-05T22:19:43.459Z [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/databricks/databricks/1.18.0/linux_amd64/terraform-provider-databricks_v1.18.0 pid=68
2023-06-05T22:19:43.459Z [DEBUG] provider: plugin exited
```

Important Factoids

None.

rishabhtrivedi23 commented 1 year ago

This error usually occurs when you pass the account console URL in place of the workspace URL.

anthonybench commented 1 year ago

@kb799dbg The docs explicitly say to set the host to the account console, which makes sense in my case: I'm attempting to provision a workspace, so I'd expect it to need to authenticate at the account level.

See the screenshot below from Databricks's docs:

*(screenshot: Screenshot 2023-06-06 at 9 27 34 AM)*

rishabhtrivedi23 commented 1 year ago

@anthonybench You are correct, we need to provide the account console URL. I haven't worked on AWS, so it might be different there, but for both Azure and GCP we also need to pass `account_id` in the provider block. You mentioned you have already added the account ID as an env var, but I don't see it in your provider. Can you share one of the mws* resource blocks?

P.S. I am not from Databricks, just trying to help and to learn something from you guys :)

GCP:

```hcl
provider "databricks" {
  alias                  = "accounts"
  host                   = var.account_console_url
  account_id             = var.databricks_account_id
  google_service_account = var.google_service_account_email
  rate_limit             = 2
}
```

Azure:

```hcl
provider "databricks" {
  alias                       = "account-console"
  azure_workspace_resource_id = data.azurerm_databricks_workspace.adbworkspace.id
  azure_use_msi               = true
  host                        = var.account_console_url
  account_id                  = var.account_id
  rate_limit                  = 5
}
```
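By analogy, an AWS account-level provider block would look like the following (a sketch based on the original poster's config with `account_id` added, not something posted in the thread; `account_id`, `client_id`, and `client_secret` are documented provider arguments, and the variable names are placeholders):

```hcl
provider "databricks" {
  alias         = "mws"
  host          = "https://accounts.cloud.databricks.com"
  account_id    = var.databricks_account_id
  client_id     = var.databricks_client_id
  client_secret = var.databricks_client_secret
}
```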

anthonybench commented 1 year ago

@kb799dbg I actually (finally) just figured out the issue.

Even though I'm passing the credential items databricks_account_id, databricks_client_id, and databricks_client_secret through Terraform env vars like TF_VAR_*, you also need the provider's own variants like DATABRICKS_ACCOUNT_ID, which is kind of infuriating. Unless I'm mistaken, you need duplicate variables for the same creds; alternatively, you could drop those arguments from the provider block entirely and not use the TF_VAR_* copies at all.
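The workaround above can be sketched for the CI job like this (DATABRICKS_ACCOUNT_ID, DATABRICKS_CLIENT_ID, and DATABRICKS_CLIENT_SECRET are the env var names the provider/SDK reads directly; the values here are placeholders):

```shell
# Expose the account credentials under the names the Databricks provider
# reads on its own, in addition to (or instead of) the TF_VAR_* copies
# that only feed Terraform's var.* lookups.
export DATABRICKS_ACCOUNT_ID="11111111-2222-3333-4444-555555555555"   # placeholder
export DATABRICKS_CLIENT_ID="my-service-principal-client-id"          # placeholder
export DATABRICKS_CLIENT_SECRET="my-service-principal-secret"         # placeholder
```

In GitLab CI these would typically live as masked CI/CD variables rather than literals in the job script.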

Closing issue.

kopachevsky commented 1 year ago

@anthonybench I have exactly the same issue and wasn't able to solve it. Can you please write what exactly you changed to make it work?

kopachevsky commented 1 year ago

@anthonybench I exported those vars as you mentioned and it works. Strange that the Databricks folks don't mention this obvious stuff right at the start of the Terraform documentation. Thanks for the hint!