Open GabrielArcanjoFerreira opened 9 months ago
Configuration

```hcl
# This file is maintained automatically by "terraform init".
# Manual edits may be lost in future updates.

provider "registry.terraform.io/databricks/databricks" {
  version     = "1.27.0"
  constraints = "~> 1.27.0"
}

provider "registry.terraform.io/hashicorp/aws" {
  version     = "5.19.0"
  constraints = ">= 5.0.0, ~> 5.0"
}

provider "registry.terraform.io/hashicorp/random" {
  version     = "3.5.1"
  constraints = "~> 3.4"
}

provider "registry.terraform.io/hashicorp/time" {
  version     = "0.9.1"
  constraints = "~> 0.9"
}
```
Expected Behavior

Running terraform plan or terraform apply should complete without errors.
Actual Behavior

Running terraform plan always fails, raising "cannot read" errors.
Steps to Reproduce

terraform plan
Terraform and provider versions

```hcl
terraform {
  required_version = "= 1.1.5"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.4"
    }
    time = {
      source  = "hashicorp/time"
      version = "~> 0.9"
    }
    databricks = {
      source  = "databricks/databricks"
      version = "~> 1.27.0"
    }
  }
}
```
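The `unsupported attribute "configuration_type"` error shown in the debug output usually means the state was last written by a provider release newer than the one currently pinned, so the older schema cannot decode it. A minimal sketch of relaxing the Databricks pin so `terraform init -upgrade` can fetch a newer release; the `1.29` floor is an assumption, not a verified minimum, so check the provider changelog for the release that introduced `configuration_type`:

```hcl
# Sketch only: loosen the "~> 1.27.0" pin so a release whose schema
# includes "configuration_type" can be installed. The lower bound below
# is an assumption.
databricks = {
  source  = "databricks/databricks"
  version = ">= 1.29, < 2.0"
}
```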
Debug Output

```
╷
│ Error: Failed to decode resource from state
│
│ Error decoding "module.unity_catalog.databricks_metastore_data_access.this" from previous state: unsupported attribute
│ "configuration_type"
╵
╷
│ Error: cannot read cluster: default auth: oauth-m2m: oidc: fetch .well-known: Get "/oidc/.well-known/oauth-authorization-server": unsupported protocol scheme "". Config: client_id=, client_secret=
│
│   with databricks_cluster.dev_cluster,
│   on databricks_clusters.tf line 19, in resource "databricks_cluster" "dev_cluster":
│   19: resource "databricks_cluster" "dev_cluster" {
│
╵
╷
│ Error: cannot read cluster: default auth: oauth-m2m: oidc: fetch .well-known: Get "/oidc/.well-known/oauth-authorization-server": unsupported protocol scheme "". Config: client_id=, client_secret=
│
│   with databricks_cluster.hml_cluster,
│   on databricks_clusters.tf line 51, in resource "databricks_cluster" "hml_cluster":
│   51: resource "databricks_cluster" "hml_cluster" {
│
╵
╷
│ Error: cannot read cluster: default auth: oauth-m2m: oidc: fetch .well-known: Get "/oidc/.well-known/oauth-authorization-server": unsupported protocol scheme "". Config: client_id=, client_secret=
│
│   with databricks_cluster.prd_cluster,
│   on databricks_clusters.tf line 83, in resource "databricks_cluster" "prd_cluster":
│   83: resource "databricks_cluster" "prd_cluster" {
│
╵
╷
│ Error: cannot read metastore: default auth: oauth-m2m: oidc: fetch .well-known: Get "/oidc/.well-known/oauth-authorization-server": unsupported protocol scheme "". Config: client_id=, client_secret=
│
│   with module.unity_catalog.databricks_metastore.this,
│   on modules/aws-databricks-unity-catalog/main.tf line 1, in resource "databricks_metastore" "this":
│   1: resource "databricks_metastore" "this" {
│
╵
```
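The empty `Config: client_id=, client_secret=` and the blank host in the OIDC URL (`Get "/oidc/.well-known/..."` with no scheme or hostname) suggest the workspace-level provider resolved no host and no credentials at all, so it fell back to the default oauth-m2m chain with empty values. A minimal sketch of pinning explicit workspace authentication; the variable names and the choice of PAT-based auth are my assumptions, not the reporter's configuration:

```hcl
# Sketch: give the workspace-level provider an explicit host and token so
# it does not fall back to empty oauth-m2m defaults. Variable names here
# are illustrative.
provider "databricks" {
  host  = var.databricks_workspace_url # e.g. https://<workspace>.cloud.databricks.com
  token = var.databricks_pat
}
```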
Important Factoids

The workspace deployment itself completes normally; the errors only start after the deployment, when I try to update the code and run terraform plan again.
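"Works on the first apply, fails on later plans" is a common symptom of the workspace provider's `host` being derived from a value that is empty or unknown while Terraform refreshes existing resources. A sketch of one way to wire it, assuming the deployment module exports the URL as `workspace_url` (both the module and output names are assumptions):

```hcl
# Sketch: feed the workspace URL from the deployment module's output so
# refresh after the first apply still sees a non-empty host.
provider "databricks" {
  alias = "workspace"
  host  = module.workspace.workspace_url
}
```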
@GabrielArcanjoFerreira Please share the configuration file.