databricks / terraform-provider-databricks

Databricks Terraform Provider
https://registry.terraform.io/providers/databricks/databricks/latest

[ISSUE] Issue with `databricks_storage_credential` resource with MSI. 500 Internal Server Error #3846

Open slideroh opened 3 months ago

slideroh commented 3 months ago

Configuration

provider "databricks" {
  # Hardcoded URL of the Databricks Workspace
  host = "https://<snip>.azuredatabricks.net"
}

resource "azurerm_databricks_access_connector" "main" {
  name = "test-connector"
  location = var.location
  resource_group_name = var.rg
  identity {
    type = "SystemAssigned"
  }
}

resource "databricks_storage_credential" "main" {
  name = "creds_access_connector"
  azure_managed_identity {
    access_connector_id = azurerm_databricks_access_connector.main.id
  }
  comment = "Managed by TF"
}
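
For debugging, it may help to surface the exact connector ID that Terraform passes to the API (it is truncated in the debug log below). A minimal sketch, with the output name chosen purely for illustration:

output "debug_access_connector_id" {
  # Surfaces the full ARM resource ID handed to databricks_storage_credential
  value = azurerm_databricks_access_connector.main.id
}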

Expected Behavior

Terraform should either create the Storage Credential or return an error with a meaningful message. I'd like to mention here that when I passed the same access_connector_id and a similar name manually in the Databricks Workspace, the storage credential was created without any issues.

Actual Behavior

With TF_LOG=DEBUG:

2024-08-02T15:30:07.044+0200 [DEBUG] provider.terraform-provider-databricks_v1.49.1: POST /api/2.1/unity-catalog/storage-credentials
> {
>   "azure_managed_identity": {
>     "access_connector_id": "/subscriptions/<snip>/resourceGroups/databricks/providers/Microsof... (44 more bytes)"
>   },
>   "comment": "Managed by TF",
>   "name": "creds_access_connector"
> }
< HTTP/2.0 500 Internal Server Error
< {
<   "details": [
<     {
<       "@type": "type.googleapis.com/google.rpc.RequestInfo",
<       "request_id": "7b19d2e7-0d7e-4a10-992a-6c72ace81bf6",
<       "serving_data": ""
<     }
<   ],
<   "error_code": "INTERNAL_ERROR",
<   "message": ""

In a normal run:

│ Error: cannot create storage credential:

There is basically no error message, only a 500 Internal Server Error. I'd also like to mention that I'm able to create other resources, such as a secret scope, using the same configuration, just with a different resource type.

Steps to Reproduce

  1. terraform apply

Terraform and provider versions

Is it a regression?

I checked a previous version of the provider; the error still exists.
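
For reference, a specific provider version can be pinned while bisecting for a regression. A minimal sketch, assuming the standard required_providers block; 1.49.1 is the version seen in the debug log:

terraform {
  required_providers {
    databricks = {
      source  = "databricks/databricks"
      version = "1.49.1"  # swap this value to test older provider releases
    }
  }
}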

Debug Output

2024-08-02T16:22:15.908+0200 [DEBUG] provider.terraform-provider-databricks_v1.49.1: non-retriable error: : tf_provider_addr=registry.terraform.io/databricks/databricks tf_rpc=ApplyResourceChange tf_resource_type=databricks_storage_credential @caller=/home/runner/work/terraform-provider-databricks/terraform-provider-databricks/logger/logger.go:33 @module=databricks tf_req_id=789b9f29-ee36-e205-0295-b7c6ca16d81b timestamp="2024-08-02T16:22:15.908+0200"
2024-08-02T16:22:15.908+0200 [ERROR] provider.terraform-provider-databricks_v1.49.1: Response contains error diagnostic: @caller=/home/runner/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.23.0/tfprotov5/internal/diag/diagnostics.go:58 @module=sdk.proto diagnostic_severity=ERROR tf_provider_addr=registry.terraform.io/databricks/databricks tf_rpc=ApplyResourceChange tf_resource_type=databricks_storage_credential diagnostic_detail="" diagnostic_summary="cannot create storage credential: " tf_proto_version=5.6 tf_req_id=789b9f29-ee36-e205-0295-b7c6ca16d81b timestamp="2024-08-02T16:22:15.908+0200"

https://gist.github.com/slideroh/a40b0ec61eb4f90d2ad19a49baaa98fb

Important Factoids

Would you like to implement a fix?

slideroh commented 3 months ago

Actually I'm super surprised, because I've checked many versions, like:

Plan: 1 to add, 0 to change, 0 to destroy.

databricks_storage_credential.main_test: Creating...
databricks_storage_credential.main_test: Creation complete after 3s [id=test]



but I would like to use the latest version, not a 2-year-old one. What changed here? The resource configuration is the same.

philippbussche commented 2 months ago

@slideroh it seems we are experiencing the same issue. Creating the storage credential via the Workspace UI works, though. And I am pretty confident that creating it using Terraform (version 1.47.0) also still worked a few weeks ago. So I would say maybe the API is broken? I mean, that is what the server 500 error suggests, right?

philippbussche commented 2 months ago

Quick update @slideroh: I got this to work after making my user a Databricks account admin. Then the Terraform way worked. Really strange though, because in the workspace UI it also works without being a Databricks account admin.
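
If the root cause is a missing Unity Catalog privilege rather than a broken API, a less drastic alternative to account admin might be granting CREATE STORAGE CREDENTIAL on the metastore. A hedged sketch using databricks_grants, with the metastore ID and principal as placeholders:

resource "databricks_grants" "metastore" {
  metastore = "<metastore-id>"

  grant {
    # Principal that runs terraform apply; placeholder e-mail
    principal  = "user@example.com"
    privileges = ["CREATE_STORAGE_CREDENTIAL"]
  }
}

Whether this alone is sufficient here is unverified, since the UI apparently works without account admin.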