Closed: DmitriSmirnovCTL closed this issue 1 month ago.
@DmitriSmirnovCTL thanks for creating this issue!
Could you share the TF definition of your API Key instance? It should be as follows:
```hcl
resource "confluent_api_key" "env-manager-cloud-api-key" {
  display_name = "env-manager-cloud-api-key"
  description  = "Cloud API Key that is owned by 'env-manager' service account"

  owner {
    id          = confluent_service_account.env-manager.id
    api_version = confluent_service_account.env-manager.api_version
    kind        = confluent_service_account.env-manager.kind
  }

  lifecycle {
    prevent_destroy = true
  }
}
```
This is how the resource looks in state after the import:

```hcl
resource "confluent_api_key" "XXX-confluent-integration-api-key" {
  description            = "api key for XXX confluent cloud integration"
  disable_wait_for_ready = false
  display_name           = "XXX-confluent-integration-api-key"
  id                     = "XXX"
  secret                 = (sensitive value)

  managed_resource {
    api_version = null
    id          = null
    kind        = null

    environment {
      id = "env-XXX"
    }
  }

  owner {
    api_version = "iam/v2"
    id          = "u-XXX"
    kind        = "User"
  }
}
```
Resource definition in Terraform:

```hcl
resource "confluent_api_key" "XXX-confluent-integration-api-key" {
  description = "api key for XXX confluent cloud integration"

  owner {
    api_version = data.confluent_user.XXX.api_version
    id          = data.confluent_user.XXX.id
    kind        = data.confluent_user.XXX.kind
  }
}
```
The discrepancy around `managed_resource` is causing the API Key resource to be redeployed:
```
-/+ resource "confluent_api_key" "XXX-confluent-integration-api-key" {
        display_name = "XXX-confluent-integration-api-key" -> null
      ~ id           = "XXX" -> (known after apply)
      ~ secret       = (sensitive value)

      - managed_resource { # forces replacement
          id = null

          - environment { # forces replacement
              - id = "env-XXX" -> null # forces replacement
            }
        }

        # (1 unchanged block hidden)
    }

Plan: 1 to add, 0 to change, 1 to destroy.
```
I see. Could you update your TF definition to the following?

```hcl
resource "confluent_api_key" "XXX-confluent-integration-api-key" {
  description            = "api key for XXX confluent cloud integration"
  disable_wait_for_ready = false
  display_name           = "XXX-confluent-integration-api-key"
  id                     = "XXX"
  secret                 = (sensitive value)

  owner {
    api_version = "iam/v2"
    id          = "u-XXX"
    kind        = "User"
  }
}
```
In other words, we want you to remove
```hcl
managed_resource {
  api_version = null
  id          = null
  kind        = null

  environment {
    id = "env-XXX"
  }
}
```
from it.
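If the goal is only to stop Terraform from planning a replacement, another option to try (a sketch, not verified against every provider version) is to tell Terraform to ignore drift on that block via `ignore_changes`:

```hcl
resource "confluent_api_key" "XXX-confluent-integration-api-key" {
  description = "api key for XXX confluent cloud integration"

  owner {
    api_version = data.confluent_user.XXX.api_version
    id          = data.confluent_user.XXX.id
    kind        = data.confluent_user.XXX.kind
  }

  lifecycle {
    # Suppress the diff on the managed_resource block that was
    # brought in by the import, so it no longer forces replacement.
    ignore_changes = [managed_resource]
  }
}
```

Note that `ignore_changes` only masks the diff; the stale `managed_resource` block stays in state.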
I don't have a `managed_resource` block in my Terraform definition, but it is present in state (obtained via import), and Terraform wants to redeploy the resource due to the inconsistency between the definition and the state. I suspect that would cause my secret to be rotated, which is what I'm trying to avoid.
Oh I see, could you copy the Terraform definition of your `confluent_api_key` resource then? We'd like to be able to reproduce this issue, thank you!
cc @DmitriSmirnovCTL
My apologies, I was confused by the usage of the Environment ID in the import command. Basically, one can import a Cloud API Key with an Environment ID, and it will create an invalid `managed_resource` section.
Importing a Cloud API Key brings a `managed_resource` block into state with the environment id set and `api_version`/`id`/`kind` as null. Because of this, I'm unable to remove the `managed_resource` block, since removing it makes Terraform want to redeploy the related API Key. Is this a bug, or is there a workaround for this issue?
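One possible workaround is to drop the imported resource from state and re-import it without supplying an Environment ID, so the bogus `managed_resource` block is never recorded. This is a sketch: the exact import ID format and any environment variables the provider reads depend on your provider version, so verify against the `confluent_api_key` import docs before running it.

```shell
# Remove only the state entry; this does NOT delete the API Key in Confluent Cloud.
terraform state rm confluent_api_key.XXX-confluent-integration-api-key

# Re-import without any environment context. The "<API Key ID>:<API Key Secret>"
# import ID format and the absence of env vars like IMPORT_ENV_ID are assumptions
# here; check the provider documentation for your version.
terraform import confluent_api_key.XXX-confluent-integration-api-key "XXX:<api-key-secret>"
```

After the re-import, run `terraform plan` and confirm it no longer proposes a destroy/create, before applying anything.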