Thanks for creating a new issue @FireDrunk!
Could you share more details about the "migration" of resource "confluent_kafka_mirror_topic" "topic" to resource "confluent_kafka_topic" "topic"?
Do you mean you updated mirror_topic's status (via failover or something) such that it became a regular topic that you imported later?
For the second question: could you confirm you were using these commands for importing a topic:
# Option #1: Manage multiple Kafka clusters in the same Terraform workspace
$ export IMPORT_KAFKA_API_KEY="<kafka_api_key>"
$ export IMPORT_KAFKA_API_SECRET="<kafka_api_secret>"
$ export IMPORT_KAFKA_REST_ENDPOINT="<kafka_rest_endpoint>"
$ terraform import confluent_kafka_topic.my_topic lkc-abc123/orders-123
I'm also a bit confused why even partitions_count was updated (8 -> 6) 🤔
Migration in this scenario means updating the status field to PROMOTED and waiting for the topic to reach the Promoted status in the cluster link overview. After that (to my understanding) it's just a regular topic on the Kafka cluster.
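Roughly, the only change on our side is the status field on the mirror topic (the other blocks of the resource stay as they are):

resource "confluent_kafka_mirror_topic" "topic" {
  # source_kafka_topic, cluster_link and kafka_cluster blocks unchanged

  # Flipping the status from "ACTIVE" to "PROMOTED" stops mirroring;
  # once the cluster link overview shows "Promoted", the topic behaves
  # like a regular writable topic on the destination cluster.
  status = "PROMOTED"
}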
I can confirm we are using exactly those commands to import the topic.
That last change is probably because we did not specify a config {} block matching the old topic that was imported; that's our fault ;)
Update: when removing the rest_endpoint attribute from the new resource it seems to work, but that's not our setup: we do not want to provide credentials or endpoints via the provider configuration, but rather retrieve them from Azure Key Vault, which Terraform does not support for provider credentials.
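To illustrate, the setup we're after looks roughly like this (the secret names and variables are placeholders for our actual values):

data "azurerm_key_vault_secret" "kafka_api_key" {
  name         = "kafka-api-key"
  key_vault_id = var.key_vault_id
}

data "azurerm_key_vault_secret" "kafka_api_secret" {
  name         = "kafka-api-secret"
  key_vault_id = var.key_vault_id
}

resource "confluent_kafka_topic" "topic" {
  kafka_cluster {
    id = var.kafka_cluster_id
  }
  topic_name       = "dummy-topic"
  partitions_count = 8
  rest_endpoint    = var.kafka_rest_endpoint

  credentials {
    # Per-resource credentials pulled from Azure Key Vault,
    # instead of configuring them on the provider block.
    key    = data.azurerm_key_vault_secret.kafka_api_key.value
    secret = data.azurerm_key_vault_secret.kafka_api_secret.value
  }
}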
Gotcha, it seems like the question can be reduced to: how do I import an existing Kafka topic such that the target TF config looks something like the following (without a credentials block or rest_endpoint attribute)?
resource "confluent_kafka_topic" "topic" { config = { "cleanup.policy" = "delete" "delete.retention.ms" = "86400000" "max.compaction.lag.ms" = "9223372036854775807" "max.message.bytes" = "1000000" "message.timestamp.difference.max.ms" = "9223372036854775807" "message.timestamp.type" = "CreateTime" "min.compaction.lag.ms" = "0" "retention.bytes" = "-1" "retention.ms" = "604800000" } id = "lkc-<id>/dummy-topic" partitions_count = 8 topic_name = "dummy-topic"
kafka_cluster {
id = "lkc-<id>"
}
}
Seems like you might want to follow [Option 2](https://registry.terraform.io/providers/confluentinc/confluent/latest/docs/resources/confluent_kafka_topic#option-2-manage-a-single-kafka-cluster-in-the-same-terraform-workspace):
provider "confluent" { kafka_id = var.kafka_id # optionally use KAFKA_ID env var kafka_rest_endpoint = var.kafka_rest_endpoint # optionally use KAFKA_REST_ENDPOINT env var kafka_api_key = var.kafka_api_key # optionally use KAFKA_API_KEY env var kafka_api_secret = var.kafka_api_secret # optionally use KAFKA_API_SECRET env var }
resource "confluent_kafka_topic" "orders" {
partitions_count = 8
topic_name = "dummy-topic"
config = {
"cleanup.policy" = "delete"
"delete.retention.ms" = "86400000"
"max.compaction.lag.ms" = "9223372036854775807"
"max.message.bytes" = "1000000"
"message.timestamp.difference.max.ms" = "9223372036854775807"
"message.timestamp.type" = "CreateTime"
"min.compaction.lag.ms" = "0"
"retention.bytes" = "-1"
"retention.ms" = "604800000"
}
}
and use the following [import commands](https://registry.terraform.io/providers/confluentinc/confluent/latest/docs/resources/confluent_kafka_topic#option-2-manage-a-single-kafka-cluster-in-the-same-terraform-workspace) that will leverage credentials from your `provider` block:
$ terraform import confluent_kafka_topic.my_topic lkc-abc123/orders-123
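For completeness, if you go with the env var route from the comments in the provider block above, the full import sequence would look something like this:
$ export KAFKA_ID="lkc-abc123"
$ export KAFKA_REST_ENDPOINT="<kafka_rest_endpoint>"
$ export KAFKA_API_KEY="<kafka_api_key>"
$ export KAFKA_API_SECRET="<kafka_api_secret>"
$ terraform import confluent_kafka_topic.my_topic lkc-abc123/orders-123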
@FireDrunk let us know whether it helps!
Ah, that means we have to move our Azure Keyvault logic from Terraform to our CI/CD, which is acceptable but not preferred.
Our design was built on the idea that we might have to manage multiple Confluent Clusters in 1 run, but currently it's just one, so it's a workaround that works for now.
Thanks for your quick response!
> Thanks for your quick response!

Thank you!

> Our design was built on the idea that we might have to manage multiple Confluent Clusters in 1 run, but currently it's just one, so it's a workaround that works for now.
Feel free to use different folders for TF configuration files / modules for other clusters; that'd still allow your team to reuse Option 2 with
provider "confluent" {
kafka_id = var.kafka_id # optionally use KAFKA_ID env var
kafka_rest_endpoint = var.kafka_rest_endpoint # optionally use KAFKA_REST_ENDPOINT env var
kafka_api_key = var.kafka_api_key # optionally use KAFKA_API_KEY env var
kafka_api_secret = var.kafka_api_secret # optionally use KAFKA_API_SECRET env var
}
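For example, a layout along these lines keeps one provider block (and one Option 2 workspace) per cluster; the directory names are just an illustration:
clusters/
  cluster-a/
    main.tf    # provider "confluent" { ... } + topics for cluster A
  cluster-b/
    main.tf    # provider "confluent" { ... } + topics for cluster B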
@linouk23 After reworking our setup, it still seems that importing is extremely buggy regarding the rest_endpoint:
Resource definition:
resource "confluent_kafka_mirror_topic" "topic" {
source_kafka_topic {
topic_name = var.topic_info.name
}
cluster_link {
link_name = local.cluster_link_name
}
kafka_cluster {
id = data.confluent_kafka_cluster.shared-cluster.id
}
mirror_topic_name = var.topic_info.name
status = var.topic_info.mirror_info.status
lifecycle {
prevent_destroy = true
precondition {
condition = contains(["ACTIVE", "PAUSED", "PROMOTED", "FAILED_OVER"], var.topic_info.mirror_info.status)
error_message = "Invalid Status field, valid options are: ACTIVE, PAUSED, PROMOTED and FAILED_OVER"
}
}
}
Environment vars:
❯ env | grep -i kafka
IMPORT_KAFKA_API_KEY=<key>
IMPORT_KAFKA_REST_ENDPOINT=https://lkc-<id>-pldo48.westeurope.azure.glb.confluent.cloud:443
KAFKA_API_SECRET=<secret>
KAFKA_REST_ENDPOINT=https://lkc-<id>-pldo48.westeurope.azure.glb.confluent.cloud:443
KAFKA_API_KEY=<key>
IMPORT_KAFKA_API_SECRET=<secret>
Importing this mirror_topic using the following command:
terraform import module.router.module.confluent_topics[\"dummy-topic\"].confluent_kafka_mirror_topic.topic lkc-<id>/default-accept-cluster-link/dummy-topic
Works:
module.router.module.confluent_topics["dummy-topic"].confluent_kafka_mirror_topic.topic: Import prepared!
Prepared confluent_kafka_mirror_topic for import
---
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
After running a new plan:
# module.router.module.confluent_topics["dummy-topic"].confluent_kafka_mirror_topic.topic must be replaced
-/+ resource "confluent_kafka_mirror_topic" "topic" {
~ id = "lkc-<id>/default-accept-cluster-link/dummy-topic" -> (known after apply)
# (2 unchanged attributes hidden)
~ kafka_cluster {
id = "lkc-<id>"
- rest_endpoint = "https://lkc-<id>-pldo48.westeurope.azure.glb.confluent.cloud:443" -> null # forces replacement
- credentials {
- key = (sensitive value) -> null
- secret = (sensitive value) -> null
}
}
# (2 unchanged blocks hidden)
}
It seems that the mirror_topic resource forcefully adds the rest_endpoint to the state, but our Terraform code doesn't set it, so Terraform wants to replace the resource (which is bad!).
👋 @FireDrunk could you update your TF definition to include the missing rest_endpoint & credentials attributes:
resource "confluent_kafka_mirror_topic" "example" {
source_kafka_topic {
topic_name = "orders"
}
cluster_link {
link_name = confluent_cluster_link.source-outbound.link_name
}
kafka_cluster {
id = data.confluent_kafka_cluster.destination.id
rest_endpoint = data.confluent_kafka_cluster.destination.rest_endpoint
credentials {
key = confluent_api_key.app-manager-destination-cluster-api-key.id
secret = confluent_api_key.app-manager-destination-cluster-api-key.secret
}
}
lifecycle {
prevent_destroy = true
}
}
Namely,
rest_endpoint = data.confluent_kafka_cluster.destination.rest_endpoint

credentials {
  key    = confluent_api_key.app-manager-destination-cluster-api-key.id
  secret = confluent_api_key.app-manager-destination-cluster-api-key.secret
}
as specified in the docs and try again? Thank you!
@linouk23 Ok, so:

- Adding the credentials and rest_endpoint block to the confluent_kafka_mirror_topic resource works for importing the mirror_topic. After that, running terraform apply still seems to work (I can promote the topic).
- Omitting the credentials and rest_endpoint block on the regular confluent_kafka_topic does work when importing it after promoting. This seems a bit counter-intuitive, but technically it works.
- Adding the credentials and rest_endpoint after (successfully) importing the resource still taints the resource.
@FireDrunk, it might be important to mention that a user is not supposed to alter the resource definition after importing it. In other words, as specified in the docs, confluent_kafka_topic supports both options with different import commands, so you might want to select either Option 1 or Option 2 when importing a topic; switching between them isn't supported after the import (unless you run terraform state rm confluent_kafka_topic.example and reimport the topic using the other option).
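For example, switching an already imported topic from Option 1 to Option 2 would look roughly like this (reusing the resource address and topic ID from the import example above):
$ terraform state rm confluent_kafka_topic.example
$ export KAFKA_ID="lkc-abc123"
$ export KAFKA_REST_ENDPOINT="<kafka_rest_endpoint>"
$ export KAFKA_API_KEY="<kafka_api_key>"
$ export KAFKA_API_SECRET="<kafka_api_secret>"
$ terraform import confluent_kafka_topic.example lkc-abc123/orders-123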
Let us know if that helps and feel free to reopen this issue / create a new issue for follow up questions!
We are testing out the full migration process of Kafka on-premise using Cluster Linking. This configuration is managed by Terraform. After migrating the confluent_kafka_mirror_topic resource shown above to a confluent_kafka_topic resource by importing it manually, the import itself seems to work fine (when setting the correct env vars), but after planning, Terraform wants to replace the topic (see the plan output above).
I presume the reason it's being replaced is that the rest_endpoint appears to be missing, but that shouldn't happen, since the topic was imported using that exact endpoint.
I think the problem is somewhere around here: https://github.com/confluentinc/terraform-provider-confluent/blob/master/internal/provider/resource_kafka_topic.go#L282
The provider should set the rest_endpoint that was used for importing on the imported topic, or that behavior should be controllable via an import flag.