Thanks for raising this question! We're aware of this issue and we're going to address it in one of our future releases 👍
Hi, what is the expected release date of this feature? It's blocking our work on automating topic configuration because of the manual step.
How will this feature work? On the Confluent CLI, I can type the following:
$ confluent kafka topic create help
Created topic "help".
and the topic is created!
👋 @scotartt, here's a sample of a TF configuration with the api_key resource that shows how it will work (TL;DR: create a Kafka cluster + Kafka API Key + Kafka Topic (and other resources) in a single terraform apply run):
provider "confluentcloud" {
api_key = var.confluent_cloud_api_key
api_secret = var.confluent_cloud_api_secret
}
resource "confluentcloud_environment" "staging" {
display_name = "Staging"
}
# Update the config to use a cloud provider and region of your choice.
# https://registry.terraform.io/providers/confluentinc/confluentcloud/latest/docs/resources/confluentcloud_kafka_cluster
resource "confluentcloud_kafka_cluster" "basic" {
display_name = "inventory"
availability = "SINGLE_ZONE"
cloud = "AWS"
region = "us-east-2"
basic {}
environment {
id = confluentcloud_environment.staging.id
}
}
// 'app-manager' service account is required in this configuration to create 'orders' topic and grant ACLs
// to 'app-producer' and 'app-consumer' service accounts.
resource "confluentcloud_service_account" "app-manager" {
display_name = "app-manager"
description = "Service account to manage 'inventory' Kafka cluster"
}
resource "confluentcloud_role_binding" "app-manager-kafka-cluster-admin" {
principal = "User:${confluentcloud_service_account.app-manager.id}"
role_name = "CloudClusterAdmin"
crn_pattern = confluentcloud_kafka_cluster.basic.rbac_crn
}
resource "confluentcloud_api_key" "app-manager-kafka-api-key" {
display_name = "app-manager-kafka-api-key"
description = "Kafka API Key that is owned by 'app-manager' service account"
owner {
id = confluentcloud_service_account.app-manager.id
api_version = confluentcloud_service_account.app-manager.api_version
kind = confluentcloud_service_account.app-manager.kind
}
managed_resource {
id = confluentcloud_kafka_cluster.basic.id
api_version = confluentcloud_kafka_cluster.basic.api_version
kind = confluentcloud_kafka_cluster.basic.kind
environment {
id = confluentcloud_environment.staging.id
}
}
# The goal is to ensure that confluentcloud_role_binding.app-manager-kafka-cluster-admin is created before
# confluentcloud_api_key.app-manager-kafka-api-key is used to create instances of
# confluentcloud_kafka_topic, confluentcloud_kafka_acl resources.
# 'depends_on' meta-argument is specified in confluentcloud_api_key.app-manager-kafka-api-key to avoid having
# multiple copies of this definition in the configuration which would happen if we specify it in
# confluentcloud_kafka_topic, confluentcloud_kafka_acl resources instead.
depends_on = [
confluentcloud_role_binding.app-manager-kafka-cluster-admin
]
}
resource "confluentcloud_kafka_topic" "orders" {
kafka_cluster = confluentcloud_kafka_cluster.basic.id
topic_name = "orders"
partitions_count = 4
config = {
// Example of overriding the default parameter value (2097164) of 'max.message.bytes' topic setting
// https://docs.confluent.io/cloud/current/clusters/broker-config.html
"max.message.bytes" = "2097165"
}
http_endpoint = confluentcloud_kafka_cluster.basic.http_endpoint
credentials {
key = confluentcloud_api_key.app-manager-kafka-api-key.id
secret = confluentcloud_api_key.app-manager-kafka-api-key.secret
}
}
...
We're testing out the api_key resource at the moment and targeting end of March.
Hi @linouk23,
What if we don't want to create the environment/cluster all in one go? Our TF approach is that one pipeline runs to create the target environments and clusters (because the former especially needs OrgAdmin rights to create, and the latter needs at minimum EnvironmentAdmin over the environment), and then multiple other pipelines each contain only the small subset of topics that a given application requires. These latter pipelines would get credentials injected that grant only Admin rights in the target cluster.
Sure, you could create the environment, cluster, and app-manager service account using pipeline #1 (TF module #1), and then run pipeline #2 (TF module #2), which would instead use the Cloud API Key of a service account with the EnvironmentAdmin role:
provider "confluentcloud" {
# Cloud API Key of a service account with EnvironmentAdmin role
api_key = var.confluent_cloud_api_key
api_secret = var.confluent_cloud_api_secret
}
to create just the Kafka API Key and Kafka Topic for the target cluster (including other resources like ACLs, etc., if necessary).
It's worth mentioning that module #1 should output the Kafka Cluster ID, the app-manager Service Account ID, and other metadata, and module #2 should accept these as inputs.
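To illustrate, here's a minimal sketch of that wiring (the output/variable names are illustrative, not prescribed by the provider):

# Module #1: expose the IDs that downstream pipelines need.
output "kafka_cluster_id" {
  value = confluentcloud_kafka_cluster.basic.id
}

output "app_manager_service_account_id" {
  value = confluentcloud_service_account.app-manager.id
}

# Module #2: accept them as plain string inputs.
variable "kafka_cluster_id" {
  type = string
}

variable "app_manager_service_account_id" {
  type = string
}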
In an upcoming release, we're going to share these examples (or even TF modules) to make using the TF Provider for CC a bit easier.
Let me know if this helps.
Hi, after working on this for the past week I've got some additional feedback.
Assuming a configuration for a topic like the one you had above:
resource "confluentcloud_kafka_topic" "orders" {
kafka_cluster = confluentcloud_kafka_cluster.basic.id
topic_name = "orders"
partitions_count = 4
config = {
// Example of overriding the default parameter value (2097164) of 'max.message.bytes' topic setting
// https://docs.confluent.io/cloud/current/clusters/broker-config.html
"max.message.bytes" = "2097165"
}
http_endpoint = confluentcloud_kafka_cluster.basic.http_endpoint
credentials {
key = confluentcloud_api_key.app-manager-kafka-api-key.id
secret = confluentcloud_api_key.app-manager-kafka-api-key.secret
}
}
For comparison, I'm basing this on what I need to do to create the same kinds of objects with the Confluent CLI v2.
The problems all start, and mostly end, with these parameters:
http_endpoint = confluentcloud_kafka_cluster.basic.http_endpoint

credentials {
  key    = confluentcloud_api_key.app-manager-kafka-api-key.id
  secret = confluentcloud_api_key.app-manager-kafka-api-key.secret
}
The http_endpoint appears to be a complex object, and it is hard to pass into a module as an argument. The cluster ID and environment ID can be passed (as strings), but to get just the http_endpoint I have to have access to the entire cluster object in Terraform, which presents difficulties (i.e. I have to make the entire cluster state available to the topic pipeline, read into a data source, just to get this endpoint and nothing else).
In the Confluent CLI v2 this parameter is completely unnecessary; I'm not even sure there's a way to pass it as an argument.
The credentials {} block can accept those parameters as arguments, read from an AWS SSM parameter or a similar secure configuration store. Bear in mind that I want to execute the topic pipeline with a Confluent API key that does not have the permissions to create that user (i.e. pipeline #1, or an intermediate pipeline between it and this one, has created the service account and the API keys and injected them into my secure configuration store). This is workable (unlike what I currently see with http_endpoint), but very clunky.
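For instance, a minimal sketch of that clunky-but-workable wiring, assuming the key and secret are stored in AWS SSM Parameter Store under hypothetical parameter names, and the cluster ID and endpoint arrive as plain string variables:

variable "kafka_cluster_id" { type = string }
variable "kafka_http_endpoint" { type = string }

# Hypothetical parameter names; adjust to your own store layout.
data "aws_ssm_parameter" "kafka_api_key" {
  name = "/confluent/staging/kafka-api-key"
}

data "aws_ssm_parameter" "kafka_api_secret" {
  name            = "/confluent/staging/kafka-api-secret"
  with_decryption = true
}

resource "confluentcloud_kafka_topic" "orders" {
  kafka_cluster    = var.kafka_cluster_id
  topic_name       = "orders"
  partitions_count = 4
  http_endpoint    = var.kafka_http_endpoint

  credentials {
    key    = data.aws_ssm_parameter.kafka_api_key.value
    secret = data.aws_ssm_parameter.kafka_api_secret.value
  }
}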
Again, in the Confluent CLI v2 it is not necessary to have pre-created credentials in order to create a topic. When I create a topic with the v2 CLI, I do not need any of these parameters, so I know that none of them are, or should be, necessary for creating a topic:
confluent kafka topic create <my_topic> --partitions 3 --environment <env_id> --cluster <cluster_id>
IMO the v2 CLI is pretty clean, and it would be nice if the TF provider followed its example.
Thanks for your help.
Thanks for the message @scotartt!
Sounds like you'd prefer this resource definition then (similar to the CLI design), is that accurate?
resource "confluentcloud_kafka_topic" "orders" {
kafka_cluster = confluentcloud_kafka_cluster.basic.id
environment {
id = "env-12345"
}
topic_name = "orders"
partitions_count = 4
credentials {
key = confluentcloud_api_key.app-manager-kafka-api-key.id
secret = confluentcloud_api_key.app-manager-kafka-api-key.secret
}
}
to avoid having the http_endpoint attribute?
As you mentioned, one quick workaround could be to use a Kafka cluster data source:
data "confluentcloud_kafka_cluster" "basic" {
id = "lkc-abc123"
environment {
id = "env-xyz456"
}
}
resource "confluentcloud_kafka_topic" "orders" {
kafka_cluster = data.confluentcloud_kafka_cluster.basic.id
topic_name = "orders"
partitions_count = 4
credentials {
key = confluentcloud_api_key.app-manager-kafka-api-key.id
secret = confluentcloud_api_key.app-manager-kafka-api-key.secret
}
http_endpoint = data.confluentcloud_kafka_cluster.test-basic-cluster.http_endpoint
}
Hi guys, any updates on the planned release date for the new confluentcloud_api_key resource? This is quite crucial for us, as the Terraform provider is not really useful without a way to manage API keys. Greetings, Valentin
👋 @vweckerle @bohdanverdyi, we're targeting April 28th for the release of a new version of the TF Provider (with the api_key resource) 🤞
@Jaxwood, @Marcus-James-Adams, @ronald05arias, @czerasz-mineiros, @mparker-variant, @jorgenfries, @PlugaruT, @mikhailznak, @patrickschmelter, @scotartt, @vweckerle: we're very excited to let you know we've just published a new version of the TF Provider that includes the api_key resource, among other very exciting improvements. It enables fully automated provisioning of our key Kafka workflows (see the demo) with no more manual intervention, making it our biggest and most impactful release yet.
The only gotcha: we've renamed it from confluentinc/confluentcloud to confluentinc/confluent, but we published a migration guide, so it should be fairly straightforward. The existing confluentinc/confluentcloud will be deprecated soon, so we'd recommend switching as soon as possible.
The new confluentinc/confluent provider also includes a lot of sample configurations, so you won't need to write them from scratch. You can find them here, and a full list of changes here.
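For reference, a minimal sketch of what the switch might look like, assuming the migration guide follows the usual Terraform provider-rename pattern (repoint required_providers at the new source, then move existing state over); the authoritative steps and version pins are in the official guide:

terraform {
  required_providers {
    confluent = {
      source = "confluentinc/confluent"
      # Pin a version per the Terraform Registry.
    }
  }
}

# Existing state can then be migrated to the new provider address, e.g.:
#   terraform state replace-provider confluentinc/confluentcloud confluentinc/confluent

Note that resource type names presumably change with the rename as well (e.g. confluentcloud_kafka_topic becoming confluent_kafka_topic); check the migration guide for the exact mapping.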
It would be super useful to be able to give a service account the rights to access a cluster before creating it.
I have created a service account and am able to create a cluster with its access key. But, as far as I understand, in order to access the newly created cluster and create topics in it, I have to log in to Confluent Cloud, create new access keys for that specific cluster, and use those to create topics.
Ideally that would be possible using the global access keys, so that you could automatically create a new cluster, create all kinds of topics and configurations there, run some extensive tests, and delete the cluster afterwards.