Closed bluedog13 closed 2 years ago
@bluedog13 Would dynamic Terraform blocks help your setup? Not the prettiest but could help with your code re-use:
resource "confluentcloud_kafka_cluster" "example" {
  display_name = var.example_kafka_cluster_name
  availability = "SINGLE_ZONE"
  cloud        = "AZURE"
  region       = var.azure_us_east_2
  dynamic "basic" {
    for_each = var.type == "basic" ? [1] : []
    content {}
  }
  dynamic "standard" {
    for_each = var.type == "standard" ? [1] : []
    content {}
  }
  dynamic "dedicated" {
    for_each = var.type == "dedicated" ? [1] : []
    content {
      cku = 1
    }
  }
}
Example variable definition:
variable "type" {
  type        = string
  description = "Type of Kafka cluster to deploy"
  validation {
    condition     = contains(["dedicated", "basic", "standard"], var.type)
    error_message = "Cluster type must be \"basic\", \"standard\", or \"dedicated\"."
  }
}
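With the `type` variable in place, each workspace or environment can select its cluster type through its own variable file instead of editing the code. A minimal sketch (the file names and values below are hypothetical):

```hcl
# dev.tfvars (hypothetical) -- lower environment gets a Basic cluster
type = "basic"

# prod.tfvars (hypothetical) -- higher environment gets a Dedicated cluster
type = "dedicated"
```

Then apply the same configuration with, e.g., `terraform apply -var-file=prod.tfvars`.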
Thank you kindly. I will look into this. I was not aware of the dynamic workaround option.
@jfosdick thanks for helping out, that looks like an amazing idea!
It would help a lot if the cluster type had the ability to be passed/read as an argument. Something along the lines of
type = var.cluster_type # basic/standard/dedicated
@bluedog13 the reason we're a little hesitant to go with this design is that some config options only make sense for a Dedicated Kafka cluster (and most likely we won't expect any config for Basic Kafka clusters at all). In that sense, the cluster type and its config are tightly coupled (as you can see in the API docs as well), rather than being two unrelated attributes in the same list. In other words, we compared
type = basic
config {}
# and
type = dedicated
config {
  cku = 2
}

vs

basic {}
# and
dedicated {
  cku = 2
  # potentially some other configs
}
and we liked the latter a bit more.
Hi, @jfosdick
I've tried to use your suggestion:
resource "confluentcloud_kafka_cluster" "this" {
  display_name = var.cluster_name
  availability = var.availability
  cloud        = var.cloud
  region       = var.region
  dynamic "basic" {
    for_each = var.type == "basic" ? [1] : []
    content {}
  }
  dynamic "standard" {
    for_each = var.type == "standard" ? [1] : []
    content {}
  }
  dynamic "dedicated" {
    for_each = var.type == "dedicated" ? [1] : []
    content {
      cku = var.dedicated_cku
    }
  }
  environment {
    id = var.environment_id
  }
}
But I got this error:
Error: Invalid combination of arguments
on main.tf line 5, in resource "confluentcloud_kafka_cluster" "this":
5: resource "confluentcloud_kafka_cluster" "this" {
"basic": only one of `basic,dedicated,standard` can be specified, but
`basic,standard` were specified.
Error: Invalid combination of arguments
on main.tf line 5, in resource "confluentcloud_kafka_cluster" "this":
5: resource "confluentcloud_kafka_cluster" "this" {
"standard": only one of `basic,dedicated,standard` can be specified, but
`basic,standard` were specified.
Error: Invalid combination of arguments
on main.tf line 5, in resource "confluentcloud_kafka_cluster" "this":
5: resource "confluentcloud_kafka_cluster" "this" {
"dedicated": only one of `basic,dedicated,standard` can be specified, but
`basic,standard` were specified.
I've tried to resolve this error but all my attempts failed.
Would you have any suggestions to resolve it?
Thanks!
@jmborsani based on the error message, it looks like you've somehow passed both `basic` and `standard`.
Could you try the following instead (that's the full code for @jfosdick's idea)? Create an `examples` directory and then create 3 files in it: `examples/main.tf`, `examples/variables.tf`, and `examples/terraform.tfvars`:
* `main.tf`:
# main.tf
# Configure Confluent Cloud provider
terraform {
  required_providers {
    confluentcloud = {
      source  = "confluentinc/confluentcloud"
      version = "0.5.0"
    }
  }
}

provider "confluentcloud" {}

resource "confluentcloud_kafka_cluster" "example" {
  display_name = var.example_kafka_cluster_name
  availability = "SINGLE_ZONE"
  cloud        = "AZURE"
  region       = var.azure_region
  dynamic "basic" {
    for_each = var.type == "basic" ? [1] : []
    content {}
  }
  dynamic "standard" {
    for_each = var.type == "standard" ? [1] : []
    content {}
  }
  dynamic "dedicated" {
    for_each = var.type == "dedicated" ? [1] : []
    content {
      cku = 1
    }
  }
  environment {
    id = var.environment
  }
}
* `terraform.tfvars`:
type                       = "standard"
example_kafka_cluster_name = "test_cluster"
azure_region               = "centralus"
environment                = "env-12345"
* `variables.tf`:
variable "type" {
  type        = string
  description = "Type of Kafka cluster to deploy"
  validation {
    condition     = contains(["dedicated", "basic", "standard"], var.type)
    error_message = "Cluster type must be \"basic\", \"standard\", or \"dedicated\"."
  }
}

variable "example_kafka_cluster_name" {
  type        = string
  description = "Kafka Cluster name"
}

variable "azure_region" {
  type        = string
  description = "Azure region"
}

variable "environment" {
  type        = string
  description = "Environment ID"
}
### Testing
$ terraform validate
Success! The configuration is valid.
$ terraform plan
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:
  # confluentcloud_kafka_cluster.example will be created
  + resource "confluentcloud_kafka_cluster" "example" {
      + api_version        = (known after apply)
      + availability       = "SINGLE_ZONE"
      + bootstrap_endpoint = (known after apply)
      + cloud              = "AZURE"
      + display_name       = "test_cluster"
      + http_endpoint      = (known after apply)
      + id                 = (known after apply)
      + kind               = (known after apply)
      + rbac_crn           = (known after apply)
      + region             = "centralus"

      + environment {
          + id = "env-12345"
        }

      + standard {}
    }

Plan: 1 to add, 0 to change, 0 to destroy.
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
Let me know if it helps.
Hi, @linouk23! That worked! I also needed to update the Terraform version to 0.15.5:
Terraform v0.15.5
on linux_amd64
+ provider registry.terraform.io/confluentinc/confluentcloud v0.5.0
+ provider registry.terraform.io/hashicorp/vault v3.2.1
TF-Plan:
Terraform will perform the following actions:
# confluentcloud_kafka_cluster.this will be created
+ resource "confluentcloud_kafka_cluster" "this" {
+ api_version = (known after apply)
+ availability = "MULTI_ZONE"
+ bootstrap_endpoint = (known after apply)
+ cloud = "<cloud>"
+ display_name = "test_cluster"
+ http_endpoint = (known after apply)
+ id = (known after apply)
+ kind = (known after apply)
+ rbac_crn = (known after apply)
+ region = "<region>"
+ basic {}
+ environment {
+ id = "env-12345"
}
}
Plan: 1 to add, 0 to change, 0 to destroy.
Before, I was running with terraform 0.14.8.
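Since the dynamic-block workaround only behaved correctly after the Terraform upgrade, it may be worth pinning a minimum CLI version so older versions fail fast instead of producing the confusing error above. A minimal sketch (the `>= 0.15.5` floor is an assumption based on the version that worked here):

```hcl
terraform {
  # Refuse to run on Terraform versions older than the one that
  # handled the dynamic basic/standard/dedicated blocks correctly.
  required_version = ">= 0.15.5"

  required_providers {
    confluentcloud = {
      source  = "confluentinc/confluentcloud"
      version = "0.5.0"
    }
  }
}
```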
Thanks again! :)
Currently, we use the below code to spin up a Kafka cluster. (As an example, we are spinning up a Basic cluster below for a lower environment.)
If the code above has to be re-used in a higher environment that needs a Standard or Dedicated cluster rather than Basic, this setup cannot be used as-is, since the cluster type has to be updated in the code. This is the case even with separate workspaces for different environments.
The only way to do this right now is to have a separate directory and repeat the code, except for the cluster type.
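One way to avoid the per-environment copy is to wrap the cluster resource in a local module and pass the type per environment. A hedged sketch, assuming a hypothetical module at `./modules/kafka-cluster` that contains the resource and exposes a `type` variable like the one discussed in this thread:

```hcl
# environments/dev/main.tf (hypothetical layout)
module "kafka" {
  source = "../../modules/kafka-cluster"

  cluster_name = "dev-cluster"
  type         = "basic" # a higher environment would pass "standard" or "dedicated"
}
```

Each environment directory then holds only this small call, while the resource definition lives in one place.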
It would help a lot if the cluster type had the ability to be passed/read as an argument. Something along the lines of
type = var.cluster_type # basic/standard/dedicated
Thanks.