Closed GreggSchofield closed 5 months ago
Note that upgrading an existing cluster from v2.3.0 to the head of my fork will result in a plan like the following:
Terraform will perform the following actions:

  # module.complete_mks_cluster.aws_msk_cluster.this[0] will be updated in-place
  ~ resource "aws_msk_cluster" "this" {
        id   = "arn:aws:kafka:eu-west-1:account-id:cluster/data-streaming-eu-west-1-stable-int-cluster/cluster-id"
        tags = {}
        # (13 unchanged attributes hidden)

      ~ configuration_info {
          ~ arn = "arn:aws:kafka:eu-west-1:account-id:configuration/data-streaming-eu-west-1-stable-int-configuration/89f1e362-f4e0-4964-ae38-12fae754a66c-8" -> (known after apply)
            # (1 unchanged attribute hidden)
        }

        # (6 unchanged blocks hidden)
    }

  # module.complete_mks_cluster.aws_msk_configuration.this[0] must be replaced
+/- resource "aws_msk_configuration" "this" {
      ~ arn             = "arn:aws:kafka:eu-west-1:account-id:configuration/data-streaming-eu-west-1-stable-int-configuration/configuration-id" -> (known after apply)
      ~ id              = "arn:aws:kafka:eu-west-1:account-id:configuration/data-streaming-eu-west-1-stable-int-configuration/configuration-id" -> (known after apply)
      ~ latest_revision = 1 -> (known after apply)
      ~ name            = "data-streaming-eu-west-1-stable-int-configuration" -> (known after apply) # forces replacement
        # (1 unchanged attribute hidden)
    }

  # module.complete_mks_cluster.random_id.this will be created
  + resource "random_id" "this" {
      + b64_std     = (known after apply)
      + b64_url     = (known after apply)
      + byte_length = 8
      + dec         = (known after apply)
      + hex         = (known after apply)
      + id          = (known after apply)
    }

Plan: 2 to add, 1 to change, 1 to destroy.
Whilst this shouldn't constitute a breaking change in the module API, let me know if you want this documented somewhere.
When can this PR be merged?
@antonbabenko why does no one review this PR?
@GreggSchofield welcome to the club) https://github.com/terraform-aws-modules/terraform-aws-msk-kafka-cluster/pull/12 @bryantbiggs @nawarajshahi I hope you now have evidence that the configuration should change the version.
This PR has been automatically marked as stale because it has been open 30 days with no activity. Remove the stale label or comment, or this PR will be closed in 10 days.
This seems to be a reasonable solution to the problem that a lot of us have faced, as specified on the PR description. Do we have an ongoing discussion somewhere else or are we waiting for something else?
@GreggSchofield @mvoitko @bryantbiggs
This PR is included in version 2.5.0 :tada:
I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Are there issues when upgrading MSK clusters with this module that result in naming conflicts similar to what this PR was intended to solve? - #40
Description
Motivation and Context
For the latest version of this module (v2.3.0), an operator cannot change the MSK configuration for a cluster which has already been created with this module. As pointed out by @ascpikmin in issue #16, the `aws_msk_configuration` resource requires a `lifecycle` block with `create_before_destroy = true` set. This in turn requires the `name` attribute of the `aws_msk_configuration` resource to be unique. This pull request aims to resolve issue #16.
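The pattern described above can be sketched roughly as follows. This is a minimal illustration of the approach, not the module's actual source; the variable names are assumptions:

```hcl
# Random suffix so each new configuration gets a unique name, which lets
# Terraform create the replacement before destroying the old configuration.
resource "random_id" "this" {
  byte_length = 8
}

resource "aws_msk_configuration" "this" {
  # Unique name: user-supplied base name plus the random suffix (illustrative)
  name = "${var.name}-${random_id.this.hex}"

  # Render the property map into the "key = value" lines MSK expects
  server_properties = join("\n", [
    for k, v in var.configuration_server_properties : "${k} = ${v}"
  ])

  lifecycle {
    # Create the new configuration first, then repoint the cluster,
    # then destroy the old configuration.
    create_before_destroy = true
  }
}
```

Without the unique suffix, `create_before_destroy` would fail: the new configuration would collide with the still-existing old one on `name`.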
Breaking Changes
This change preserves backwards compatibility with the current major version.
How Has This Been Tested?
This has been tested by executing a `terraform apply` using the following module declaration, then setting the `configuration_server_properties` attribute to `{"auto.create.topics.enable" = true}` to force a new MSK cluster configuration version to be created.

Given the current composition of this module, in particular the fact that the `aws_msk_configuration` resource is created within the module scope, executing `terraform plan` will yield a single in-place update. Only once this has been applied can the operator then execute `terraform plan` again to yield the desired in-place update for the cluster itself.

- I have updated at least one of the `examples/*` to demonstrate and validate my change(s)
- I have tested and validated these changes using one or more of the provided `examples/*` projects
- I have executed `pre-commit run -a` on my pull request

https://github.com/terraform-aws-modules/terraform-aws-msk-kafka-cluster/assets/28576265/1c35263e-ea3e-4081-b5cc-36efa80fb5e2