terraform-aws-modules / terraform-aws-msk-kafka-cluster

Terraform module to create AWS MSK (Managed Streaming for Kafka) resources 🇺🇦
https://registry.terraform.io/modules/terraform-aws-modules/msk-kafka-cluster/aws
Apache License 2.0

aws_msk_configuration failed during AWS MSK version upgrade #16

Closed ascpikmin closed 5 months ago

ascpikmin commented 9 months ago

Description

When I try to update the Kafka version through the module, the aws_msk_configuration resource fails: the version change requires the configuration to be destroyed and recreated, but it cannot be destroyed while it is still in use by the MSK cluster.
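For context, the coupling that causes this looks roughly like the following (a minimal sketch, not the module's exact code; the placeholder server_properties value is illustrative): the cluster's configuration_info block references the configuration's ARN, so Terraform cannot destroy the configuration while the cluster still points at it.

resource "aws_msk_configuration" "this" {
  # Destroying this resource fails while any cluster still references it.
  name              = var.configuration_name
  kafka_versions    = [var.kafka_version]
  server_properties = "auto.create.topics.enable = false" # placeholder for the sketch
}

resource "aws_msk_cluster" "this" {
  cluster_name           = var.name
  kafka_version          = var.kafka_version
  number_of_broker_nodes = var.number_of_broker_nodes

  broker_node_group_info {
    client_subnets  = var.broker_node_client_subnets
    instance_type   = var.broker_node_instance_type
    security_groups = var.broker_node_security_groups
  }

  # The cluster holds a reference to the configuration ARN; this reference
  # is what blocks the delete-then-create ordering during a version upgrade.
  configuration_info {
    arn      = aws_msk_configuration.this.arn
    revision = aws_msk_configuration.this.latest_revision
  }
}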


⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (ONLY if your state is stored remotely, which is hopefully the best practice you are following): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

Module version: v2.3.0 (pinned via the source ref in the reproduction code below)

Reproduction Code [Required]

module "msk_cluster" {

  depends_on = [module.s3_bucket_for_logs, module.cluster_sg, module.kms]
  source     = "github.com/terraform-aws-modules/terraform-aws-msk-kafka-cluster?ref=v2.3.0"

  name                   = local.msk_cluster_name
  kafka_version          = var.kafka_version
  number_of_broker_nodes = var.number_of_broker_nodes
  enhanced_monitoring    = var.enhanced_monitoring

  broker_node_client_subnets  = var.broker_node_client_subnets
  broker_node_instance_type   = var.broker_node_instance_type
  broker_node_security_groups = concat(
    [for sg in module.cluster_sg : sg.security_group_id],
    var.extra_security_groups_ids
  )

  broker_node_storage_info = {
    ebs_storage_info = { volume_size = var.volume_size }
  }

  encryption_in_transit_client_broker = var.encryption_in_transit_client_broker
  encryption_in_transit_in_cluster    = var.encryption_in_transit_in_cluster
  encryption_at_rest_kms_key_arn      = module.kms.key_arn

  jmx_exporter_enabled                   = var.jmx_exporter_enabled
  node_exporter_enabled                  = var.node_exporter_enabled
  cloudwatch_logs_enabled                = var.cloudwatch_logs_enabled
  s3_logs_enabled                        = var.s3_logs_enabled
  s3_logs_bucket                         = module.s3_bucket_for_logs.s3_bucket_id
  s3_logs_prefix                         = var.s3_logs_prefix
  cloudwatch_log_group_retention_in_days = var.cloudwatch_log_group_retention_in_days
  cloudwatch_log_group_kms_key_id        = var.cloudwatch_log_group_kms_key_id
  configuration_server_properties        = var.configuration_server_properties
  configuration_name                     = "${local.msk_cluster_name}-${replace(var.kafka_version, ".", "-")}"
  configuration_description              = local.msk_cluster_name

  tags = merge(
    var.tags,
    {
      Name = local.msk_cluster_name
    }
  )
}

Steps to reproduce the behavior:

  1. Apply the module above with an initial kafka_version.
  2. Change var.kafka_version to a newer version and run terraform apply again.

Expected behavior

The new aws_msk_configuration resource should be created before the old one is deleted.

Actual behavior

Terraform tries to delete the previous aws_msk_configuration resource before creating the new one, and the deletion fails because the configuration is still in use by the cluster.

Terminal Output Screenshot(s)

module.msk_cluster.aws_msk_configuration.this[0]: Destroying... [id=arn:aws:kafka:eu-west-1:xxxxxxxxxxxxxx:configuration/example/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx]

Error: deleting MSK Configuration (arn:aws:kafka:eu-west-1:xxxxxxxxxxxxxx:configuration/example/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx): BadRequestException: Configuration is in use by one or more clusters. Dissociate the configuration from the clusters.
 {
   RespMetadata: {
     StatusCode: 400,
     RequestID: "0bb9fe5d-ee26-4dad-8a81-8c3fa6c06483"
   },
   InvalidParameter: "arn",
   Message_: "Configuration is in use by one or more clusters. Dissociate the configuration from the clusters."
 }

Additional context

If you set the configuration_name parameter to a dynamic name and manually add a lifecycle { create_before_destroy = true } block to the aws_msk_configuration resource, the upgrade succeeds, so this may be the right fix.
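For reference, a minimal sketch of that workaround applied to the module's aws_msk_configuration resource (the name derivation mirrors the configuration_name used in the reproduction code above; the join() rendering of server_properties is an assumption about how the module serializes the map):

resource "aws_msk_configuration" "this" {
  # A name that changes with the Kafka version forces a new, uniquely named
  # configuration to be created on each upgrade, instead of an in-place
  # destroy of the configuration the cluster is still using.
  name              = "${local.msk_cluster_name}-${replace(var.kafka_version, ".", "-")}"
  kafka_versions    = [var.kafka_version]
  server_properties = join("\n", [for k, v in var.configuration_server_properties : "${k} = ${v}"])

  lifecycle {
    # Create the replacement configuration (and let the cluster switch over
    # to it) before attempting to delete the old one.
    create_before_destroy = true
  }
}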

bryantbiggs commented 8 months ago

I think we can support that here if you would like to open a PR for it

GreggSchofield commented 8 months ago

@bryantbiggs I am interested in resolving this issue as well! I have opened a pull request which I believe solves it. Could you please review it?

All the best,

Gregg

github-actions[bot] commented 7 months ago

This issue has been automatically marked as stale because it has been open 30 days with no activity. Remove the stale label or add a comment, or this issue will be closed in 10 days.

GreggSchofield commented 7 months ago

Hi @bryantbiggs is there any chance you can take a look at https://github.com/terraform-aws-modules/terraform-aws-msk-kafka-cluster/pull/17? Cheers

mvoitko commented 7 months ago

@bryantbiggs you might want to take a closer look at the PRs that have already been opened and closed. This issue may have been fixed long ago.

github-actions[bot] commented 6 months ago

This issue has been automatically marked as stale because it has been open 30 days with no activity. Remove the stale label or add a comment, or this issue will be closed in 10 days.

github-actions[bot] commented 5 months ago

This issue was automatically closed because it remained stale for 10 days.

github-actions[bot] commented 4 months ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.