hashicorp / terraform-provider-aws

The AWS Provider enables Terraform to manage AWS resources.
https://registry.terraform.io/providers/hashicorp/aws
Mozilla Public License 2.0

MSK Configuration ConflictException #8898

Closed navinsnn53 closed 5 years ago

navinsnn53 commented 5 years ago

Hi all,

I have created an MSK cluster with a custom configuration. I have destroyed the cluster, but it still shows the error below. Can you guys guide me on where I am wrong? From the AWS end there is no option to see existing configurations without creating a cluster.

Error: error creating MSK Configuration: ConflictException: A resource with this name already exists.
    status code: 409, request id: 4165917f-88ee-11e9-b19e-7f35f422cc65

  on msk.tf line 29, in resource "aws_msk_configuration" "sample":
  29: resource "aws_msk_configuration" "sample" {

Also, just by declaring the resource "aws_msk_configuration", will it be linked to the resource "aws_msk_cluster"? Here is my code for reference:

MSK_Cluster Creation

resource "aws_msk_cluster" "msk" {
  cluster_name           = "TestingCluster"
  kafka_version          = "2.1.0"
  number_of_broker_nodes = 3

  broker_node_group_info {
    instance_type = "kafka.m5.large"
    client_subnets = [
      "${var.subnet_c}",
      "${var.subnet_b}",
      "${var.subnet_a}",
    ]
    ebs_volume_size = 5
    security_groups = ["${var.msk_sg}"]
  }

  tags = {
    Name = "TestingCluster"
  }
}

MSK Configuration

resource "aws_msk_configuration" "msk" {
  kafka_versions = ["2.1.0"]
  name           = "sample"

  server_properties = <<PROPERTIES
auto.create.topics.enable = true
delete.topic.enable = true
PROPERTIES
}

bflad commented 5 years ago

Hi @navinsnn53 👋

I have destroyed the cluster, but it still shows the error below. Can you guys guide me on where I am wrong?

In MSK, Clusters and Configurations are two separate pieces of functionality. The same Configuration can be assigned to multiple Clusters.

The MSK Configuration API does not currently have a method to delete MSK Configurations. If you would like this functionality, please create a feature request through an AWS Support case, or contact your AWS account manager if you have one.

From the AWS end there is no option to see existing configurations without creating a cluster.

The AWS CLI supports listing configurations on the latest version:

$ aws kafka list-configurations
{
    "Configurations": [
        {
            "Arn": "arn:aws:kafka:us-west-2:--OMITTED--:configuration/kk7h810o7o7wqb67xbljacjs90yfdpx4xdh3ujc44zld4ywdtl17qjlho96gdgm3u/764ca18b-72f2-40a9-be58-40c702dad622-3",
            "CreationTime": "2019-05-22T17:30:16.49Z",
            "KafkaVersions": [
                "2.1.0"
            ],
            "LatestRevision": {
                "CreationTime": "2019-05-22T17:30:16.49Z",
                "Revision": 1
            },
            "Name": "kk7h810o7o7wqb67xbljacjs90yfdpx4xdh3ujc44zld4ywdtl17qjlho96gdgm3u"
        },
...

To get this functionality added within the AWS console, you will also need to reach out to AWS Support.

Also, just by declaring the resource "aws_msk_configuration", will it be linked to the resource "aws_msk_cluster"?

No, you must configure this in the aws_msk_cluster resource using the configuration_info configuration block, e.g.

resource "aws_msk_configuration" "example" {
  # ... other configuration ...
}

resource "aws_msk_cluster" "example" {
  # ... other configuration ...

  configuration_info {
    arn      = "${aws_msk_configuration.example.arn}"
    revision = "${aws_msk_configuration.example.latest_revision}"
  }
}

Hope this helps. If you're looking for general assistance, please note that we use GitHub issues in this repository for tracking bugs and enhancements in the Terraform AWS Provider codebase, rather than for questions. While we may be able to help with certain simple problems here, it's generally better to use one of the community forums, where far more people are ready to help; the GitHub issues here are generally monitored only by a few maintainers and dedicated community members interested in code development of the Terraform AWS Provider itself.

navinsnn53 commented 5 years ago

Thanks so much. Also, can you please suggest an open community forum for Terraform, so that I can stop posting my support-related queries to GitHub?

bflad commented 5 years ago

The HashiCorp Discuss site was recently released and will likely supersede some of the forums listed on the current Terraform community page.

navinsnn53 commented 5 years ago

Thanks

cdenneen commented 5 years ago

@bflad since there is no way to delete the configuration, after doing a terraform destroy the name must now be modified in order to do a terraform apply again. Part of me wishes it would sync the state, like terraform import does, and then apply without the following error:

Error: error creating MSK Configuration: ConflictException: A resource with this name already exists.
    status code: 409, request id: REDACTED

  on example_msk.tf line 58, in resource "aws_msk_configuration" "config1":
  58: resource "aws_msk_configuration" "config1" {

I will be raising the aws kafka delete-configuration request with our AWS Support team, but can you think of something like:

resource "aws_msk_configuration" "config1" {
  kafka_versions = ["2.1.0"]
  name = "test-mskconfig-%[1]q" # Like some sort of dynamic variable or something

  server_properties = <<PROPERTIES
auto.create.topics.enable = true
delete.topic.enable = true
log.retention.ms = 259200000
PROPERTIES
}
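
For what it's worth, one way to sketch that "dynamic variable" idea (purely an assumption on my part, not provider-endorsed) is the random_id resource from the random provider, with a keeper tied to the server properties so a fresh name is generated whenever they change:

```hcl
# Hypothetical sketch: derive a new configuration name whenever the
# server properties change, since an existing name cannot be reused.
locals {
  server_properties = <<PROPERTIES
auto.create.topics.enable = true
delete.topic.enable = true
log.retention.ms = 259200000
PROPERTIES
}

resource "random_id" "msk_config" {
  byte_length = 4

  # Changing the properties invalidates the keeper, forcing a new suffix.
  keepers = {
    server_properties = local.server_properties
  }
}

resource "aws_msk_configuration" "config1" {
  kafka_versions    = ["2.1.0"]
  name              = "test-mskconfig-${random_id.msk_config.hex}"
  server_properties = local.server_properties
}
```

The trade-off is that every properties change leaves the old configuration behind in AWS, since there is no delete API to clean it up.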

cdenneen commented 5 years ago

@bflad I tried to manually import so that terraform apply would work, but it did not:

work/msk » terraform show

work/msk » terraform import aws_msk_configuration.config1 test-mskconfig
aws_msk_configuration.config1: Importing from ID "test-mskconfig"...
aws_msk_configuration.config1: Import complete!
  Imported aws_msk_configuration
aws_msk_configuration.config1: Refreshing state... [id=test-mskconfig]

Error: error describing MSK Configuration (test-mskconfig): BadRequestException: One or more of the parameters are not valid.
    status code: 400, request id: REDACTED

work/msk » aws kafka list-configurations
{
    "Configurations": [
        {
            "Arn": "arn:aws:kafka:us-east-1:XXXXXXXXXXXX:configuration/kafkatest-config1/REDACTED",
            "CreationTime": "2019-06-11T20:01:53.9Z",
            "KafkaVersions": [
                "2.1.0"
            ],
            "LatestRevision": {
                "CreationTime": "2019-06-11T20:01:53.9Z",
                "Revision": 1
            },
            "Name": "kafkatest-config1"
        },
        {
            "Arn": "arn:aws:kafka:us-east-1:XXXXXXXX:configuration/test-mskconfig/REDACTED",
            "CreationTime": "2019-06-11T20:28:14.361Z",
            "KafkaVersions": [
                "2.1.0"
            ],
            "LatestRevision": {
                "CreationTime": "2019-06-11T20:28:14.361Z",
                "Revision": 1
            },
            "Name": "test-mskconfig"
        }
    ]
}

cdenneen commented 5 years ago

@bflad looks like import would require another parameter in this case in order to describe a revision, so it would either need to specify a revision or default to the latest.

So something like terraform import aws_msk_configuration.config1 test-mskconfig.1 or terraform import aws_msk_configuration.config1 test-mskconfig.latest

Pretty sure that's what the bad parameter error is about, since in order to get the actual config you need to get the ARN and then fetch the revision itself.
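
To illustrate that split (with a made-up ARN; the real values come from list-configurations), the AWS CLI also describes a configuration by ARN, and reading the actual server properties additionally requires a revision number:

```console
# Describe the configuration itself (takes the ARN, not the name).
$ aws kafka describe-configuration \
    --arn "arn:aws:kafka:us-east-1:111122223333:configuration/test-mskconfig/EXAMPLE"

# Fetch the server properties of a specific revision.
$ aws kafka describe-configuration-revision \
    --arn "arn:aws:kafka:us-east-1:111122223333:configuration/test-mskconfig/EXAMPLE" \
    --revision 1
```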

braunreyes commented 5 years ago

In case anyone is interested here is my workaround for this issue.

I have a yaml file like this

msk:
  cluster_configuration: |-
    auto.create.topics.enable = false
    delete.topic.enable = true

I then pass this into my msk module:

module "msk" {
  source = "./msk"
  config = file("${path.module}/config_${terraform.workspace}.yml")
}

In my module I reference the YAML like this:

locals {
  config = yamldecode(var.config).msk
}

and then configure the configuration resource like this:

resource "aws_msk_configuration" "ccde_kafka" {
  kafka_versions = ["2.2.1"]
  name           = "ccde-kafka-${md5(local.config.cluster_configuration)}"
  description    = "kafka configuration for ccde-kafka"

  server_properties = <<PROPERTIES
${local.config.cluster_configuration}
PROPERTIES
}

If for some reason the md5 hash creates a name that is too long, you could also use a manual version bump to trigger the new configuration.

Given the limitations of the AWS MSK API, it seems Terraform should only support a name prefix for MSK configurations and, on any change, update the name suffix, which triggers a new configuration to use for updating the cluster. Yes, you could end up with a ton of configurations in AWS, but that is a limitation of the API rather than of Terraform, so there is not much you can do about it.

eupestov commented 5 years ago

Interesting approach @braunreyes, but won't it still fail in the case in question, i.e. if you try to re-create the environment with the same configuration properties? I do not think there is a problem with updating an existing configuration known to Terraform: it will produce another revision you can update your cluster(s) with.

braunreyes commented 5 years ago

This was more in response to @cdenneen's comment on Jun 13. I ran into an issue where updating the configuration was not working because of the behavior of the configuration name. My workaround was the only way I could create a fully idempotent process for updating the MSK configuration and applying it to the cluster.

ghost commented 5 years ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!