confluentinc / terraform-provider-confluent

Terraform Provider for Confluent

confluent_kafka_topic always shows configuration drift for unchanged topic config #99

Closed robertusnegoro closed 2 years ago

robertusnegoro commented 2 years ago

I am trying to make a change to one topic's configuration, but the terraform plan output always shows a configuration diff for topics that are not being changed/updated.

In the example below, I have 3 topics and I only change the topic-c retention from 7 days to 15 days, but topic-a and topic-b are reported as updated as well.

Example:

terraform {
  required_version = "~> 1.2.0"

  required_providers {
    confluent = {
      source  = "confluentinc/confluent"
      version = "1.4.0"
    }
  }
}

locals {
  retention = {
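    # values are in milliseconds: 604800000 = 7 * 24 * 60 * 60 * 1000, 1296000000 = 15 * 24 * 60 * 60 * 1000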
    _7days  = 604800000
    _15days = 1296000000
  }
  kafka_topics = {
    "topic-a" = {
      partitions_count    = 8
      min_insync_replicas = 2
      retention_ms        = local.retention._7days
    }
    "topic-b" = {
      partitions_count    = 16
      min_insync_replicas = 2
      retention_ms        = local.retention._7days
    }
    "topic-c" = {
      partitions_count    = 8
      min_insync_replicas = 2
      retention_ms        = local.retention._15days
    }
  }
}

resource "confluent_kafka_topic" "dev_topics" {
  kafka_cluster {
    id = confluent_kafka_cluster.kafka-cluster-dev.id
  }
  for_each         = local.kafka_topics
  topic_name       = each.key
  partitions_count = each.value.partitions_count
  config = {
    "min.insync.replicas"    = each.value.min_insync_replicas
    "retention.ms"           = each.value.retention_ms
    "message.timestamp.type" = "CreateTime"
    "segment.bytes"          = "536870912"
    "max.message.bytes"      = "2097164"
  }
  rest_endpoint = confluent_kafka_cluster.kafka-cluster-dev.rest_endpoint
  credentials {
    key    = confluent_api_key.dev_rw_api_key.id
    secret = confluent_api_key.dev_rw_api_key.secret
  }
}

Log:

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # confluent_kafka_topic.dev_topics["topic-a"] will be updated in-place
  ~ resource "confluent_kafka_topic" "dev_topics" {
      ~ config           = {
          + "max.message.bytes"      = "2097164"
          + "retention.ms"           = "604800000"
            # (3 unchanged elements hidden)
        }
        id               = "lkc-6kz1xx/topic-a"
        # (3 unchanged attributes hidden)

        # (2 unchanged blocks hidden)
    }

  # confluent_kafka_topic.dev_topics["topic-b"] will be updated in-place
  ~ resource "confluent_kafka_topic" "dev_topics" {
      ~ config           = {
          + "max.message.bytes"      = "2097164"
          + "retention.ms"           = "604800000"
            # (3 unchanged elements hidden)
        }
        id               = "lkc-6kz1xx/topic-b"
        # (3 unchanged attributes hidden)

        # (2 unchanged blocks hidden)
    }

  # confluent_kafka_topic.dev_topics["topic-c"] will be updated in-place
  ~ resource "confluent_kafka_topic" "dev_topics" {
      ~ config           = {
          + "max.message.bytes"      = "2097164"
          ~ "retention.ms"           = "1296000000" -> "604800000"
            # (3 unchanged elements hidden)
        }
        id               = "lkc-6kz1xx/topic-c"
        # (3 unchanged attributes hidden)

        # (2 unchanged blocks hidden)
    }
Plan: 0 to add, 3 to change, 0 to destroy.

Steps to reproduce:

terraform plan
linouk23 commented 2 years ago

@robertusnegoro thanks for opening an issue!

Could you split your example TF configuration into "before change" and "after change" so it's easier to reproduce?

robertusnegoro commented 2 years ago

@linouk23 Sure, here is the before-change code:

locals {
  retention = {
    _7days  = 604800000
    _15days = 1296000000
  }
  kafka_topics = {
    "topic-a" = {
      partitions_count    = 8
      min_insync_replicas = 2
      retention_ms        = local.retention._7days
    }
    "topic-b" = {
      partitions_count    = 16
      min_insync_replicas = 2
      retention_ms        = local.retention._7days
    }
    "topic-c" = {
      partitions_count    = 8
      min_insync_replicas = 2
      retention_ms        = local.retention._7days
    }
  }
}

resource "confluent_kafka_topic" "dev_topics" {
  kafka_cluster {
    id = confluent_kafka_cluster.kafka-cluster-dev.id
  }
  for_each         = local.kafka_topics
  topic_name       = each.key
  partitions_count = each.value.partitions_count
  config = {
    "min.insync.replicas"    = each.value.min_insync_replicas
    "retention.ms"           = each.value.retention_ms
    "message.timestamp.type" = "CreateTime"
    "segment.bytes"          = "536870912"
    "max.message.bytes"      = "2097164"
  }
  # rest_endpoint and credentials blocks omitted; same as in the snippet above
}

The after-change code is exactly the same as the snippet in the issue description above.

linouk23 commented 2 years ago

I tried it out with

$ terraform version 
Terraform v0.14.0

and everything works as expected 🤔:

✗ terraform apply --auto-approve
confluent_kafka_topic.dev_topics["topic-a"]: Creating...
confluent_kafka_topic.dev_topics["topic-c"]: Creating...
confluent_kafka_topic.dev_topics["topic-b"]: Creating...
confluent_kafka_topic.dev_topics["topic-b"]: Still creating... [10s elapsed]
confluent_kafka_topic.dev_topics["topic-c"]: Still creating... [10s elapsed]
confluent_kafka_topic.dev_topics["topic-a"]: Still creating... [10s elapsed]
confluent_kafka_topic.dev_topics["topic-c"]: Creation complete after 11s [id=lkc-xxmrrx/topic-c]
confluent_kafka_topic.dev_topics["topic-a"]: Creation complete after 12s [id=lkc-xxmrrx/topic-a]
confluent_kafka_topic.dev_topics["topic-b"]: Creation complete after 12s [id=lkc-xxmrrx/topic-b]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

✗ terraform plan                
confluent_kafka_topic.dev_topics["topic-b"]: Refreshing state... [id=lkc-xxmrrx/topic-b]
confluent_kafka_topic.dev_topics["topic-a"]: Refreshing state... [id=lkc-xxmrrx/topic-a]
confluent_kafka_topic.dev_topics["topic-c"]: Refreshing state... [id=lkc-xxmrrx/topic-c]

No changes. Infrastructure is up-to-date.

This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.

✗ terraform plan
confluent_kafka_topic.dev_topics["topic-b"]: Refreshing state... [id=lkc-xxmrrx/topic-b]
confluent_kafka_topic.dev_topics["topic-a"]: Refreshing state... [id=lkc-xxmrrx/topic-a]
confluent_kafka_topic.dev_topics["topic-c"]: Refreshing state... [id=lkc-xxmrrx/topic-c]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # confluent_kafka_topic.dev_topics["topic-c"] will be updated in-place
  ~ resource "confluent_kafka_topic" "dev_topics" {
      ~ config           = {
          ~ "retention.ms"           = "604800000" -> "1296000000"
            # (4 unchanged elements hidden)
        }
        id               = "lkc-xxmrrx/topic-c"
        # (3 unchanged attributes hidden)

        # (2 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

✗ terraform apply --auto-approve
confluent_kafka_topic.dev_topics["topic-b"]: Refreshing state... [id=lkc-xxmrrx/topic-b]
confluent_kafka_topic.dev_topics["topic-c"]: Refreshing state... [id=lkc-xxmrrx/topic-c]
confluent_kafka_topic.dev_topics["topic-a"]: Refreshing state... [id=lkc-xxmrrx/topic-a]
confluent_kafka_topic.dev_topics["topic-c"]: Modifying... [id=lkc-xxmrrx/topic-c]
confluent_kafka_topic.dev_topics["topic-c"]: Still modifying... [id=lkc-xxmrrx/topic-c, 10s elapsed]
confluent_kafka_topic.dev_topics["topic-c"]: Modifications complete after 10s [id=lkc-xxmrrx/topic-c]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
✗ terraform plan                
confluent_kafka_topic.dev_topics["topic-b"]: Refreshing state... [id=lkc-xxmrrx/topic-b]
confluent_kafka_topic.dev_topics["topic-a"]: Refreshing state... [id=lkc-xxmrrx/topic-a]
confluent_kafka_topic.dev_topics["topic-c"]: Refreshing state... [id=lkc-xxmrrx/topic-c]

No changes. Infrastructure is up-to-date.

This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.

Let me try again with 1.2.9.

linouk23 commented 2 years ago

Same thing with 1.2.9:

✗ terraform version
Terraform v1.2.9
on darwin_amd64
+ provider registry.terraform.io/confluentinc/confluent v1.4.0

✗ terraform apply --auto-approve
confluent_kafka_topic.dev_topics["topic-c"]: Creating...
confluent_kafka_topic.dev_topics["topic-b"]: Creating...
confluent_kafka_topic.dev_topics["topic-a"]: Creating...
confluent_kafka_topic.dev_topics["topic-a"]: Still creating... [10s elapsed]
confluent_kafka_topic.dev_topics["topic-c"]: Still creating... [10s elapsed]
confluent_kafka_topic.dev_topics["topic-b"]: Still creating... [10s elapsed]
confluent_kafka_topic.dev_topics["topic-b"]: Creation complete after 11s [id=lkc-xxmrrx/topic-b]
confluent_kafka_topic.dev_topics["topic-c"]: Creation complete after 11s [id=lkc-xxmrrx/topic-c]
confluent_kafka_topic.dev_topics["topic-a"]: Creation complete after 11s [id=lkc-xxmrrx/topic-a]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
✗ terraform plan                
confluent_kafka_topic.dev_topics["topic-a"]: Refreshing state... [id=lkc-xxmrrx/topic-a]
confluent_kafka_topic.dev_topics["topic-b"]: Refreshing state... [id=lkc-xxmrrx/topic-b]
confluent_kafka_topic.dev_topics["topic-c"]: Refreshing state... [id=lkc-xxmrrx/topic-c]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

✗ terraform plan
confluent_kafka_topic.dev_topics["topic-b"]: Refreshing state... [id=lkc-xxmrrx/topic-b]
confluent_kafka_topic.dev_topics["topic-c"]: Refreshing state... [id=lkc-xxmrrx/topic-c]
confluent_kafka_topic.dev_topics["topic-a"]: Refreshing state... [id=lkc-xxmrrx/topic-a]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # confluent_kafka_topic.dev_topics["topic-c"] will be updated in-place
  ~ resource "confluent_kafka_topic" "dev_topics" {
      ~ config           = {
          ~ "retention.ms"           = "604800000" -> "1296000000"
            # (4 unchanged elements hidden)
        }
        id               = "lkc-xxmrrx/topic-c"
        # (3 unchanged attributes hidden)

        # (2 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.

✗ terraform apply --auto-approve
confluent_kafka_topic.dev_topics["topic-b"]: Refreshing state... [id=lkc-xxmrrx/topic-b]
confluent_kafka_topic.dev_topics["topic-a"]: Refreshing state... [id=lkc-xxmrrx/topic-a]
confluent_kafka_topic.dev_topics["topic-c"]: Refreshing state... [id=lkc-xxmrrx/topic-c]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # confluent_kafka_topic.dev_topics["topic-c"] will be updated in-place
  ~ resource "confluent_kafka_topic" "dev_topics" {
      ~ config           = {
          ~ "retention.ms"           = "604800000" -> "1296000000"
            # (4 unchanged elements hidden)
        }
        id               = "lkc-xxmrrx/topic-c"
        # (3 unchanged attributes hidden)

        # (2 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
confluent_kafka_topic.dev_topics["topic-c"]: Modifying... [id=lkc-xxmrrx/topic-c]
confluent_kafka_topic.dev_topics["topic-c"]: Still modifying... [id=lkc-xxmrrx/topic-c, 10s elapsed]
confluent_kafka_topic.dev_topics["topic-c"]: Modifications complete after 11s [id=lkc-xxmrrx/topic-c]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
✗ terraform plan                
confluent_kafka_topic.dev_topics["topic-a"]: Refreshing state... [id=lkc-xxmrrx/topic-a]
confluent_kafka_topic.dev_topics["topic-b"]: Refreshing state... [id=lkc-xxmrrx/topic-b]
confluent_kafka_topic.dev_topics["topic-c"]: Refreshing state... [id=lkc-xxmrrx/topic-c]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

@robertusnegoro could you confirm all 3 topics were created via TF? There's a well-known friction where one has to specify all topic settings when importing a topic that was created from the UI, but that doesn't seem to be the case here.

robertusnegoro commented 2 years ago

Apparently there are one or two topics outside of these three that were not created from TF code, such as the topic that is automatically created when we create a managed connector. But these 3 topics are purely from TF.

Another thing (though I am not sure if it's related): these three topics are also used by confluent-replicator as both source and destination.

linouk23 commented 2 years ago

Two things:

  1. Could you confirm that terraform plan (after the 1st terraform apply) shows 0 changes?
  2. OK, one way to get more debug data would be to use the API to get a list of topic settings (for any topic) after terraform apply and check that for every topic setting, source is set to DYNAMIC_TOPIC_CONFIG:
    {
      "kind": "KafkaTopicConfigList",
      "metadata": {
        "self": ".../topics/ui_orders2/configs",
        "next": null
      },
      "data": [
        {
          ...,
          "cluster_id": "lkc-m2rg1q",
          "name": "cleanup.policy",
          "value": "delete",
          "is_read_only": false,
          "is_sensitive": false,
          "source": "DYNAMIC_TOPIC_CONFIG",
          "synonyms": [
            {
              "name": "cleanup.policy",
              "value": "delete",
              "source": "DYNAMIC_TOPIC_CONFIG"
            },
            {
              "name": "log.cleanup.policy",
              "value": "delete",
              "source": "DEFAULT_CONFIG"
            }
          ],
          "topic_name": "ui_orders2",
          "is_default": false
        },
        ...
      ]
    }

Here's a list of target topic settings:

    "min.insync.replicas"
    "retention.ms"
    "message.timestamp.type" 
    "segment.bytes"   
    "max.message.bytes"
robertusnegoro commented 2 years ago
  1. I can confirm that after the first apply, terraform plan shows 0 changes. But after a few minutes, it shows the permanent diff again.
  2. Here is the output for one sample topic:
{
  "kind": "KafkaTopic",
  "metadata": {
    "self": "https://pkc-xxxx.us-east1.gcp.confluent.cloud/kafka/v3/clusters/lkc-xxxx/topics/topic-a",
    "resource_name": "crn:///kafka=lkc-xxxx/topic=topic-a"
  },
  "cluster_id": "lkc-xxxx",
  "topic_name": "topic-a",
  "is_internal": false,
  "replication_factor": 3,
  "partitions_count": 2,
  "partitions": {
    "related": "https://pkc-xxxx.us-east1.gcp.confluent.cloud/kafka/v3/clusters/lkc-xxxx/topics/topic-a/partitions"
  },
  "configs": {
    "related": "https://pkc-xxxx.us-east1.gcp.confluent.cloud/kafka/v3/clusters/lkc-xxxx/topics/topic-a/configs"
  },
  "partition_reassignments": {
    "related": "https://pkc-xxxx.us-east1.gcp.confluent.cloud/kafka/v3/clusters/lkc-xxxx/topics/topic-a/partitions/-/reassignment"
  },
  "authorized_operations": []
}
linouk23 commented 2 years ago
  1. Will it still show 0 diff when running terraform plan after, say, 5 minutes?

  2. That's a great start! Could you send a GET request to the configs.related URL (https://pkc-xxxx.us-east1.gcp.confluent.cloud/kafka/v3/clusters/lkc-xxxx/topics/topic-a/configs) to list the topic settings?

robertusnegoro commented 2 years ago
  1. No, it shows the diff on unchanged topic configs. "No changes" only appears on the very first terraform plan, and only if I run it quickly after the apply finishes.
  2. Sure, here is what I got:
  {
    "kind": "KafkaTopicConfig",
    "metadata": {
      "self": "https://pkc-xxxx.us-east1.gcp.confluent.cloud/kafka/v3/clusters/lkc-xxxx/topics/topic-a/configs/min.insync.replicas",
      "resource_name": "crn:///kafka=lkc-xxxx/topic=topic-a/config=min.insync.replicas"
    },
    "cluster_id": "lkc-xxxx",
    "name": "min.insync.replicas",
    "value": "2",
    "is_read_only": false,
    "is_sensitive": false,
    "source": "DYNAMIC_TOPIC_CONFIG",
    "synonyms": [
      {
        "name": "min.insync.replicas",
        "value": "2",
        "source": "DYNAMIC_TOPIC_CONFIG"
      },
      {
        "name": "min.insync.replicas",
        "value": "2",
        "source": "STATIC_BROKER_CONFIG"
      },
      {
        "name": "min.insync.replicas",
        "value": "1",
        "source": "DEFAULT_CONFIG"
      }
    ],
    "topic_name": "topic-a",
    "is_default": false
  },
  {
    "kind": "KafkaTopicConfig",
    "metadata": {
      "self": "https://pkc-xxxx.us-east1.gcp.confluent.cloud/kafka/v3/clusters/lkc-xxxx/topics/topic-a/configs/retention.ms",
      "resource_name": "crn:///kafka=lkc-xxxx/topic=topic-a/config=retention.ms"
    },
    "cluster_id": "lkc-xxxx",
    "name": "retention.ms",
    "value": "604800000",
    "is_read_only": false,
    "is_sensitive": false,
    "source": "DYNAMIC_TOPIC_CONFIG",
    "synonyms": [
      {
        "name": "retention.ms",
        "value": "604800000",
        "source": "DYNAMIC_TOPIC_CONFIG"
      }
    ],
    "topic_name": "topic-a",
    "is_default": false
  },
  {
    "kind": "KafkaTopicConfig",
    "metadata": {
      "self": "https://pkc-xxxx.us-east1.gcp.confluent.cloud/kafka/v3/clusters/lkc-xxxx/topics/topic-a/configs/message.timestamp.type",
      "resource_name": "crn:///kafka=lkc-xxxx/topic=topic-a/config=message.timestamp.type"
    },
    "cluster_id": "lkc-xxxx",
    "name": "message.timestamp.type",
    "value": "CreateTime",
    "is_read_only": false,
    "is_sensitive": false,
    "source": "DYNAMIC_TOPIC_CONFIG",
    "synonyms": [
      {
        "name": "message.timestamp.type",
        "value": "CreateTime",
        "source": "DYNAMIC_TOPIC_CONFIG"
      },
      {
        "name": "log.message.timestamp.type",
        "value": "CreateTime",
        "source": "DEFAULT_CONFIG"
      }
    ],
    "topic_name": "topic-a",
    "is_default": false
  },
  {
    "kind": "KafkaTopicConfig",
    "metadata": {
      "self": "https://pkc-xxxx.us-east1.gcp.confluent.cloud/kafka/v3/clusters/lkc-xxxx/topics/topic-a/configs/segment.bytes",
      "resource_name": "crn:///kafka=lkc-xxxx/topic=topic-a/config=segment.bytes"
    },
    "cluster_id": "lkc-xxxx",
    "name": "segment.bytes",
    "value": "536870912",
    "is_read_only": false,
    "is_sensitive": false,
    "source": "DYNAMIC_TOPIC_CONFIG",
    "synonyms": [
      {
        "name": "segment.bytes",
        "value": "536870912",
        "source": "DYNAMIC_TOPIC_CONFIG"
      },
      {
        "name": "log.segment.bytes",
        "value": "104857600",
        "source": "STATIC_BROKER_CONFIG"
      },
      {
        "name": "log.segment.bytes",
        "value": "1073741824",
        "source": "DEFAULT_CONFIG"
      }
    ],
    "topic_name": "topic-a",
    "is_default": false
  },
  {
    "kind": "KafkaTopicConfig",
    "metadata": {
      "self": "https://pkc-xxxx.us-east1.gcp.confluent.cloud/kafka/v3/clusters/lkc-xxxx/topics/topic-a/configs/max.message.bytes",
      "resource_name": "crn:///kafka=lkc-xxxx/topic=topic-a/config=max.message.bytes"
    },
    "cluster_id": "lkc-xxxx",
    "name": "max.message.bytes",
    "value": "2097164",
    "is_read_only": false,
    "is_sensitive": false,
    "source": "STATIC_BROKER_CONFIG",
    "synonyms": [
      {
        "name": "message.max.bytes",
        "value": "2097164",
        "source": "STATIC_BROKER_CONFIG"
      },
      {
        "name": "message.max.bytes",
        "value": "1048588",
        "source": "DEFAULT_CONFIG"
      }
    ],
    "topic_name": "topic-a",
    "is_default": false
  }
robertusnegoro commented 2 years ago

@linouk23 I saved the config values before and after a "re-apply" and compared them using the diff command. Here is what I got:

❯ diff before-apply.json after-apply.json
221c221
<       "source": "STATIC_BROKER_CONFIG",
---
>       "source": "DYNAMIC_TOPIC_CONFIG",
223a224,228
>           "name": "max.message.bytes",
>           "value": "2097164",
>           "source": "DYNAMIC_TOPIC_CONFIG"
>         },
>         {
bluedog13 commented 2 years ago

I think it's because the topic configuration values below are missing from your code. Include them in the "config" block for the topics.

These are the default Confluent Cloud configurations for topics. If you don't include them while creating the topic, you may see the above behavior.

    "delete.retention.ms"                   = "86400000"
    "max.compaction.lag.ms"                 = "9223372036854775807"
    "message.timestamp.difference.max.ms"   = "9223372036854775807"
    "message.timestamp.type"                = "CreateTime"
    "min.compaction.lag.ms"                 = "0"
    "retention.bytes"                       = "-1"
    "segment.bytes"                         = "104857600"
    "segment.ms"                            = "604800000"
robertusnegoro commented 2 years ago

@bluedog13 that even adds to the "supposed to be unchanged" lines, like this:

# confluent_kafka_topic.dev_topics["topic-a"] will be updated in-place
  ~ resource "confluent_kafka_topic" "dev_topics" {
      ~ config           = {
          + "delete.retention.ms"                 = "86400000"
          + "max.compaction.lag.ms"               = "9223372036854775807"
          + "message.timestamp.difference.max.ms" = "9223372036854775807"
          + "min.compaction.lag.ms"               = "0"
          + "retention.bytes"                     = "-1"
          ~ "segment.bytes"                       = "536870912" -> "104857600"
          + "segment.ms"                          = "604800000"
            # (4 unchanged elements hidden)
        }
        id               = "lkc-xxxx/topic-a"
        # (3 unchanged attributes hidden)

        # (2 unchanged blocks hidden)
    }
linouk23 commented 2 years ago

> @linouk23 I saved the config values before and after a "re-apply" and compared them using the diff command. Here is what I got

Thanks for waiting!

Could you try overriding max.message.bytes to its default value of 2097164 when creating a topic (or override it for an existing topic) so that this topic setting becomes DYNAMIC_TOPIC_CONFIG?

robertusnegoro commented 2 years ago

> > @linouk23 I saved the config values before and after a "re-apply" and compared them using the diff command. Here is what I got
>
> Thanks for waiting!
>
> Could you try overriding max.message.bytes to its default value of 2097164 when creating a topic (or override it for an existing topic) so that this topic setting becomes DYNAMIC_TOPIC_CONFIG?

That is actually what I have set right now: max.message.bytes is equal to 2097164.

linouk23 commented 2 years ago

Sounds great! Does the terraform plan command still show any difference?

If yes,

  1. Could you rerun the GET against https://pkc-xxxx.us-east1.gcp.confluent.cloud/kafka/v3/clusters/lkc-xxxx/topics/topic-a/configs and copy the settings for the conflicting values.
  2. Share the config block of the confluent_kafka_topic.
  3. Share the config block from the terraform.tfstate file.
robertusnegoro commented 2 years ago

Plan output:

# confluent_kafka_topic.dev_topics["topic-a"] will be updated in-place
  ~ resource "confluent_kafka_topic" "dev_topics" {
      ~ config           = {
          + "delete.retention.ms"                 = "86400000"
          + "max.compaction.lag.ms"               = "9223372036854775807"
          + "max.message.bytes"                   = "2097164"
          + "message.timestamp.difference.max.ms" = "9223372036854775807"
          + "min.compaction.lag.ms"               = "0"
          + "retention.bytes"                     = "-1"
          + "retention.ms"                        = "604800000"
          + "segment.ms"                          = "604800000"
            # (3 unchanged elements hidden)
        }
        id               = "lkc-xxxx/topic-a"
        # (3 unchanged attributes hidden)

        # (2 unchanged blocks hidden)
    }

TF code:

terraform {
  required_version = "~> 1.2.0"

  required_providers {
    confluent = {
      source  = "confluentinc/confluent"
      version = "1.4.0"
    }
  }
}

locals {
  retention = {
    _7days  = 604800000
    _15days = 1296000000
  }
  kafka_topics = {
    "topic-a" = {
      partitions_count    = 8
      min_insync_replicas = 2
      retention_ms        = local.retention._7days
    }
    "topic-b" = {
      partitions_count    = 16
      min_insync_replicas = 2
      retention_ms        = local.retention._7days
    }
    "topic-c" = {
      partitions_count    = 8
      min_insync_replicas = 2
      retention_ms        = local.retention._15days
    }
  }
}

resource "confluent_kafka_topic" "dev_topics" {
  kafka_cluster {
    id = confluent_kafka_cluster.kafka-cluster-dev.id
  }
  for_each         = local.kafka_topics
  topic_name       = each.key
  partitions_count = each.value.partitions_count
  config = {
    "delete.retention.ms"                 = "86400000"
    "max.compaction.lag.ms"               = "9223372036854775807"
    "max.message.bytes"                   = "2097164"
    "message.timestamp.difference.max.ms" = "9223372036854775807"
    "message.timestamp.type"              = "CreateTime"
    "min.compaction.lag.ms"               = "0"
    "min.insync.replicas"                 = each.value.min_insync_replicas
    "retention.bytes"                     = "-1"
    "retention.ms"                        = each.value.retention_ms
    "segment.bytes"                       = "536870912"
    "segment.ms"                          = "604800000"
  }
  rest_endpoint = confluent_kafka_cluster.kafka-cluster-dev.rest_endpoint
  credentials {
    key    = confluent_api_key.dev_rw_api_key.id
    secret = confluent_api_key.dev_rw_api_key.secret
  }
}

API call to the config definition, for the example conflicting value delete.retention.ms:

{
      "kind": "KafkaTopicConfig",
      "metadata": {
        "self": "https://pkc-xxxx.us-east1.gcp.confluent.cloud/kafka/v3/clusters/lkc-xxxx/topics/topic-a/configs/delete.retention.ms",
        "resource_name": "crn:///kafka=lkc-xxxx/topic=topic-a/config=delete.retention.ms"
      },
      "cluster_id": "lkc-xxxx",
      "name": "delete.retention.ms",
      "value": "86400000",
      "is_read_only": false,
      "is_sensitive": false,
      "source": "DEFAULT_CONFIG",
      "synonyms": [
        {
          "name": "log.cleaner.delete.retention.ms",
          "value": "86400000",
          "source": "DEFAULT_CONFIG"
        }
      ],
      "topic_name": "topic-a",
      "is_default": true
    }

Relevant tfstate block:

{
          "index_key": "topic-a",
          "schema_version": 2,
          "attributes": {
            "config": {
              "delete.retention.ms": "86400000",
              "max.compaction.lag.ms": "9223372036854775807",
              "max.message.bytes": "2097164",
              "message.timestamp.difference.max.ms": "9223372036854775807",
              "message.timestamp.type": "CreateTime",
              "min.compaction.lag.ms": "0",
              "min.insync.replicas": "2",
              "retention.bytes": "-1",
              "retention.ms": "604800000",
              "segment.bytes": "536870912",
              "segment.ms": "604800000"
            },
            "credentials": [
              {
                "key": "redacted",
                "secret": "redacted"
              }
            ],
            "id": "lkc-xxxx/topic-a",
            "kafka_cluster": [
              {
                "id": "lkc-xxxx"
              }
            ],
            "partitions_count": 4,
            "rest_endpoint": "https://pkc-xxxx.us-east1.gcp.confluent.cloud:443",
            "topic_name": "topic-a"
          },
          "sensitive_attributes": [
            [
              {
                "type": "get_attr",
                "value": "credentials"
              },
              {
                "type": "index",
                "value": {
                  "value": 0,
                  "type": "number"
                }
              },
              {
                "type": "get_attr",
                "value": "secret"
              }
            ]
          ],
          "private": "redacted",
          "dependencies": [
            "confluent_api_key.dev_rw_api_key",
            "confluent_kafka_cluster.kafka-cluster-dev"
          ]
        },
linouk23 commented 2 years ago

Thanks for attaching all these snippets! I've got a quick question: is it accurate that you first ran terraform apply to create these 3 topics, then waited for about 20 minutes, and then reran terraform plan and saw this diff? cc @robertusnegoro

If that's correct, seeing DEFAULT_CONFIG for delete.retention.ms is definitely a bit unexpected.
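
A quick way to surface entries whose source is not DYNAMIC_TOPIC_CONFIG, assuming the same hypothetical configs endpoint and credentials as the curl sketch earlier:

✗ curl --silent --user "$API_KEY:$API_SECRET" \
    "https://pkc-xxxx.us-east1.gcp.confluent.cloud/kafka/v3/clusters/lkc-xxxx/topics/topic-a/configs" \
  | jq '.data[] | select(.source != "DYNAMIC_TOPIC_CONFIG") | {name, value, source}'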

robertusnegoro commented 2 years ago

I think it takes less than 10 minutes after terraform apply before I get the diff.

linouk23 commented 2 years ago

@robertusnegoro could you send your Kafka Cluster ID (lkc-xyz123) to cflt-tf-access@confluent.io? I might know where the issue is but need to double-check.

linouk23 commented 2 years ago

@robertusnegoro also, if possible, please share the output of the TF diff that appeared about 10 minutes after the initial apply (or the list of target topic settings that showed a TF diff, with before and after values).

Alternatively, can I use

# confluent_kafka_topic.dev_topics["topic-a"] will be updated in-place
  ~ resource "confluent_kafka_topic" "dev_topics" {
      ~ config           = {
          + "delete.retention.ms"                 = "86400000"
          + "max.compaction.lag.ms"               = "9223372036854775807"
          + "max.message.bytes"                   = "2097164"
          + "message.timestamp.difference.max.ms" = "9223372036854775807"
          + "min.compaction.lag.ms"               = "0"
          + "retention.bytes"                     = "-1"
          + "retention.ms"                        = "604800000"
          + "segment.ms"                          = "604800000"
            # (3 unchanged elements hidden)
        }
        id               = "lkc-xxxx/topic-a"
        # (3 unchanged attributes hidden)

        # (2 unchanged blocks hidden)
    }

as an example of the TF diff (it seems there are quite a few topic settings that were initially ignored by TF) 🤔

The scenario I was specifically looking for is one where a topic setting foo is set to a custom value 123, but later a diff shows up where the expected value is 456.

robertusnegoro commented 2 years ago

So the permanent diff is actually caused by Confluent Replicator. I just realized that Replicator tries to sync the topic configuration too. I ended up setting the exact same retention.ms on both clusters (it had previously been left at the default on the source cluster), and the terraform plan output is clean now.

Sorry for the misleading circumstances. I will close the issue now.
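
For readers who hit the same drift caused by an external actor such as Replicator: per the Confluent Replicator docs, its periodic topic config syncing can be disabled via the topic.config.sync setting. Alternatively, Terraform itself can be told to leave the contested keys alone. A sketch using Terraform's lifecycle ignore_changes, with retention.ms as a purely illustrative key:

resource "confluent_kafka_topic" "dev_topics" {
  # ... arguments exactly as in the examples above ...

  lifecycle {
    # Illustrative only: stop Terraform from reporting drift on a key that an
    # external tool (here, Replicator) keeps rewriting.
    ignore_changes = [
      config["retention.ms"],
    ]
  }
}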