kafka-ops / julie

A solution to help you build automation and GitOps in your Apache Kafka deployments. The Kafka GitOps!

Julie removes Placement Constraints #501

Open Fobhep opened 2 years ago

Fobhep commented 2 years ago

Describe the bug
The broker has a default placement constraint for new topics. JulieOps respects it when deploying a new "blank" topic, but removes it when run again.

To Reproduce
Deploy this descriptor:

context: "context"
source: "src"
projects:
  - name: "name"
    topics:
      - name: "topic2"

and check the config with kafka-topics:

kafka-topics --bootstrap-server broker1-participant-0.kafka:9093 --command-config julie.properties --describe --topic context.src.name.topic2

Topic: context.src.name.topic2  TopicId: urMxYNH_SSOGbV7sr1LVog PartitionCount: 1   ReplicationFactor: 3    Configs: compression.type=snappy,min.insync.replicas=2,segment.bytes=1073741824,retention.ms=3600000,confluent.placement.constraints={"version":1,"replicas":[{"count":1,"constraints":{"rack":"rack-1"}},{"count":1,"constraints":{"rack":"rack-2"}},{"count":1,"constraints":{"rack":"rack-3"}}],"observers":[]}
    Topic: context.src.name.topic2  Partition: 0    Leader: 1   Replicas: 1,2,3 Isr: 1,2,3  Offline: 

Rerun Julie; the log indicates that the config is going to be deleted:

{
  "Operation" : "com.purbon.kafka.topology.actions.topics.UpdateTopicConfigAction",
  "Topic" : "context.src.name.topic2",
  "Action" : "update",
  "Changes" : {
    "DeletedConfigs" : {
      "confluent.placement.constraints" : "{\"version\":1,\"replicas\":[{\"count\":1,\"constraints\":{\"rack\":\"rack-1\"}},{\"count\":1,\"constraints\":{\"rack\":\"rack-2\"}},{\"count\":1,\"constraints\":{\"rack\":\"rack-3\"}}],\"observers\":[]}"
    }
  }
}

Checking with kafka-topics again confirms that the config was deleted:

Topic: context.src.name.topic2  TopicId: urMxYNH_SSOGbV7sr1LVog PartitionCount: 1   ReplicationFactor: 3    Configs: compression.type=snappy,min.insync.replicas=2,segment.bytes=1073741824,retention.ms=3600000
    Topic: context.src.name.topic2  Partition: 0    Leader: 1   Replicas: 1,2,3 Isr: 1,2,3  Offline: 

Expected behavior
Julie should never delete config that is set by default on the broker side!
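
For context, the logged UpdateTopicConfigAction presumably boils down to an incremental alter-config call like the sketch below, written in plain AdminClient terms rather than actual JulieOps code (the class name is invented here, and the real call would also need the security settings from julie.properties):

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class DeleteConstraintSketch {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    // Security settings from julie.properties omitted for brevity.
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1-participant-0.kafka:9093");
    try (Admin admin = Admin.create(props)) {
      ConfigResource topic =
          new ConfigResource(ConfigResource.Type.TOPIC, "context.src.name.topic2");
      // A config found on the topic but absent from the descriptor gets an
      // OpType.DELETE, which removes the topic-level override; as the describe
      // output above shows, the placement constraint is simply gone afterwards.
      AlterConfigOp deleteConstraints = new AlterConfigOp(
          new ConfigEntry("confluent.placement.constraints", ""),
          AlterConfigOp.OpType.DELETE);
      admin.incrementalAlterConfigs(Map.of(topic, List.of(deleteConstraints))).all().get();
    }
  }
}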


purbon commented 2 years ago

Thanks a lot for your report @Fobhep, as always very much appreciated. My current way of thinking is to introduce something like https://github.com/kafka-ops/julie/blob/master/src/main/java/com/purbon/kafka/topology/Constants.java#L6, but for configs.
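
A minimal sketch of what such a list could look like (all names here are hypothetical, merely modeled on the linked Constants class):

import java.util.List;
import org.apache.kafka.clients.admin.ConfigEntry;

public class ProtectedConfigs {
  // Hypothetical: topic configs JulieOps would treat as broker-managed and
  // never schedule for deletion, even when absent from the descriptor.
  public static final List<String> PROTECTED_TOPIC_CONFIGS =
      List.of("confluent.placement.constraints");

  // Would be applied while computing the config diff, dropping protected
  // entries from the set of configs to delete.
  public static boolean deletable(ConfigEntry entry) {
    return !PROTECTED_TOPIC_CONFIGS.contains(entry.name());
  }
}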

In your case, this config was introduced automatically, either by the cluster or by an external tool. Which case is yours?
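
One way to tell the two cases apart is to ask the broker where the entry comes from: ConfigEntry.source() distinguishes a topic-level override from an inherited broker default. A diagnostic sketch, reusing the julie.properties from the repro (class name invented here):

import java.io.FileInputStream;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class ConfigSourceCheck {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    // julie.properties carries the security settings; add bootstrap.servers
    // in case it is not already in there.
    props.load(new FileInputStream("julie.properties"));
    props.putIfAbsent(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
        "broker1-participant-0.kafka:9093");
    try (Admin admin = Admin.create(props)) {
      ConfigResource topic =
          new ConfigResource(ConfigResource.Type.TOPIC, "context.src.name.topic2");
      Config config = admin.describeConfigs(List.of(topic)).all().get().get(topic);
      // DYNAMIC_TOPIC_CONFIG means the value is written onto the topic itself
      // (e.g. at creation time); broker-level or default sources mean it is
      // inherited, and a topic-level delete would typically be a no-op.
      config.entries().forEach(e ->
          System.out.printf("%s = %s (source: %s)%n", e.name(), e.value(), e.source()));
    }
  }
}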

Thanks a lot for your continuous help in the project.

purbon commented 2 years ago

A question: why not manage the placement constraints with JulieOps?

---
context: "o"
projects:
  - name: "f"
    consumers:
      - principal: "User:NewApp2"
    topics:
      - name: "t"
        config:
          confluent.placement.constraints:  "{\"version\":1,\"replicas\":[{\"count\":1,\"constraints\":{\"rack\":\"rack-1\"}},{\"count\":1,\"constraints\":{\"rack\":\"rack-2\"}}],\"observers\":[]}"
$ docker exec kafka kafka-topics --bootstrap-server kafka:29092 \
                  --describe --topic o.f.t
Topic: o.f.t    TopicId: dJImanTbSd2sbUjLVDMoVA PartitionCount: 1   ReplicationFactor: 2    Configs: confluent.placement.constraints={"version":1,"replicas":[{"count":1,"constraints":{"rack":"rack-1"}},{"count":1,"constraints":{"rack":"rack-2"}}],"observers":[]}
    Topic: o.f.t    Partition: 0    Leader: 1   Replicas: 1,2   Isr: 1,2    Offline:

Do you see any operational limitation with this approach? I understand the issue, and I have tested that when the config is declared in the descriptor, it is not deleted.

What do you think?

Removing the bug label for now until we're clear about the reasons and causes behind the issue.

purbon commented 2 years ago

related to https://github.com/kafka-ops/julie/issues/241