
[bitnami/kafka] Kafka MirrorMaker #67025

Closed: florintene closed this issue 1 week ago

florintene commented 2 weeks ago

Name and Version

bitnami/kafka:3.7

What architecture are you using?

amd64

What steps will reproduce the bug?

Hi,

I'm not sure whether this is a bug in the image or a bug in Apache Kafka itself. The issue is that MirrorMaker 2 (MM2) running on Kafka Connect doesn't replicate consumer groups and their offsets.

The same configuration works with bitnami/kafka:3.2 but does not with bitnami/kafka:3.6 or 3.7.

My setup is as follows: the MirrorCheckpointConnector (CPC), MirrorHeartbeatConnector (HBC) and MirrorSourceConnector (MSC) are set up via Kafka Connect.

As a result, topics are being replicated but the consumer groups are not.

I've posted the MM2 settings below. I wouldn't expect this to be a configuration problem, since the same settings work with 3.2 but not with the latest versions, and I couldn't find anything in the docs that might explain this behaviour change.

What is the expected behavior?

Both the topics and the consumer groups are replicated to the target environment.

What do you see instead?

Only the topics are being replicated. There are no errors in the logs, but the consumer groups are not replicated.
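
Concretely, this is roughly what the two sides look like when listing groups (a sketch assuming the compose service names used below and the Kafka CLI tools shipped in the image):

```bash
# Application consumer groups are present on the source...
docker compose exec kafka-source-0 \
  kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list

# ...but never appear on the target with 3.6/3.7 (they do with 3.2).
docker compose exec kafka-target-0 \
  kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list
```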

Additional information

The docker-compose file mainly uses the following:

```yaml
x-kafka-source-env-common: &kafka-source-env-common
  ALLOW_PLAINTEXT_LISTENER: 'yes'
  KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: 'true'
  KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: 0@kafka-source-0:9093,1@kafka-source-1:9093
  KAFKA_KRAFT_CLUSTER_ID: abcdefghijklmnopqrstuv
  KAFKA_CFG_PROCESS_ROLES: controller,broker
  KAFKA_CFG_CONTROLLER_LISTENER_NAMES: CONTROLLER
  KAFKA_CFG_LISTENERS: PLAINTEXT://:9092,CONTROLLER://:9093
  EXTRA_ARGS: "-Xms128m -Xmx256m -javaagent:/opt/jmx-exporter/jmx_prometheus_javaagent-0.20.0.jar=9404:/opt/jmx-exporter/kafka-2_0_0.yml"
```

```yaml
x-kafka-target-env-common: &kafka-target-env-common
  ALLOW_PLAINTEXT_LISTENER: 'yes'
  KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: 'true'
  KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: 0@kafka-target-0:9093,1@kafka-target-1:9093
  KAFKA_KRAFT_CLUSTER_ID: abcdef123jklmnopqrstuv
  KAFKA_CFG_PROCESS_ROLES: controller,broker
  KAFKA_CFG_CONTROLLER_LISTENER_NAMES: CONTROLLER
  KAFKA_CFG_LISTENERS: PLAINTEXT://:9092,CONTROLLER://:9093
  EXTRA_ARGS: "-Xms128m -Xmx256m -javaagent:/opt/jmx-exporter/jmx_prometheus_javaagent-0.20.0.jar=9404:/opt/jmx-exporter/kafka-2_0_0.yml"
```
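
Both clusters otherwise come up healthy; the two-node KRaft quorum on each side can be sanity-checked like this (a sketch using the service names above; kafka-metadata-quorum.sh ships with Kafka 3.3+):

```bash
# Confirm the KRaft quorum is formed on each cluster.
for node in kafka-source-0 kafka-target-0; do
  docker compose exec "$node" \
    kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --status
done
```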

The MM2 connectors are configured as follows:

CPC:

```json
{
  "name": "mm2-cpc-source-0",
  "connector.class": "org.apache.kafka.connect.mirror.MirrorCheckpointConnector",
  "key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
  "value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
  "source.cluster.alias": "source-0",
  "source.cluster.bootstrap.servers": "kafka-source-0:9092,kafka-source-1:9092",
  "target.cluster.alias": "target-0",
  "target.cluster.bootstrap.servers": "kafka-target-0:9092,kafka-target-1:9092",
  "groups": ".*",
  "checkpoints.topic.replication.factor": 2,
  "emit.checkpoints.enabled": true,
  "emit.checkpoints.interval.seconds": 10,
  "refresh.groups.enabled": true,
  "refresh.groups.interval.seconds": 10,
  "sync.group.offsets.enabled": true,
  "sync.group.offsets.interval.seconds": 10,
  "replication.policy.class": "org.apache.kafka.connect.mirror.IdentityReplicationPolicy",
  "topics.exclude": ".*[-.]internal,.*.replica,__.*,.*-config,.*-status,.*-offset",
  "groups.exclude": "console-consumer-.*,connect-.*,__.*"
}
```
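
Each of these configs is submitted to the Connect worker's REST API, roughly like this (a sketch assuming the worker listens on localhost:8083 and the JSON above is saved as cpc.json):

```bash
# Create or update the checkpoint connector; Connect accepts the full
# config map on PUT /connectors/<name>/config.
curl -s -X PUT -H "Content-Type: application/json" \
  --data @cpc.json \
  http://localhost:8083/connectors/mm2-cpc-source-0/config
```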

HBC:

```json
{
  "name": "mm2-hbc-source-0",
  "connector.class": "org.apache.kafka.connect.mirror.MirrorHeartbeatConnector",
  "key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
  "value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
  "source.cluster.alias": "source-0",
  "source.cluster.bootstrap.servers": "kafka-source-0:9092,kafka-source-1:9092",
  "target.cluster.alias": "target-0",
  "target.cluster.bootstrap.servers": "kafka-target-0:9092,kafka-target-1:9092",
  "emit.heartbeats.enabled": true,
  "emit.heartbeats.interval.seconds": 5,
  "heartbeats.topic.replication.factor": 2,
  "replication.policy.class": "org.apache.kafka.connect.mirror.IdentityReplicationPolicy"
}
```
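
The heartbeat path can be sanity-checked by describing the heartbeats topic (a sketch with the service names above; with IdentityReplicationPolicy the topic keeps its plain name, and I check both sides to be safe):

```bash
# The heartbeats topic should exist and be receiving records if the
# MirrorHeartbeatConnector is running.
for node in kafka-source-0 kafka-target-0; do
  docker compose exec "$node" \
    kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic heartbeats
done
```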

MSC:

```json
{
  "name": "mm2-msc-source-0",
  "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
  "key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
  "value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
  "source.cluster.alias": "source-0",
  "source.cluster.bootstrap.servers": "kafka-source-0:9092,kafka-source-1:9092",
  "target.cluster.alias": "target-0",
  "target.cluster.bootstrap.servers": "kafka-target-0:9092,kafka-target-1:9092",
  "replication.policy.class": "org.apache.kafka.connect.mirror.IdentityReplicationPolicy",
  "offset.lag.max": 10,
  "offset-syncs.topic.replication.factor": 2,
  "refresh.topics.enabled": true,
  "refresh.topics.interval.seconds": 5,
  "refresh.groups.enabled": true,
  "refresh.groups.interval.seconds": 10,
  "topics.exclude": ".*[-.]internal,.*.replica,__.*,.*-config,.*-status,.*-offset",
  "emit.checkpoints.enabled": true,
  "groups.exclude": "console-consumer-.*,connect-.*,__.*",
  "consumer.auto.offset.reset": "latest",
  "replication.factor": 2,
  "sync.topic.acls.enabled": true,
  "sync.topic.acls.interval.seconds": 600,
  "sync.topic.configs.enabled": true,
  "sync.topic.configs.interval.seconds": 5,
  "topics": ".*",
  "groups": ".*",
  "tasks.max": 1,
  "consumer.group.id": ".*"
}
```
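
Connector and task state can also be inspected through the Connect REST API; a silently FAILED checkpoint task would explain missing groups without any broker-side errors (a sketch assuming the worker on localhost:8083):

```bash
# Print state for each connector and its tasks.
for c in mm2-cpc-source-0 mm2-hbc-source-0 mm2-msc-source-0; do
  curl -s "http://localhost:8083/connectors/$c/status"
  echo
done
```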

Happy to close this if it's not relevant here and we believe it's an Apache Kafka issue.

javsalgar commented 2 weeks ago

Hi,

From our side, we did not make any major changes to the Kafka configuration, so I'm not sure the issue has to do with the Bitnami packaging of Kafka. My advice would be to check first with the upstream Kafka devs to see if it's a change on their side.

florintene commented 2 weeks ago

Hi @javsalgar,

Thank you for the reply. I'll follow up with the Kafka devs. I think we can close this one.

Thanks