cloudcafetech closed this issue 4 years ago
I was able to fix that issue. But after replication I noticed that the topic names changed in the target (sync-cluster):
[root@prod-cluster kafka]# oc get k
NAME           DESIRED KAFKA REPLICAS   DESIRED ZK REPLICAS
prod-cluster   3                        3
[root@prod-cluster kafka]# oc get ku
NAME                      AUTHENTICATION   AUTHORIZATION
kafka-acl-viewer          tls              simple
order-confirmation-mail   tls              simple
prod-bridge               tls              simple
shipment-api              tls              simple
super-user                tls
webshop                   tls              simple
[root@prod-cluster kafka]# oc get kt
NAME                                     PARTITIONS   REPLICATION FACTOR
mm2-offset-syncs.sync-cluster.internal   1            3
sales                                    10           3
shipments                                10           2
time-tracking                            1            1
users                                    10           3
[root@sync-cluster kafka]# oc get k
NAME           DESIRED KAFKA REPLICAS   DESIRED ZK REPLICAS
sync-cluster   3                        3
[root@sync-cluster kafka]# oc get ku
NAME               AUTHENTICATION   AUTHORIZATION
kafka-acl-viewer   tls              simple
super-user         tls
[root@sync-cluster kafka]# oc get kt
NAME                                                          PARTITIONS   REPLICATION FACTOR
consumer-offsets---84e7a678d08f4bd226872e5cdd4eb527fadc1c6a   50           3
heartbeats                                                    1            3
mirrormaker2-cluster-configs                                  1            3
mirrormaker2-cluster-offsets                                  25           3
mirrormaker2-cluster-status                                   5            3
prod-cluster.checkpoints.internal                             1            3
prod-cluster.sales                                            10           3
prod-cluster.shipments                                        10           3
prod-cluster.time-tracking                                    1            3
prod-cluster.users                                            10           3
One more observation: data is not replicating...
A few questions from my end...
MM2 always renames topics. Since Strimzi 0.20.0 you will be able to use the IdentityReplicationPolicy to keep the names the same (see how it will look in the CR: https://github.com/strimzi/strimzi-kafka-operator/blob/0600fd441a5652718a40b303485a9710f62994d7/examples/mirror-maker/kafka-mirror-maker-2-custom-replication-policy.yaml#L27). But that is not supported in 0.19.
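For reference, a minimal sketch of what that could look like in the KafkaMirrorMaker2 CR once 0.20.0 is available (the class name follows the linked example; the cluster aliases are just this thread's names):

spec:
  mirrors:
    - sourceCluster: "prod-cluster"
      targetCluster: "sync-cluster"
      sourceConnector:
        config:
          # keep the source topic names instead of prefixing them with the cluster alias
          replication.policy.class: "io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy"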
To replicate users / ACLs you would need to copy the KafkaUser. Or alternatively you would need something like OAuth authentication and Keycloak authorization with one central Keycloak server.
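A rough sketch of the copy-the-KafkaUser approach (the kubeconfig context and namespace names here are assumptions, not from this thread):

# export a user from prod-cluster
oc --context prod-cluster -n kafka get kafkauser webshop -o yaml > webshop-user.yaml
# change the strimzi.io/cluster label in the YAML to sync-cluster, then apply it there
oc --context sync-cluster -n kafka apply -f webshop-user.yaml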
You should also be careful about the topics, since MM2 will not replicate the replication factor but will use the target cluster's default instead. You can see how the replication factor changed in your topics in the outputs you copy-pasted above (e.g. shipments went from 2 to 3, and time-tracking from 1 to 3).
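If a specific replication factor matters, one possible workaround is to pre-create the mirrored topic on the target cluster as a KafkaTopic before MM2 does, for example (a sketch; the prefixed name assumes the default replication policy):

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: prod-cluster.shipments
  labels:
    strimzi.io/cluster: sync-cluster
spec:
  partitions: 10
  replicas: 2   # keep the source's replication factor of 2 instead of the target default of 3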
Thanks for the reply.
Once the topic replication was done, I put some data into a source topic, and even after a long wait I observed no data replication into the target topic. So I want to point out that data is not replicating.
As you mentioned, the IdentityReplicationPolicy to keep the names is going to be introduced in 0.20.0. I expect it is not going to be a mandatory feature, because this (renaming topics) helps us prevent topic corruption (due to duplicates) when we do bidirectional MM2.
Yes, that would be optional ... as shown in the example YAML I pointed to.
Any update on the below?
Once the topic replication was done, I put some data into a source topic, and even after a long wait I observed no data replication into the target topic. So I want to point out that data is not replicating.
This may be a manifestation of a Kafka bug that is described here: https://github.com/strimzi/strimzi-kafka-operator/issues/3688
Does data begin replicating properly if you reduce the spec.replicas field to 1? (You may need to wait a few minutes for it to start after reducing the replicas)
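To be concrete, that is the top-level replicas field of the KafkaMirrorMaker2 spec, e.g.:

spec:
  replicas: 1   # down from 3; a single MM2 worker while the linked bug is open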
You mean the replicas (3 to 1)? ... will try.
Can I add refresh.topics.interval.seconds: 30 to the spec.config field and test? Not sure whether it is supported?
To change the refresh.topics.interval.seconds setting, add it to your KafkaMirrorMaker2 CR under spec.mirrors.sourceConnector.config, e.g.:
...
mirrors:
  - sourceCluster: "source"
    sourceConnector:
      config:
        refresh.topics.interval.seconds: 60
Tested, working ...
Thanks, Andrew 👍
I am running two different Kafka clusters (prod-cluster & sync-cluster) in two different Kubernetes clusters. Both are identical apart from names and endpoints; I only added monitoring in prod-cluster. I am trying to run MirrorMaker2 from prod-cluster to sync-cluster.
I generated only the ca.crt from sync-cluster, copied it to prod-cluster, and finally created the secret (sync-cluster-cluster-ca-cert) in prod-cluster using the command: kubectl create secret generic sync-cluster-cluster-ca-cert --from-file=ca.crt
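For anyone following along, the full sequence was roughly this (a sketch; the namespace is an assumption):

# on sync-cluster: extract the cluster CA certificate
oc get secret sync-cluster-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
# on prod-cluster: recreate it as a secret for MirrorMaker2 to reference
kubectl create secret generic sync-cluster-cluster-ca-cert --from-file=ca.crt -n kafka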