steven0711dong closed this issue 1 year ago
I think it should be possible to run it with kn or YAML files?
And how do you generate the workload, so we test the same? I will try to run it with Strimzi.
To send events to your Kafka instance topic:
Clone this repo: https://github.com/steven0711dong/KafkaProducer. The README contains instructions on how to send events.
Were you able to utilize this?
I have got this setup ready to test with Strimzi topic with 30 partitions:
https://github.com/aslom/repdroduce-kafka-source
I am now working to get the Kafka producer to run with Strimzi setup
@steven0711dong what parameters do you use for KafkaProducer when testing different scenarios?
I got the basic scenario to work with Strimzi:
TOPIC=topic30 EVENTCT=1 PASSWORD="" go run ./KafkaProducer
total messages produced is: 1
total errored is: 0
Producer closed
It now works end-to-end with the setup and Kafka workload job by simply doing
ko apply -f scenario1
and we can add more scenarios such as scenario-duplicates
trying to reproduce #2022 and others
@aslom The KafkaProducer only sends events; if you want to change the number of events sent, you can modify EVENTCT. It then sends the events in an async manner. Does this answer your question?
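The "send EVENTCT events asynchronously and report totals" behavior described above can be sketched as a toy model (this is an illustration, not the actual code from the KafkaProducer repo; `send_event` stands in for a real produce call):

```python
import asyncio

async def send_event(i: int) -> bool:
    # Stand-in for an async produce call; a real producer would await
    # the broker ack here and could return False on error.
    await asyncio.sleep(0)
    return True

async def produce(event_ct: int) -> tuple[int, int]:
    # Fire all sends concurrently, then tally successes and errors,
    # mirroring the "total messages produced / total errored" output above.
    results = await asyncio.gather(*(send_event(i) for i in range(event_ct)))
    produced = sum(results)
    errored = event_ct - produced
    print(f"total messages produced is: {produced}")
    print(f"total errored is: {errored}")
    return produced, errored

if __name__ == "__main__":
    asyncio.run(produce(5000))
```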
Test setup to reproduce the duplicate issue:
1. use a topic with 50 partitions
2. set the sink delay to 10 seconds (or anything above 6 seconds)
3. send 5000 events
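That setup could look roughly like the manifests below (a sketch, not the actual files from the duplicates1 directory; the names follow the outputs elsewhere in this thread, and the sink delay is assumed to be configured on the event-display service itself):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: topic50
  namespace: kafka
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 50
  replicas: 3
---
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-src50
  namespace: k9
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092
  consumerGroup: kafka-src50b
  topics:
    - topic50
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display   # sink assumed to delay ~10s per event
```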
For the sink delay of 10 seconds, is the environment variable delay set to 10?
And to send 5000 events, is it EVENTCT?
I have a duplicates scenario with those parameters: https://github.com/aslom/repdroduce-kafka-source/tree/main/duplicates1
Run with
ko apply -f duplicates1
So far I am not seeing duplicates; however, my Kafka source setup is a bit behind main and may be missing configmap parameters @steven0711dong?
https://github.com/steven0711dong/eventing-kafka-broker/commit/d8e4773f7eb365cb42b81156c64f9c961fd472c5 <-- Steven's configmap changes
Running tests with the latest GitHub main branch without rate-limiting enabled. The setup used and how to run it: https://github.com/aslom/repdroduce-kafka-source/tree/main/duplicates1
ko apply -f duplicates1
got
kubectl run -it --rm --image=curlimages/curl curl -- sh
If you don't see a command prompt, try pressing enter.
/ $ curl http://kafkascraper/stats
source kafka-src50 received: 188 messages, 415 are dups and 0 non 200s.
and in logs:
k -n knative-eventing logs -l=app=kafka-source-dispatcher -f
{"@timestamp":"2022-06-07T17:34:42.56Z","@version":"1","message":"Removed egress egress.uid=c6f1a578-5606-4f0b-980d-57a2afc35390 resource.uid=c6f1a578-5606-4f0b-980d-57a2afc35390","logger_name":"dev.knative.eventing.kafka.broker.dispatcher.main.ConsumerDeployerVerticle","thread_name":"vert.x-eventloop-thread-0","level":"INFO","level_value":20000,"egress.uid":"c6f1a578-5606-4f0b-980d-57a2afc35390","resource.uid":"c6f1a578-5606-4f0b-980d-57a2afc35390"}
{"@timestamp":"2022-06-07T17:34:42.561Z","@version":"1","message":"Reconciled contract generation contractGeneration=9","logger_name":"dev.knative.eventing.kafka.broker.core.reconciler.impl.ResourcesReconcilerMessageHandler","thread_name":"vert.x-eventloop-thread-0","level":"INFO","level_value":20000,"contractGeneration":9}
{"@timestamp":"2022-06-07T17:34:42.562Z","@version":"1","message":"[Consumer clientId=consumer-kafka-src50b-4, groupId=kafka-src50b] (Re-)joining group","logger_name":"org.apache.kafka.clients.consumer.internals.AbstractCoordinator","thread_name":"vert.x-kafka-consumer-thread-3","level":"INFO","level_value":20000}
{"@timestamp":"2022-06-07T17:34:42.566Z","@version":"1","message":"[Consumer clientId=consumer-kafka-src50b-4, groupId=kafka-src50b] (Re-)joining group","logger_name":"org.apache.kafka.clients.consumer.internals.AbstractCoordinator","thread_name":"vert.x-kafka-consumer-thread-3","level":"INFO","level_value":20000}
{"@timestamp":"2022-06-07T17:34:45.578Z","@version":"1","message":"[Consumer clientId=consumer-kafka-src50b-4, groupId=kafka-src50b] Successfully joined group with generation Generation{generationId=3, memberId='consumer-kafka-src50b-4-7763b067-f23c-426a-8219-c3da8418fb7e', protocol='range'}","logger_name":"org.apache.kafka.clients.consumer.internals.AbstractCoordinator","thread_name":"vert.x-kafka-consumer-thread-3","level":"INFO","level_value":20000}
{"@timestamp":"2022-06-07T17:34:45.578Z","@version":"1","message":"[Consumer clientId=consumer-kafka-src50b-4, groupId=kafka-src50b] Finished assignment for group at generation 3: {consumer-kafka-src50b-4-7763b067-f23c-426a-8219-c3da8418fb7e=Assignment(partitions=[topic50-0])}","logger_name":"org.apache.kafka.clients.consumer.internals.ConsumerCoordinator","thread_name":"vert.x-kafka-consumer-thread-3","level":"INFO","level_value":20000}
{"@timestamp":"2022-06-07T17:34:45.612Z","@version":"1","message":"[Consumer clientId=consumer-kafka-src50b-4, groupId=kafka-src50b] Successfully synced group in generation Generation{generationId=3, memberId='consumer-kafka-src50b-4-7763b067-f23c-426a-8219-c3da8418fb7e', protocol='range'}","logger_name":"org.apache.kafka.clients.consumer.internals.AbstractCoordinator","thread_name":"kafka-coordinator-heartbeat-thread | kafka-src50b","level":"INFO","level_value":20000}
{"@timestamp":"2022-06-07T17:34:45.613Z","@version":"1","message":"[Consumer clientId=consumer-kafka-src50b-4, groupId=kafka-src50b] Notifying assignor about the new Assignment(partitions=[topic50-0])","logger_name":"org.apache.kafka.clients.consumer.internals.ConsumerCoordinator","thread_name":"vert.x-kafka-consumer-thread-3","level":"INFO","level_value":20000}
{"@timestamp":"2022-06-07T17:34:45.613Z","@version":"1","message":"[Consumer clientId=consumer-kafka-src50b-4, groupId=kafka-src50b] Adding newly assigned partitions: topic50-0","logger_name":"org.apache.kafka.clients.consumer.internals.ConsumerCoordinator","thread_name":"vert.x-kafka-consumer-thread-3","level":"INFO","level_value":20000}
{"@timestamp":"2022-06-07T17:34:45.617Z","@version":"1","message":"[Consumer clientId=consumer-kafka-src50b-4, groupId=kafka-src50b] Setting offset for partition topic50-0 to the committed offset FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[my-cluster-kafka-2.my-cluster-kafka-brokers.kafka.svc:9092 (id: 2 rack: null)], epoch=0}}","logger_name":"org.apache.kafka.clients.consumer.internals.ConsumerCoordinator","thread_name":"vert.x-kafka-consumer-thread-3","level":"INFO","level_value":20000}
{"@timestamp":"2022-06-07T17:35:05.038Z","@version":"1","message":"Failed to commit topic partition topicPartition=TopicPartition{topic=topic50, partition=0} offset offset=93","logger_name":"dev.knative.eventing.kafka.broker.dispatcher.impl.consumer.OffsetManager","thread_name":"vert.x-eventloop-thread-3","level":"ERROR","level_value":40000,"stack_trace":"org.apache.kafka.clients.consumer.CommitFailedException: Offset commit cannot be completed since the consumer is not part of an active group for auto partition assignment; it is likely that the consumer was kicked out of the group.\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:1139)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:1004)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1490)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1438)\n\tat io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$commit$29(KafkaReadStreamImpl.java:611)\n\tat io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$submitTaskWhenStarted$3(KafkaReadStreamImpl.java:135)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)\n\tat java.base/java.util.concurrent.FutureTask.run(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)\n\tat java.base/java.lang.Thread.run(Unknown Source)\n","topicPartition":{"topic":"topic50","partition":0},"offset":93}
{"@timestamp":"2022-06-07T17:31:57.95Z","@version":"1","message":"Reconcile egress diff DiffResult{added=[], intersection=[], removed=[cfb6918c-cf63-440d-81b5-2c2b043a2913]}","logger_name":"dev.knative.eventing.kafka.broker.core.reconciler.impl.ResourcesReconcilerImpl","thread_name":"vert.x-eventloop-thread-0","level":"INFO","level_value":20000}
{"@timestamp":"2022-06-07T17:31:57.95Z","@version":"1","message":"Stopping consumer","logger_name":"dev.knative.eventing.kafka.broker.dispatcher.impl.consumer.BaseConsumerVerticle","thread_name":"vert.x-eventloop-thread-3","level":"INFO","level_value":20000}
{"@timestamp":"2022-06-07T17:31:57.951Z","@version":"1","message":"Close target=http://event-display.k9.svc.cluster.local inFlightRequests=0","logger_name":"dev.knative.eventing.kafka.broker.dispatcher.impl.http.WebClientCloudEventSender","thread_name":"vert.x-eventloop-thread-3","level":"INFO","level_value":20000,"target":"http://event-display.k9.svc.cluster.local","inFlightRequests":0}
{"@timestamp":"2022-06-07T17:31:57.955Z","@version":"1","message":"[Consumer clientId=consumer-kafka-src50a-3, groupId=kafka-src50a] Member consumer-kafka-src50a-3-92517a85-29ad-434c-8f1f-0a2c2ffe4faa sending LeaveGroup request to coordinator my-cluster-kafka-2.my-cluster-kafka-brokers.kafka.svc:9092 (id: 2147483645 rack: null) due to the consumer is being closed","logger_name":"org.apache.kafka.clients.consumer.internals.AbstractCoordinator","thread_name":"vert.x-kafka-consumer-thread-2","level":"INFO","level_value":20000}
{"@timestamp":"2022-06-07T17:31:57.957Z","@version":"1","message":"Metrics scheduler closed","logger_name":"org.apache.kafka.common.metrics.Metrics","thread_name":"vert.x-kafka-consumer-thread-2","level":"INFO","level_value":20000}
{"@timestamp":"2022-06-07T17:35:05.909Z","@version":"1","message":"[Consumer clientId=consumer-kafka-src50a-14, groupId=kafka-src50a] Attempt to heartbeat failed since group is rebalancing","logger_name":"org.apache.kafka.clients.consumer.internals.AbstractCoordinator","thread_name":"kafka-coordinator-heartbeat-thread | kafka-src50a","level":"INFO","level_value":20000}
{"@timestamp":"2022-06-07T17:35:07.514Z","@version":"1","message":"[Consumer clientId=consumer-kafka-src50a-2, groupId=kafka-src50a] Failing OffsetCommit request since the consumer is not part of an active group","logger_name":"org.apache.kafka.clients.consumer.internals.ConsumerCoordinator","thread_name":"vert.x-kafka-consumer-thread-1","level":"INFO","level_value":20000}
{"@timestamp":"2022-06-07T17:35:07.514Z","@version":"1","message":"Consumer exception","logger_name":"dev.knative.eventing.kafka.broker.dispatcher.impl.consumer.BaseConsumerVerticle","thread_name":"vert.x-kafka-consumer-thread-1","level":"ERROR","level_value":40000,"stack_trace":"org.apache.kafka.clients.consumer.CommitFailedException: Offset commit cannot be completed since the consumer is not part of an active group for auto partition assignment; it is likely that the consumer was kicked out of the group.\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:1139)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:1004)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1490)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1438)\n\tat io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$commit$29(KafkaReadStreamImpl.java:611)\n\tat io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$submitTaskWhenStarted$3(KafkaReadStreamImpl.java:135)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)\n\tat java.base/java.util.concurrent.FutureTask.run(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)\n\tat java.base/java.lang.Thread.run(Unknown Source)\n"}
{"@timestamp":"2022-06-07T17:35:07.514Z","@version":"1","message":"Failed to commit topic partition topicPartition=TopicPartition{topic=topic50, partition=0} offset offset=94","logger_name":"dev.knative.eventing.kafka.broker.dispatcher.impl.consumer.OffsetManager","thread_name":"vert.x-eventloop-thread-2","level":"ERROR","level_value":40000,"stack_trace":"org.apache.kafka.clients.consumer.CommitFailedException: Offset commit cannot be completed since the consumer is not part of an active group for auto partition assignment; it is likely that the consumer was kicked out of the group.\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:1139)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:1004)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1490)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1438)\n\tat io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$commit$29(KafkaReadStreamImpl.java:611)\n\tat io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$submitTaskWhenStarted$3(KafkaReadStreamImpl.java:135)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)\n\tat java.base/java.util.concurrent.FutureTask.run(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)\n\tat java.base/java.lang.Thread.run(Unknown Source)\n","topicPartition":{"topic":"topic50","partition":0},"offset":94}
{"@timestamp":"2022-06-07T17:35:08.042Z","@version":"1","message":"[Consumer clientId=consumer-kafka-src50a-13, groupId=kafka-src50a] Failing OffsetCommit request since the consumer is not part of an active group","logger_name":"org.apache.kafka.clients.consumer.internals.ConsumerCoordinator","thread_name":"vert.x-kafka-consumer-thread-11","level":"INFO","level_value":20000}
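The CommitFailedException above is the usual sign that the consumer was evicted from the group (for example, by taking too long between polls while the slow sink processes events), so its offset commits are rejected and, after rebalancing, the new consumer resumes from the last successfully committed offset. A toy model of why that produces duplicates (assumed numbers for illustration, not the dispatcher's actual code):

```python
# Replay model: records past the last committed offset are processed
# again by the next consumer generation after a rebalance.
def deliveries(total: int, committed: int) -> list[int]:
    first_pass = list(range(total))          # processed, but commit stops at `committed`
    replay = list(range(committed, total))   # redelivered after rebalance
    return first_pass + replay

def count_dups(seen: list[int]) -> int:
    # Count every delivery of an offset beyond the first one.
    unique = set()
    dups = 0
    for offset in seen:
        if offset in unique:
            dups += 1
        else:
            unique.add(offset)
    return dups

if __name__ == "__main__":
    # e.g. 100 records processed, but only offset 93 was committed,
    # like the "Failed to commit ... offset=93" entry above
    print(count_dups(deliveries(100, 93)))  # 7 records are delivered twice
```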
kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.28.0-kafka-3.1.0 --rm=true --restart=Never -- bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --describe --all-groups
If you don't see a command prompt, try pressing enter.
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
__strimzi-topic-operator-kstreams __strimzi_store_topic 0 37 37 0 __strimzi-topic-operator-kstreams-4525f590-ff06-48b2-a9f0-c27adb556a92-StreamThread-1-consumer-d0ca2c9e-f358-4996-898b-017d14811e7b /172.30.196.149 __strimzi-topic-operator-kstreams-4525f590-ff06-48b2-a9f0-c27adb556a92-StreamThread-1-consumer
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
console-consumer-6408 topic50 0 - 0 - console-consumer-ba86ffba-ee01-4790-8caa-50bfa48a862b /172.30.196.184 console-consumer
Consumer group 'kafka-src30a' has no active members.
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
kafka-src30a topic30 0 45234 0 -45234 - - -
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
kafka-src50a topic50 0 100 0 -100 consumer-kafka-src50a-17-7db09cb5-12ab-4b8d-9427-51205b9d0b60 /172.30.125.149 consumer-kafka-src50a-17
kafka-src50a topic30 0 0 0 0 - - -
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
kafka-src50b topic50 0 0 0 0 consumer-kafka-src50b-4-7763b067-f23c-426a-8219-c3da8418fb7e /172.30.196.159 consumer-kafka-src50b-4
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
knative-group100 topic100 0 0 0 0 consumer-knative-group100-10-af8e1e66-68d3-4c2a-aaec-0536c5c29558 /172.30.125.149 consumer-knative-group100-10
pod "kafka-consumer" deleted
I did a second run and am now getting these errors ("Failed to commit topic partition topicPartition=TopicPartition{topic=topic50, partition=0} offset offset=415", logger dev.knative.eventing.kafka.broker.dispatcher.impl.consumer.OffsetManager)
and a lot of duplicates:
kubectl run curl --image=curlimages/curl --rm=true --restart=Never -ti -- http://kafkascraper/stats
source kafka-src50 received: 438 messages, 4789 are dups and 0 non 200s.
pod "curl" deleted
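A stats endpoint like the one above presumably detects duplicates by remembering event ids across deliveries; a minimal sketch of that counting (hypothetical, not the actual kafkascraper code):

```python
from collections import Counter

def stats(event_ids: list[str]) -> str:
    # "received" counts distinct events; "dups" counts extra deliveries
    # beyond the first, matching the scraper's output format above.
    counts = Counter(event_ids)
    received = len(counts)
    dups = sum(c - 1 for c in counts.values())
    return f"received: {received} messages, {dups} are dups"

if __name__ == "__main__":
    print(stats(["a", "b", "a", "a", "c"]))  # received: 3 messages, 2 are dups
```

Note that with this counting, "dups" can exceed "received" (as in the 438 messages / 4789 dups output above) when each event is redelivered many times.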
k -n knative-eventing logs -l=app=kafka-source-dispatcher --max-log-requests 6 -f
...
{"@timestamp":"2022-06-08T15:37:28.092Z","@version":"1","message":"Failed to commit topic partition topicPartition=TopicPartition{topic=topic50, partition=0} offset offset=415","logger_name":"dev.knative.eventing.kafka.broker.dispatcher.impl.consumer.OffsetManager","thread_name":"vert.x-eventloop-thread-3","level":"ERROR","level_value":40000,"stack_trace":"org.apache.kafka.clients.consumer.CommitFailedException: Offset commit cannot be completed since the consumer is not part of an active group for auto partition assignment; it is likely that the consumer was kicked out of the group.\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:1139)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:1004)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1490)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1438)\n\tat io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$commit$29(KafkaReadStreamImpl.java:611)\n\tat io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$submitTaskWhenStarted$3(KafkaReadStreamImpl.java:135)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)\n\tat java.base/java.util.concurrent.FutureTask.run(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)\n\tat java.base/java.lang.Thread.run(Unknown Source)\n","topicPartition":{"topic":"topic50","partition":0},"offset":415}
{"@timestamp":"2022-06-08T15:37:28.657Z","@version":"1","message":"[Consumer clientId=consumer-kafka-src50a-19, groupId=kafka-src50a] Failing OffsetCommit request since the consumer is not part of an active group","logger_name":"org.apache.kafka.clients.consumer.internals.ConsumerCoordinator","thread_name":"vert.x-kafka-consumer-thread-18","level":"INFO","level_value":20000}
{"@timestamp":"2022-06-08T15:37:28.658Z","@version":"1","message":"Consumer exception","logger_name":"dev.knative.eventing.kafka.broker.dispatcher.impl.consumer.BaseConsumerVerticle","thread_name":"vert.x-kafka-consumer-thread-18","level":"ERROR","level_value":40000,"stack_trace":"org.apache.kafka.clients.consumer.CommitFailedException: Offset commit cannot be completed since the consumer is not part of an active group for auto partition assignment; it is likely that the consumer was kicked out of the group.\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:1139)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:1004)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1490)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1438)\n\tat io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$commit$29(KafkaReadStreamImpl.java:611)\n\tat io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$submitTaskWhenStarted$3(KafkaReadStreamImpl.java:135)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)\n\tat java.base/java.util.concurrent.FutureTask.run(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)\n\tat java.base/java.lang.Thread.run(Unknown Source)\n"}
{"@timestamp":"2022-06-08T15:37:28.658Z","@version":"1","message":"Failed to commit topic partition topicPartition=TopicPartition{topic=topic50, partition=0} offset offset=418","logger_name":"dev.knative.eventing.kafka.broker.dispatcher.impl.consumer.OffsetManager","thread_name":"vert.x-eventloop-thread-3","level":"ERROR","level_value":40000,"stack_trace":"org.apache.kafka.clients.consumer.CommitFailedException: Offset commit cannot be completed since the consumer is not part of an active group for auto partition assignment; it is likely that the consumer was kicked out of the group.\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:1139)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:1004)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1490)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1438)\n\tat io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$commit$29(KafkaReadStreamImpl.java:611)\n\tat io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$submitTaskWhenStarted$3(KafkaReadStreamImpl.java:135)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)\n\tat java.base/java.util.concurrent.FutureTask.run(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)\n\tat java.base/java.lang.Thread.run(Unknown Source)\n","topicPartition":{"topic":"topic50","partition":0},"offset":418}
{"@timestamp":"2022-06-08T15:37:28.877Z","@version":"1","message":"[Consumer clientId=consumer-kafka-src50a-20, groupId=kafka-src50a] Failing OffsetCommit request since the consumer is not part of an active group","logger_name":"org.apache.kafka.clients.consumer.internals.ConsumerCoordinator","thread_name":"vert.x-kafka-consumer-thread-19","level":"INFO","level_value":20000}
{"@timestamp":"2022-06-08T15:37:28.877Z","@version":"1","message":"Consumer exception","logger_name":"dev.knative.eventing.kafka.broker.dispatcher.impl.consumer.BaseConsumerVerticle","thread_name":"vert.x-kafka-consumer-thread-19","level":"ERROR","level_value":40000,"stack_trace":"org.apache.kafka.clients.consumer.CommitFailedException: Offset commit cannot be completed since the consumer is not part of an active group for auto partition assignment; it is likely that the consumer was kicked out of the group.\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:1139)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:1004)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1490)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1438)\n\tat io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$commit$29(KafkaReadStreamImpl.java:611)\n\tat io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$submitTaskWhenStarted$3(KafkaReadStreamImpl.java:135)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)\n\tat java.base/java.util.concurrent.FutureTask.run(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)\n\tat java.base/java.lang.Thread.run(Unknown Source)\n"}
{"@timestamp":"2022-06-08T15:37:28.877Z","@version":"1","message":"Failed to commit topic partition topicPartition=TopicPartition{topic=topic50, partition=0} offset offset=415","logger_name":"dev.knative.eventing.kafka.broker.dispatcher.impl.consumer.OffsetManager","thread_name":"vert.x-eventloop-thread-0","level":"ERROR","level_value":40000,"stack_trace":"org.apache.kafka.clients.consumer.CommitFailedException: Offset commit cannot be completed since the consumer is not part of an active group for auto partition assignment; it is likely that the consumer was kicked out of the group.\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:1139)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:1004)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1490)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1438)\n\tat io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$commit$29(KafkaReadStreamImpl.java:611)\n\tat io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$submitTaskWhenStarted$3(KafkaReadStreamImpl.java:135)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)\n\tat java.base/java.util.concurrent.FutureTask.run(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)\n\tat java.base/java.lang.Thread.run(Unknown Source)\n","topicPartition":{"topic":"topic50","partition":0},"offset":415}
It seems all events were consumed
kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.28.0-kafka-3.1.0 --rm=true --restart=Never -- bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --describe --group kafka-src50b
If you don't see a command prompt, try pressing enter.
Consumer group 'kafka-src50b' has no active members.
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
kafka-src50b topic50 0 5000 5000 0 - - -
pod "kafka-consumer" deleted
Investigating a strange problem with Strimzi. I wonder if anybody has seen anything like this before? I delete the topic and then the topic is still there:
k -n kafka get kafkatopics.kafka.strimzi.io topic50
NAME CLUSTER PARTITIONS REPLICATION FACTOR READY
topic50 my-cluster 1 3 True
k delete -f duplicates1/200-strimzi-topic50.yaml
kafkatopic.kafka.strimzi.io "topic50" deleted
k -n kafka get kafkatopics.kafka.strimzi.io topic50
NAME CLUSTER PARTITIONS REPLICATION FACTOR READY
topic50 my-cluster 1 3 True
or
k -n kafka get kafkatopics.kafka.strimzi.io topic50
NAME CLUSTER PARTITIONS REPLICATION FACTOR READY
topic50 my-cluster 50 3 True
k -n kafka delete kafkatopics.kafka.strimzi.io topic50
kafkatopic.kafka.strimzi.io "topic50" deleted
k -n kafka get kafkatopics.kafka.strimzi.io topic50
NAME CLUSTER PARTITIONS REPLICATION FACTOR READY
topic50 my-cluster 1 3 True
and its status looks ok:
k -n kafka get kafkatopics.kafka.strimzi.io topic50 -oyaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
creationTimestamp: "2022-06-08T18:15:42Z"
generation: 1
labels:
strimzi.io/cluster: my-cluster
name: topic50
namespace: kafka
resourceVersion: "875865953"
uid: d4adff95-5175-429c-8190-466ee5b06069
spec:
config: {}
partitions: 1
replicas: 3
topicName: topic50
status:
conditions:
- lastTransitionTime: "2022-06-08T18:15:42.907762Z"
status: "True"
type: Ready
observedGeneration: 1
topicName: topic50
Using a new topic name (topic50a) with replication factor 3 (which is the default), everything works fine and I get 50 partitions ...
kubectl -n kafka run kafka-consumer-groups -ti --image=quay.io/strimzi/kafka:0.28.0-kafka-3.1.0 --rm=true --restart=Never -- bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --describe --group kafka-src50b
If you don't see a command prompt, try pressing enter.
Consumer group 'kafka-src50b' has no active members.
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
kafka-src50b topic50a 25 0 100 100 - - -
kafka-src50b topic50a 16 0 100 100 - - -
kafka-src50b topic50a 28 0 100 100 - - -
...
kafka-src50b topic50a 40 0 100 100 - - -
pod "kafka-consumer-groups" deleted
The only problem is that the data plane seems to be stuck and delivers no events; nothing is read from any of those partitions.
k get kafkasource.sources.knative.dev kafka-src50
NAME TOPICS BOOTSTRAPSERVERS READY REASON AGE
kafka-src50 ["topic50a"] ["my-cluster-kafka-bootstrap.kafka:9092"] True 9m12s
k get kafkasource.sources.knative.dev kafka-src50 -oyaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"sources.knative.dev/v1beta1","kind":"KafkaSource","metadata":{"annotations":{},"name":"kafka-src50","namespace":"k9"},"spec":{"bootstrapServers":["my-cluster-kafka-bootstrap.kafka:9092"],"consumerGroup":"kafka-src50b","sink":{"ref":{"apiVersion":"serving.knative.dev/v1","kind":"Service","name":"event-display"}},"topics":["topic50a"]}}
creationTimestamp: "2022-06-08T18:20:57Z"
generation: 1
name: kafka-src50
namespace: k9
resourceVersion: "875875311"
uid: f515c6a6-ee84-4037-b56a-74a345ec845a
spec:
bootstrapServers:
- my-cluster-kafka-bootstrap.kafka:9092
consumerGroup: kafka-src50b
consumers: 1
initialOffset: latest
net:
sasl:
password: {}
type: {}
user: {}
tls:
caCert: {}
cert: {}
key: {}
sink:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: event-display
namespace: k9
topics:
- topic50a
status:
conditions:
- lastTransitionTime: "2022-06-08T18:21:06Z"
status: "True"
type: ConsumerGroup
- lastTransitionTime: "2022-06-08T18:21:06Z"
status: "True"
type: Ready
- lastTransitionTime: "2022-06-08T18:21:06Z"
status: "True"
type: SinkProvided
consumers: 1
observedGeneration: 1
placements:
- podName: kafka-source-dispatcher-0
vreplicas: 1
selector: eventing.knative.dev/source=kafka-source-controller,eventing.knative.dev/sourceName=kafka-src50
sinkUri: http://event-display.k9.svc.cluster.local
One strange thing is that the last message in the log is about not resolving event-display, and the log has not moved for the last 15 minutes; nothing more in the log.
TZ=Z date
Wed Jun 8 18:35:47 UTC 2022
k -n knative-eventing logs -l=app=kafka-source-dispatcher --max-log-requests 10 -f
...
{"@timestamp":"2022-06-08T18:17:34.053Z","@version":"1","message":"failed to send event to subscriber target=http://event-display.k9.svc.cluster.local","logger_name":"dev.knative.eventing.kafka.broker.dispatcher.impl.http.WebClientCloudEventSender","thread_name":"vert.x-eventloop-thread-0","level":"ERROR","level_value":40000,"stack_trace":"java.net.UnknownHostException: Failed to resolve 'event-display.k9.svc.cluster.local' after 2 queries \n\tat io.netty.resolver.dns.DnsResolveContext.finishResolve(DnsResolveContext.java:1047)\n\tat io.netty.resolver.dns.DnsResolveContext.tryToFinishResolve(DnsResolveContext.java:1000)\n\tat io.netty.resolver.dns.DnsResolveContext.query(DnsResolveContext.java:418)\n\tat io.netty.resolver.dns.DnsResolveContext.tryToFinishResolve(DnsResolveContext.java:971)\n\tat io.netty.resolver.dns.DnsResolveContext.access$700(DnsResolveContext.java:66)\n\tat io.netty.resolver.dns.DnsResolveContext$2.operationComplete(DnsResolveContext.java:471)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)\n\tat io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)\n\tat io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605)\n\tat io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)\n\tat io.netty.resolver.dns.DnsQueryContext.trySuccess(DnsQueryContext.java:216)\n\tat io.netty.resolver.dns.DnsQueryContext.finish(DnsQueryContext.java:208)\n\tat io.netty.resolver.dns.DnsNameResolver$DnsResponseHandler.channelRead(DnsNameResolver.java:1314)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)\n\tat io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe.read(AbstractNioMessageChannel.java:97)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)\n\tat io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n\tat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n\tat java.base/java.lang.Thread.run(Unknown Source)\n","target":"http://event-display.k9.svc.cluster.local"}
{"@timestamp":"2022-06-08T18:17:35.617Z","@version":"1","message":"failed to send event to subscriber target=http://event-display.k9.svc.cluster.local","logger_name":"dev.knative.eventing.kafka.broker.dispatcher.impl.http.WebClientCloudEventSender","thread_name":"vert.x-eventloop-thread-3","level":"ERROR","level_value":40000,"stack_trace":"java.net.UnknownHostException: Failed to resolve 'event-display.k9.svc.cluster.local' after 2 queries \n\tat io.netty.resolver.dns.DnsResolveContext.finishResolve(DnsResolveContext.java:1047)\n\tat io.netty.resolver.dns.DnsResolveContext.tryToFinishResolve(DnsResolveContext.java:1000)\n\tat io.netty.resolver.dns.DnsResolveContext.query(DnsResolveContext.java:418)\n\tat io.netty.resolver.dns.DnsResolveContext.tryToFinishResolve(DnsResolveContext.java:971)\n\tat io.netty.resolver.dns.DnsResolveContext.access$700(DnsResolveContext.java:66)\n\tat io.netty.resolver.dns.DnsResolveContext$2.operationComplete(DnsResolveContext.java:471)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)\n\tat io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)\n\tat io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605)\n\tat io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)\n\tat io.netty.resolver.dns.DnsQueryContext.trySuccess(DnsQueryContext.java:216)\n\tat io.netty.resolver.dns.DnsQueryContext.finish(DnsQueryContext.java:208)\n\tat io.netty.resolver.dns.DnsNameResolver$DnsResponseHandler.channelRead(DnsNameResolver.java:1314)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)\n\tat io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe.read(AbstractNioMessageChannel.java:97)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)\n\tat io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n\tat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n\tat java.base/java.lang.Thread.run(Unknown Source)\n","target":"http://event-display.k9.svc.cluster.local"}
{"@timestamp":"2022-06-08T18:17:35.625Z","@version":"1","message":"failed to send event to subscriber target=http://event-display.k9.svc.cluster.local","logger_name":"dev.knative.eventing.kafka.broker.dispatcher.impl.http.WebClientCloudEventSender","thread_name":"vert.x-eventloop-thread-2","level":"ERROR","level_value":40000,"stack_trace":"java.net.UnknownHostException: Failed to resolve 'event-display.k9.svc.cluster.local' after 2 queries \n\tat io.netty.resolver.dns.DnsResolveContext.finishResolve(DnsResolveContext.java:1047)\n\tat io.netty.resolver.dns.DnsResolveContext.tryToFinishResolve(DnsResolveContext.java:1000)\n\tat io.netty.resolver.dns.DnsResolveContext.query(DnsResolveContext.java:418)\n\tat io.netty.resolver.dns.DnsResolveContext.tryToFinishResolve(DnsResolveContext.java:971)\n\tat io.netty.resolver.dns.DnsResolveContext.access$700(DnsResolveContext.java:66)\n\tat io.netty.resolver.dns.DnsResolveContext$2.operationComplete(DnsResolveContext.java:471)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)\n\tat io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)\n\tat io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605)\n\tat io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)\n\tat io.netty.resolver.dns.DnsQueryContext.trySuccess(DnsQueryContext.java:216)\n\tat io.netty.resolver.dns.DnsQueryContext.finish(DnsQueryContext.java:208)\n\tat io.netty.resolver.dns.DnsNameResolver$DnsResponseHandler.channelRead(DnsNameResolver.java:1314)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)\n\tat io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe.read(AbstractNioMessageChannel.java:97)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)\n\tat io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n\tat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n\tat java.base/java.lang.Thread.run(Unknown Source)\n","target":"http://event-display.k9.svc.cluster.local"}
{"@timestamp":"2022-06-08T18:17:35.821Z","@version":"1","message":"failed to send event to subscriber target=http://event-display.k9.svc.cluster.local","logger_name":"dev.knative.eventing.kafka.broker.dispatcher.impl.http.WebClientCloudEventSender","thread_name":"vert.x-eventloop-thread-3","level":"ERROR","level_value":40000,"stack_trace":"java.net.UnknownHostException: Failed to resolve 'event-display.k9.svc.cluster.local' after 2 queries \n\tat io.netty.resolver.dns.DnsResolveContext.finishResolve(DnsResolveContext.java:1047)\n\tat io.netty.resolver.dns.DnsResolveContext.tryToFinishResolve(DnsResolveContext.java:1000)\n\tat io.netty.resolver.dns.DnsResolveContext.query(DnsResolveContext.java:418)\n\tat io.netty.resolver.dns.DnsResolveContext.tryToFinishResolve(DnsResolveContext.java:971)\n\tat io.netty.resolver.dns.DnsResolveContext.access$700(DnsResolveContext.java:66)\n\tat io.netty.resolver.dns.DnsResolveContext$2.operationComplete(DnsResolveContext.java:471)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)\n\tat io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)\n\tat io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605)\n\tat io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)\n\tat io.netty.resolver.dns.DnsQueryContext.trySuccess(DnsQueryContext.java:216)\n\tat io.netty.resolver.dns.DnsQueryContext.finish(DnsQueryContext.java:208)\n\tat io.netty.resolver.dns.DnsNameResolver$DnsResponseHandler.channelRead(DnsNameResolver.java:1314)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)\n\tat io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe.read(AbstractNioMessageChannel.java:97)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)\n\tat io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n\tat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n\tat java.base/java.lang.Thread.run(Unknown Source)\n","target":"http://event-display.k9.svc.cluster.local"}
{"@timestamp":"2022-06-08T18:17:35.937Z","@version":"1","message":"failed to send event to subscriber target=http://event-display.k9.svc.cluster.local","logger_name":"dev.knative.eventing.kafka.broker.dispatcher.impl.http.WebClientCloudEventSender","thread_name":"vert.x-eventloop-thread-0","level":"ERROR","level_value":40000,"stack_trace":"java.net.UnknownHostException: Failed to resolve 'event-display.k9.svc.cluster.local' after 2 queries \n\tat io.netty.resolver.dns.DnsResolveContext.finishResolve(DnsResolveContext.java:1047)\n\tat io.netty.resolver.dns.DnsResolveContext.tryToFinishResolve(DnsResolveContext.java:1000)\n\tat io.netty.resolver.dns.DnsResolveContext.query(DnsResolveContext.java:418)\n\tat io.netty.resolver.dns.DnsResolveContext.tryToFinishResolve(DnsResolveContext.java:971)\n\tat io.netty.resolver.dns.DnsResolveContext.access$700(DnsResolveContext.java:66)\n\tat io.netty.resolver.dns.DnsResolveContext$2.operationComplete(DnsResolveContext.java:471)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)\n\tat io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)\n\tat io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605)\n\tat io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)\n\tat io.netty.resolver.dns.DnsQueryContext.trySuccess(DnsQueryContext.java:216)\n\tat io.netty.resolver.dns.DnsQueryContext.finish(DnsQueryContext.java:208)\n\tat io.netty.resolver.dns.DnsNameResolver$DnsResponseHandler.channelRead(DnsNameResolver.java:1314)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)\n\tat io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe.read(AbstractNioMessageChannel.java:97)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)\n\tat io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n\tat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n\tat java.base/java.lang.Thread.run(Unknown Source)\n","target":"http://event-display.k9.svc.cluster.local"}
{"@timestamp":"2022-06-08T18:17:35.938Z","@version":"1","message":"failed to send event to subscriber target=http://event-display.k9.svc.cluster.local","logger_name":"dev.knative.eventing.kafka.broker.dispatcher.impl.http.WebClientCloudEventSender","thread_name":"vert.x-eventloop-thread-3","level":"ERROR","level_value":40000,"stack_trace":"java.net.UnknownHostException: Failed to resolve 'event-display.k9.svc.cluster.local' after 2 queries \n\tat io.netty.resolver.dns.DnsResolveContext.finishResolve(DnsResolveContext.java:1047)\n\tat io.netty.resolver.dns.DnsResolveContext.tryToFinishResolve(DnsResolveContext.java:1000)\n\tat io.netty.resolver.dns.DnsResolveContext.query(DnsResolveContext.java:418)\n\tat io.netty.resolver.dns.DnsResolveContext.tryToFinishResolve(DnsResolveContext.java:971)\n\tat io.netty.resolver.dns.DnsResolveContext.access$700(DnsResolveContext.java:66)\n\tat io.netty.resolver.dns.DnsResolveContext$2.operationComplete(DnsResolveContext.java:471)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)\n\tat io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)\n\tat io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605)\n\tat io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)\n\tat io.netty.resolver.dns.DnsQueryContext.trySuccess(DnsQueryContext.java:216)\n\tat io.netty.resolver.dns.DnsQueryContext.finish(DnsQueryContext.java:208)\n\tat io.netty.resolver.dns.DnsNameResolver$DnsResponseHandler.channelRead(DnsNameResolver.java:1314)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)\n\tat io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe.read(AbstractNioMessageChannel.java:97)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)\n\tat io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n\tat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n\tat java.base/java.lang.Thread.run(Unknown Source)\n","target":"http://event-display.k9.svc.cluster.local"}
{"@timestamp":"2022-06-08T18:17:36.131Z","@version":"1","message":"failed to send event to subscriber target=http://event-display.k9.svc.cluster.local","logger_name":"dev.knative.eventing.kafka.broker.dispatcher.impl.http.WebClientCloudEventSender","thread_name":"vert.x-eventloop-thread-0","level":"ERROR","level_value":40000,"stack_trace":"java.net.UnknownHostException: Failed to resolve 'event-display.k9.svc.cluster.local' after 2 queries \n\tat io.netty.resolver.dns.DnsResolveContext.finishResolve(DnsResolveContext.java:1047)\n\tat io.netty.resolver.dns.DnsResolveContext.tryToFinishResolve(DnsResolveContext.java:1000)\n\tat io.netty.resolver.dns.DnsResolveContext.query(DnsResolveContext.java:418)\n\tat io.netty.resolver.dns.DnsResolveContext.tryToFinishResolve(DnsResolveContext.java:971)\n\tat io.netty.resolver.dns.DnsResolveContext.access$700(DnsResolveContext.java:66)\n\tat io.netty.resolver.dns.DnsResolveContext$2.operationComplete(DnsResolveContext.java:471)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550)\n\tat io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)\n\tat io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)\n\tat io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605)\n\tat io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)\n\tat io.netty.resolver.dns.DnsQueryContext.trySuccess(DnsQueryContext.java:216)\n\tat io.netty.resolver.dns.DnsQueryContext.finish(DnsQueryContext.java:208)\n\tat io.netty.resolver.dns.DnsNameResolver$DnsResponseHandler.channelRead(DnsNameResolver.java:1314)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)\n\tat io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe.read(AbstractNioMessageChannel.java:97)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)\n\tat io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n\tat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n\tat java.base/java.lang.Thread.run(Unknown Source)\n","target":"http://event-display.k9.svc.cluster.local"}
{"@timestamp":"2022-06-08T18:21:06.156Z","@version":"1","message":"Set new contract contractGeneration=234","logger_name":"dev.knative.eventing.kafka.broker.core.reconciler.impl.ResourcesReconcilerMessageHandler","thread_name":"vert.x-eventloop-thread-0","level":"INFO","level_value":20000,"contractGeneration":234}
Thanks for the follow-up. @aslom, can you please attach all the logs from start to finish, and remove serving from the reproducer [1] since it just adds noise and I cannot run it?
Updated reproduce-kafka-source to run eventdisplay as a k8s svc, and I am now running tests with the latest code from the main branch, deployed with ./hack/run.sh deploy-source
Initially I was getting 5 concurrent connections to the sink.
After running k scale --replicas=25 kafkasources/kafka-src50
I started to get 10 concurrent connections, but I am also receiving duplicates:
kubectl run curl --image=curlimages/curl --rm=true --restart=Never -ti -- http://kafkascraper/stats
source kafka-src50 received: 279 messages, 11 are dups and 0 non 200s.
pod "curl" deleted
kubectl run curl --image=curlimages/curl --rm=true --restart=Never -ti -- http://kafkascraper/stats
source kafka-src50 received: 338 messages, 22 are dups and 0 non 200s.
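For reference, duplicates can be counted the way the scraper stats above suggest. This is a hypothetical sketch keyed by (partition, offset), not the actual kafkascraper code: seeing the same key twice means the event was redelivered.

```go
package main

import "fmt"

// eventKey identifies a Kafka record uniquely within a topic.
type eventKey struct {
	partition int32
	offset    int64
}

// stats counts total deliveries and redeliveries (duplicates).
type stats struct {
	seen     map[eventKey]int
	messages int
	dups     int
}

func newStats() *stats {
	return &stats{seen: make(map[eventKey]int)}
}

// record registers one delivery; a repeated (partition, offset) pair
// counts as a duplicate.
func (s *stats) record(partition int32, offset int64) {
	s.messages++
	k := eventKey{partition, offset}
	s.seen[k]++
	if s.seen[k] > 1 {
		s.dups++
	}
}

func main() {
	s := newStats()
	s.record(22, 12)
	s.record(22, 13)
	s.record(22, 12) // redelivery, e.g. after a failed offset commit
	fmt.Printf("received: %d messages, %d are dups\n", s.messages, s.dups)
}
```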
The KafkaSource resource:
k get kafkasource.sources.knative.dev kafka-src50 -oyaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"sources.knative.dev/v1beta1","kind":"KafkaSource","metadata":{"annotations":{},"name":"kafka-src50","namespace":"k9"},"spec":{"bootstrapServers":["my-cluster-kafka-bootstrap.kafka:9092"],"consumerGroup":"kafka-src50c","sink":{"ref":{"apiVersion":"v1","kind":"Service","name":"eventdisplay"}},"topics":["topic50c"]}}
  creationTimestamp: "2022-06-09T15:59:51Z"
  generation: 2
  name: kafka-src50
  namespace: k9
  resourceVersion: "878089754"
  uid: 54d73909-2b21-41f8-9238-0e1603d2ee09
spec:
  bootstrapServers:
  - my-cluster-kafka-bootstrap.kafka:9092
  consumerGroup: kafka-src50c
  consumers: 25
  initialOffset: latest
  net:
    sasl:
      password: {}
      type: {}
      user: {}
    tls:
      caCert: {}
      cert: {}
      key: {}
  sink:
    ref:
      apiVersion: v1
      kind: Service
      name: eventdisplay
      namespace: k9
  topics:
  - topic50c
status:
  conditions:
  - lastTransitionTime: "2022-06-09T16:02:35Z"
    status: "True"
    type: ConsumerGroup
  - lastTransitionTime: "2022-06-09T16:02:35Z"
    status: "True"
    type: Ready
  - lastTransitionTime: "2022-06-09T15:59:52Z"
    status: "True"
    type: SinkProvided
  consumers: 25
  observedGeneration: 2
  placements:
  - podName: kafka-source-dispatcher-0
    vreplicas: 13
  - podName: kafka-source-dispatcher-1
    vreplicas: 12
  selector: eventing.knative.dev/source=kafka-source-controller,eventing.knative.dev/sourceName=kafka-src50
  sinkUri: http://eventdisplay.k9.svc.cluster.local
I will upload logs when finished.
Started to see:
{"@timestamp":"2022-06-09T16:19:52.451Z","@version":"1","message":"Failed to commit topic partition topicPartition=TopicPartition{topic=topic50c, partition=22} offset offset=12","logger_name":"dev.knative.eventing.kafka.broker.dispatcher.impl.consumer.OffsetManager","thread_name":"vert.x-eventloop-thread-3","level":"ERROR","level_value":40000,"stack_trace":"org.apache.kafka.clients.consumer.CommitFailedException: Offset commit cannot be completed since the consumer is not part of an active group for auto partition assignment; it is likely that the consumer was kicked out of the group.\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:1139)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:1004)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1490)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1438)\n\tat io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$commit$29(KafkaReadStreamImpl.java:611)\n\tat io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$submitTaskWhenStarted$3(KafkaReadStreamImpl.java:135)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)\n\tat java.base/java.util.concurrent.FutureTask.run(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)\n\tat java.base/java.lang.Thread.run(Unknown Source)\n","topicPartition":{"topic":"topic50c","partition":22},"offset":12}
So far:
kubectl run curl --image=curlimages/curl --rm=true --restart=Never -ti -- http://kafkascraper/stats
source kafka-src50 received: 923 messages, 147 are dups and 0 non 200s.
And here is a snapshot of the consumer group from the Kafka broker:
kubectl -n kafka run kafka-consumer-groups -ti --image=quay.io/strimzi/kafka:0.28.0-kafka-3.1.0 --rm=true --restart=Never -- bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --describe --group kafka-src50c
If you don't see a command prompt, try pressing enter.
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
kafka-src50c topic50c 27 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 11 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 0 41 100 59 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 7 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 48 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 49 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 2 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 40 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 13 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 23 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 29 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 41 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 6 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 18 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 38 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 17 2 100 98 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 4 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 24 27 100 73 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 33 41 100 59 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 45 16 100 84 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 14 2 100 98 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 34 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 10 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 9 42 100 58 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 25 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 30 43 100 57 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 22 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 21 42 100 58 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 1 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 26 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 8 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 47 2 100 98 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 3 16 100 84 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 37 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 44 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 15 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 20 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 43 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 36 16 100 84 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 39 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 16 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 5 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 46 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 35 2 100 98 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 32 2 100 98 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 12 16 100 84 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 31 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 42 42 100 58 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 19 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 28 3 100 97 consumer-kafka-src50c-1-b5d108b9-fa0f-402f-ac3d-88470c3d0d3f /172.30.125.135 consumer-kafka-src50c-1
pod "kafka-consumer-groups" deleted
Then I scaled to what I think is the maximum for 50 partitions:
k scale --replicas=50 kafkasources/kafka-src50
kafkasource.sources.knative.dev/kafka-src50 scaled
k -n knative-eventing get po|grep kafka-source
kafka-source-controller-9598b4786-p89pw 1/1 Running 0 43m
kafka-source-dispatcher-0 1/1 Running 0 37m
kafka-source-dispatcher-1 1/1 Running 0 32m
kafka-source-dispatcher-2 1/1 Running 0 85s
k get kafkasource.sources.knative.dev kafka-src50 -oyaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"sources.knative.dev/v1beta1","kind":"KafkaSource","metadata":{"annotations":{},"name":"kafka-src50","namespace":"k9"},"spec":{"bootstrapServers":["my-cluster-kafka-bootstrap.kafka:9092"],"consumerGroup":"kafka-src50c","sink":{"ref":{"apiVersion":"v1","kind":"Service","name":"eventdisplay"}},"topics":["topic50c"]}}
creationTimestamp: "2022-06-09T15:59:51Z"
generation: 3
name: kafka-src50
namespace: k9
resourceVersion: "878142589"
uid: 54d73909-2b21-41f8-9238-0e1603d2ee09
spec:
bootstrapServers:
- my-cluster-kafka-bootstrap.kafka:9092
consumerGroup: kafka-src50c
consumers: 50
initialOffset: latest
net:
sasl:
password: {}
type: {}
user: {}
tls:
caCert: {}
cert: {}
key: {}
sink:
ref:
apiVersion: v1
kind: Service
name: eventdisplay
namespace: k9
topics:
- topic50c
status:
conditions:
- lastTransitionTime: "2022-06-09T16:33:17Z"
status: "True"
type: ConsumerGroup
- lastTransitionTime: "2022-06-09T16:33:17Z"
status: "True"
type: Ready
- lastTransitionTime: "2022-06-09T15:59:52Z"
status: "True"
type: SinkProvided
consumers: 50
observedGeneration: 3
placements:
- podName: kafka-source-dispatcher-0
vreplicas: 15
- podName: kafka-source-dispatcher-1
vreplicas: 20
- podName: kafka-source-dispatcher-2
vreplicas: 15
selector: eventing.knative.dev/source=kafka-source-controller,eventing.knative.dev/sourceName=kafka-src50
sinkUri: http://eventdisplay.k9.svc.cluster.local
For a short time I was seeing 15 concurrent connections, then back to 10.
kubectl run curl --image=curlimages/curl --rm=true --restart=Never -ti -- http://kafkascraper/stats
source kafka-src50 received: 1624 messages, 458 are dups and 0 non 200s.
pod "curl" deleted
It seems rebalancing can take some time:
kubectl -n kafka run kafka-consumer-groups -ti --image=quay.io/strimzi/kafka:0.28.0-kafka-3.1.0 --rm=true --restart=Never -- bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --describe --group kafka-src50c
If you don't see a command prompt, try pressing enter.
Warning: Consumer group 'kafka-src50c' is rebalancing.
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
kafka-src50c topic50c 27 3 100 97 - - -
kafka-src50c topic50c 11 16 100 84 - - -
kafka-src50c topic50c 0 48 100 52 - - -
kafka-src50c topic50c 7 3 100 97 - - -
kafka-src50c topic50c 48 3 100 97 - - -
kafka-src50c topic50c 49 3 100 97 - - -
kafka-src50c topic50c 2 3 100 97 - - -
kafka-src50c topic50c 40 3 100 97 - - -
kafka-src50c topic50c 13 3 100 97 - - -
kafka-src50c topic50c 23 16 100 84 - - -
kafka-src50c topic50c 29 44 100 56 - - -
kafka-src50c topic50c 41 44 100 56 - - -
kafka-src50c topic50c 6 3 100 97 - - -
kafka-src50c topic50c 18 3 100 97 - - -
kafka-src50c topic50c 38 43 100 57 - - -
kafka-src50c topic50c 17 40 100 60 - - -
kafka-src50c topic50c 4 3 100 97 - - -
kafka-src50c topic50c 24 35 100 65 - - -
kafka-src50c topic50c 33 49 100 51 - - -
kafka-src50c topic50c 45 16 100 84 - - -
kafka-src50c topic50c 14 2 100 98 - - -
kafka-src50c topic50c 34 3 100 97 - - -
kafka-src50c topic50c 10 3 100 97 - - -
kafka-src50c topic50c 9 50 100 50 - - -
kafka-src50c topic50c 25 3 100 97 - - -
kafka-src50c topic50c 30 51 100 49 - - -
kafka-src50c topic50c 22 3 100 97 - - -
kafka-src50c topic50c 21 50 100 50 - - -
kafka-src50c topic50c 1 3 100 97 - - -
kafka-src50c topic50c 26 3 100 97 - - -
kafka-src50c topic50c 8 44 100 56 - - -
kafka-src50c topic50c 47 2 100 98 - - -
kafka-src50c topic50c 3 16 100 84 - - -
kafka-src50c topic50c 37 3 100 97 - - -
kafka-src50c topic50c 44 16 100 84 - - -
kafka-src50c topic50c 15 3 100 97 - - -
kafka-src50c topic50c 20 30 100 70 - - -
kafka-src50c topic50c 43 3 100 97 - - -
kafka-src50c topic50c 36 16 100 84 - - -
kafka-src50c topic50c 39 3 100 97 - - -
kafka-src50c topic50c 16 3 100 97 - - -
kafka-src50c topic50c 5 43 100 57 - - -
kafka-src50c topic50c 46 3 100 97 - - -
kafka-src50c topic50c 35 2 100 98 - - -
kafka-src50c topic50c 32 15 100 85 - - -
kafka-src50c topic50c 12 16 100 84 - - -
kafka-src50c topic50c 31 3 100 97 - - -
kafka-src50c topic50c 42 50 100 50 - - -
kafka-src50c topic50c 19 3 100 97 - - -
kafka-src50c topic50c 28 3 100 97 - - -
pod "kafka-consumer-groups" deleted
Interestingly, events are still being received (as seen in the eventdisplay log) even while rebalancing is happening.
Also seeing a new exception:
{"@timestamp":"2022-06-09T16:38:46.707Z","@version":"1","message":"[Consumer clientId=consumer-kafka-src50c-2, groupId=kafka-src50c] Setting offset for partition topic50c-47 to the committed offset FetchPosition{offset=2, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[my-cluster-kafka-1.my-cluster-kafka-brokers.kafka.svc:9092 (id: 1 rack: null)], epoch=0}}","logger_name":"org.apache.kafka.clients.consumer.internals.ConsumerCoordinator","thread_name":"vert.x-kafka-consumer-thread-1","level":"INFO","level_value":20000}
{"@timestamp":"2022-06-09T16:38:46.707Z","@version":"1","message":"[Consumer clientId=consumer-kafka-src50c-2, groupId=kafka-src50c] Setting offset for partition topic50c-45 to the committed offset FetchPosition{offset=16, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc:9092 (id: 0 rack: null)], epoch=0}}","logger_name":"org.apache.kafka.clients.consumer.internals.ConsumerCoordinator","thread_name":"vert.x-kafka-consumer-thread-1","level":"INFO","level_value":20000}
{"@timestamp":"2022-06-09T16:38:46.707Z","@version":"1","message":"[Consumer clientId=consumer-kafka-src50c-2, groupId=kafka-src50c] Setting offset for partition topic50c-43 to the committed offset FetchPosition{offset=3, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[my-cluster-kafka-2.my-cluster-kafka-brokers.kafka.svc:9092 (id: 2 rack: null)], epoch=0}}","logger_name":"org.apache.kafka.clients.consumer.internals.ConsumerCoordinator","thread_name":"vert.x-kafka-consumer-thread-1","level":"INFO","level_value":20000}
Jun 09, 2022 4:38:49 PM io.vertx.core.impl.ContextImpl
SEVERE: Unhandled exception
java.lang.IndexOutOfBoundsException: bitIndex < 0: -6
at java.base/java.util.BitSet.set(Unknown Source)
at dev.knative.eventing.kafka.broker.dispatcher.impl.consumer.OffsetManager$OffsetTracker.recordNewOffset(OffsetManager.java:266)
at dev.knative.eventing.kafka.broker.dispatcher.impl.consumer.OffsetManager.commit(OffsetManager.java:150)
at dev.knative.eventing.kafka.broker.dispatcher.impl.consumer.OffsetManager.successfullySentToSubscriber(OffsetManager.java:116)
at dev.knative.eventing.kafka.broker.dispatcher.impl.RecordDispatcherImpl.onSubscriberSuccess(RecordDispatcherImpl.java:246)
at dev.knative.eventing.kafka.broker.dispatcher.impl.RecordDispatcherImpl.lambda$onFilterMatching$1(RecordDispatcherImpl.java:227)
at io.vertx.core.impl.future.FutureImpl$1.onSuccess(FutureImpl.java:91)
at io.vertx.core.impl.future.FutureImpl$ListenerArray.onSuccess(FutureImpl.java:262)
at io.vertx.core.impl.future.FutureBase.emitSuccess(FutureBase.java:60)
at io.vertx.core.impl.future.FutureImpl.tryComplete(FutureImpl.java:211)
at io.vertx.core.impl.future.Composition$1.onSuccess(Composition.java:62)
at io.vertx.core.impl.future.FutureBase.emitSuccess(FutureBase.java:60)
at io.vertx.core.impl.future.FutureImpl.addListener(FutureImpl.java:196)
at io.vertx.core.impl.future.Composition.onSuccess(Composition.java:43)
at io.vertx.core.impl.future.FutureImpl$ListenerArray.onSuccess(FutureImpl.java:262)
at io.vertx.core.impl.future.FutureBase.emitSuccess(FutureBase.java:60)
at io.vertx.core.impl.future.FutureImpl.tryComplete(FutureImpl.java:211)
at io.vertx.core.impl.future.PromiseImpl.tryComplete(PromiseImpl.java:23)
at io.vertx.core.Promise.complete(Promise.java:66)
at io.vertx.circuitbreaker.impl.CircuitBreakerImpl.lambda$null$4(CircuitBreakerImpl.java:232)
at io.vertx.core.impl.AbstractContext.dispatch(AbstractContext.java:100)
at io.vertx.core.impl.AbstractContext.dispatch(AbstractContext.java:63)
at io.vertx.core.impl.EventLoopContext.lambda$runOnContext$0(EventLoopContext.java:38)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:469)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Unknown Source)
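The `bitIndex < 0: -6` in the trace points at `OffsetManager$OffsetTracker.recordNewOffset` computing a negative bit index. A minimal sketch (class and field names here are hypothetical, not the actual OffsetManager code) of how a tracker that stores acked offsets in a `BitSet` relative to a base offset can hit this, assuming a rebalance causes redelivery of a record whose offset is below the tracked base:

```java
import java.util.BitSet;

// Hypothetical sketch of a BitSet-based offset tracker; not the real
// dev.knative.eventing.kafka.broker OffsetManager implementation.
public class OffsetTrackerSketch {
    private final BitSet acked = new BitSet();
    private final long baseOffset; // first offset seen after (re)assignment

    OffsetTrackerSketch(long baseOffset) {
        this.baseOffset = baseOffset;
    }

    // Record a successfully delivered offset, stored relative to the base.
    void recordNewOffset(long offset) {
        // If a record with offset < baseOffset arrives (e.g. a redelivery
        // after a rebalance moved the partition position backwards), the
        // index goes negative and BitSet.set throws
        // IndexOutOfBoundsException("bitIndex < 0: ...").
        acked.set((int) (offset - baseOffset));
    }

    public static void main(String[] args) {
        OffsetTrackerSketch tracker = new OffsetTrackerSketch(8);
        tracker.recordNewOffset(10); // fine: sets bit 2
        try {
            tracker.recordNewOffset(2); // 2 - 8 = -6
        } catch (IndexOutOfBoundsException e) {
            System.out.println(e.getMessage()); // prints: bitIndex < 0: -6
        }
    }
}
```

Under this assumption the exception would line up with the rebalance: the consumer sees the committed-offset reset in the log lines above, then an in-flight delivery for a pre-reset record completes and lands below the new base.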
After rebalancing finished:
kubectl -n kafka run kafka-consumer-groups -ti --image=quay.io/strimzi/kafka:0.28.0-kafka-3.1.0 --rm=true --restart=Never -- bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --describe --group kafka-src50c
If you don't see a command prompt, try pressing enter.
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
kafka-src50c topic50c 38 54 100 46 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 27 3 100 97 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 33 49 100 51 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 48 3 100 97 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 26 7 100 93 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 47 2 100 98 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 49 3 100 97 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 43 3 100 97 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 40 3 100 97 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 29 56 100 44 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 41 54 100 46 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 32 19 100 81 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 28 3 100 97 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 45 16 100 84 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 34 3 100 97 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 25 3 100 97 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 30 51 100 49 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 37 3 100 97 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 44 20 100 80 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 36 16 100 84 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 39 3 100 97 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 46 3 100 97 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 35 6 100 94 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 31 3 100 97 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 42 50 100 50 consumer-kafka-src50c-2-21528018-f06c-4b40-a6e3-b2008a05dddc /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 14 2 100 98 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 11 16 100 84 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 0 48 100 52 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 7 9 100 91 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 1 9 100 91 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 8 50 100 50 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 3 16 100 84 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 2 3 100 97 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 13 9 100 91 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 23 16 100 84 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 16 8 100 92 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 5 49 100 51 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 12 16 100 84 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 6 3 100 97 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 19 9 100 91 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 18 3 100 97 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 17 46 100 54 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 4 9 100 91 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 24 35 100 65 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 10 3 100 97 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 9 50 100 50 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 22 3 100 97 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 21 50 100 50 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 15 3 100 97 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 20 37 100 63 consumer-kafka-src50c-1-e62f466c-bf3b-4f3f-ac85-3ff50fd3c403 /172.30.125.135 consumer-kafka-src50c-1
pod "kafka-consumer-groups" deleted
Short summary: it took about 2 hours (started around 2022-06-09T15:57 UTC, finished around 18:00) to process the 5000 events sent. If all 50 partitions had been used with 50 outbound connections it would have taken about 20 minutes (with a 10-second delay the throughput is about 5 events/second, so 1000 seconds, about 16 min, plus some time for rebalancing and initial scaling). All events were received but almost 80% were duplicates:
kubectl run curl --image=curlimages/curl --rm=true --restart=Never -ti -- http://kafkascraper/stats
source kafka-src50 received: 5000 messages, 3959 are dups and 0 non 200s.
I have uploaded logs into this git repo: https://github.com/aslom/reproduce-kafka-source-results/tree/main/duplicates1-run1
@steven0711dong @aavarghese do you see similar results?
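The back-of-envelope arithmetic in the summary above (50 partitions, 10-second sink delay, 5000 events) can be checked directly:

```java
// Throughput estimate for the duplicates1 run; numbers mirror the
// summary above (50 partitions, 10 s sink delay, 5000 events).
public class ThroughputEstimate {
    public static void main(String[] args) {
        int events = 5000;
        int partitions = 50;        // one blocked consumer per partition
        double sinkDelaySec = 10.0; // per-event sink delay

        // 50 consumers each handling 1 event per 10 s -> 5 events/s aggregate.
        double eventsPerSec = partitions / sinkDelaySec;
        double totalSec = events / eventsPerSec; // 1000 s

        System.out.printf("%.0f events/s, %.0f s, about %.1f min%n",
                eventsPerSec, totalSec, totalSec / 60.0);
    }
}
```

With only 25 or fewer consumers active before the scale-ups, aggregate throughput is halved or worse, which is consistent with the run taking roughly 2 hours instead of ~20 minutes.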
All events were processed:
kubectl -n kafka run kafka-consumer-groups -ti --image=quay.io/strimzi/kafka:0.28.0-kafka-3.1.0 --rm=true --restart=Never -- bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --describe --group kafka-src50c
If you don't see a command prompt, try pressing enter.
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
kafka-src50c topic50c 27 100 100 0 consumer-kafka-src50c-1-f81af19b-9ef7-468d-bd2c-1d85b5b0ace3 /172.30.196.142 consumer-kafka-src50c-1
kafka-src50c topic50c 26 100 100 0 consumer-kafka-src50c-1-f81af19b-9ef7-468d-bd2c-1d85b5b0ace3 /172.30.196.142 consumer-kafka-src50c-1
kafka-src50c topic50c 20 100 100 0 consumer-kafka-src50c-1-f81af19b-9ef7-468d-bd2c-1d85b5b0ace3 /172.30.196.142 consumer-kafka-src50c-1
kafka-src50c topic50c 23 100 100 0 consumer-kafka-src50c-1-f81af19b-9ef7-468d-bd2c-1d85b5b0ace3 /172.30.196.142 consumer-kafka-src50c-1
kafka-src50c topic50c 29 100 100 0 consumer-kafka-src50c-1-f81af19b-9ef7-468d-bd2c-1d85b5b0ace3 /172.30.196.142 consumer-kafka-src50c-1
kafka-src50c topic50c 32 100 100 0 consumer-kafka-src50c-1-f81af19b-9ef7-468d-bd2c-1d85b5b0ace3 /172.30.196.142 consumer-kafka-src50c-1
kafka-src50c topic50c 31 100 100 0 consumer-kafka-src50c-1-f81af19b-9ef7-468d-bd2c-1d85b5b0ace3 /172.30.196.142 consumer-kafka-src50c-1
kafka-src50c topic50c 18 100 100 0 consumer-kafka-src50c-1-f81af19b-9ef7-468d-bd2c-1d85b5b0ace3 /172.30.196.142 consumer-kafka-src50c-1
kafka-src50c topic50c 17 100 100 0 consumer-kafka-src50c-1-f81af19b-9ef7-468d-bd2c-1d85b5b0ace3 /172.30.196.142 consumer-kafka-src50c-1
kafka-src50c topic50c 24 100 100 0 consumer-kafka-src50c-1-f81af19b-9ef7-468d-bd2c-1d85b5b0ace3 /172.30.196.142 consumer-kafka-src50c-1
kafka-src50c topic50c 33 100 100 0 consumer-kafka-src50c-1-f81af19b-9ef7-468d-bd2c-1d85b5b0ace3 /172.30.196.142 consumer-kafka-src50c-1
kafka-src50c topic50c 25 100 100 0 consumer-kafka-src50c-1-f81af19b-9ef7-468d-bd2c-1d85b5b0ace3 /172.30.196.142 consumer-kafka-src50c-1
kafka-src50c topic50c 30 100 100 0 consumer-kafka-src50c-1-f81af19b-9ef7-468d-bd2c-1d85b5b0ace3 /172.30.196.142 consumer-kafka-src50c-1
kafka-src50c topic50c 22 100 100 0 consumer-kafka-src50c-1-f81af19b-9ef7-468d-bd2c-1d85b5b0ace3 /172.30.196.142 consumer-kafka-src50c-1
kafka-src50c topic50c 21 100 100 0 consumer-kafka-src50c-1-f81af19b-9ef7-468d-bd2c-1d85b5b0ace3 /172.30.196.142 consumer-kafka-src50c-1
kafka-src50c topic50c 19 100 100 0 consumer-kafka-src50c-1-f81af19b-9ef7-468d-bd2c-1d85b5b0ace3 /172.30.196.142 consumer-kafka-src50c-1
kafka-src50c topic50c 28 100 100 0 consumer-kafka-src50c-1-f81af19b-9ef7-468d-bd2c-1d85b5b0ace3 /172.30.196.142 consumer-kafka-src50c-1
kafka-src50c topic50c 4 100 100 0 consumer-kafka-src50c-1-4aab760a-5f7d-418f-9789-aace1a0a4ecb /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 14 100 100 0 consumer-kafka-src50c-1-4aab760a-5f7d-418f-9789-aace1a0a4ecb /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 11 100 100 0 consumer-kafka-src50c-1-4aab760a-5f7d-418f-9789-aace1a0a4ecb /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 0 100 100 0 consumer-kafka-src50c-1-4aab760a-5f7d-418f-9789-aace1a0a4ecb /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 7 100 100 0 consumer-kafka-src50c-1-4aab760a-5f7d-418f-9789-aace1a0a4ecb /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 1 100 100 0 consumer-kafka-src50c-1-4aab760a-5f7d-418f-9789-aace1a0a4ecb /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 8 100 100 0 consumer-kafka-src50c-1-4aab760a-5f7d-418f-9789-aace1a0a4ecb /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 3 100 100 0 consumer-kafka-src50c-1-4aab760a-5f7d-418f-9789-aace1a0a4ecb /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 2 100 100 0 consumer-kafka-src50c-1-4aab760a-5f7d-418f-9789-aace1a0a4ecb /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 15 100 100 0 consumer-kafka-src50c-1-4aab760a-5f7d-418f-9789-aace1a0a4ecb /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 13 100 100 0 consumer-kafka-src50c-1-4aab760a-5f7d-418f-9789-aace1a0a4ecb /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 16 100 100 0 consumer-kafka-src50c-1-4aab760a-5f7d-418f-9789-aace1a0a4ecb /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 5 100 100 0 consumer-kafka-src50c-1-4aab760a-5f7d-418f-9789-aace1a0a4ecb /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 12 100 100 0 consumer-kafka-src50c-1-4aab760a-5f7d-418f-9789-aace1a0a4ecb /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 6 100 100 0 consumer-kafka-src50c-1-4aab760a-5f7d-418f-9789-aace1a0a4ecb /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 10 100 100 0 consumer-kafka-src50c-1-4aab760a-5f7d-418f-9789-aace1a0a4ecb /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 9 100 100 0 consumer-kafka-src50c-1-4aab760a-5f7d-418f-9789-aace1a0a4ecb /172.30.125.135 consumer-kafka-src50c-1
kafka-src50c topic50c 38 100 100 0 consumer-kafka-src50c-2-5b4a7d95-7fef-4ae3-81d7-8311ca9882bf /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 48 100 100 0 consumer-kafka-src50c-2-5b4a7d95-7fef-4ae3-81d7-8311ca9882bf /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 47 100 100 0 consumer-kafka-src50c-2-5b4a7d95-7fef-4ae3-81d7-8311ca9882bf /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 49 100 100 0 consumer-kafka-src50c-2-5b4a7d95-7fef-4ae3-81d7-8311ca9882bf /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 43 100 100 0 consumer-kafka-src50c-2-5b4a7d95-7fef-4ae3-81d7-8311ca9882bf /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 40 100 100 0 consumer-kafka-src50c-2-5b4a7d95-7fef-4ae3-81d7-8311ca9882bf /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 41 100 100 0 consumer-kafka-src50c-2-5b4a7d95-7fef-4ae3-81d7-8311ca9882bf /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 42 100 100 0 consumer-kafka-src50c-2-5b4a7d95-7fef-4ae3-81d7-8311ca9882bf /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 45 100 100 0 consumer-kafka-src50c-2-5b4a7d95-7fef-4ae3-81d7-8311ca9882bf /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 34 100 100 0 consumer-kafka-src50c-2-5b4a7d95-7fef-4ae3-81d7-8311ca9882bf /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 37 100 100 0 consumer-kafka-src50c-2-5b4a7d95-7fef-4ae3-81d7-8311ca9882bf /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 44 100 100 0 consumer-kafka-src50c-2-5b4a7d95-7fef-4ae3-81d7-8311ca9882bf /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 36 100 100 0 consumer-kafka-src50c-2-5b4a7d95-7fef-4ae3-81d7-8311ca9882bf /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 39 100 100 0 consumer-kafka-src50c-2-5b4a7d95-7fef-4ae3-81d7-8311ca9882bf /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 46 100 100 0 consumer-kafka-src50c-2-5b4a7d95-7fef-4ae3-81d7-8311ca9882bf /172.30.38.241 consumer-kafka-src50c-2
kafka-src50c topic50c 35 100 100 0 consumer-kafka-src50c-2-5b4a7d95-7fef-4ae3-81d7-8311ca9882bf /172.30.38.241 consumer-kafka-src50c-2
pod "kafka-consumer-groups" deleted
Can we monitor what the scheduler is doing? Why are all these rebalances happening? Can we run the same test using the old source? Any difference between them?
I scaled up twice, so there were two rebalances. I started the test about 11:55am and finished about 2pm:
2022-06-09 12:02 k scale --replicas=25 kafkasources/kafka-src50
2022-06-09 12:33 k scale --replicas=50 kafkasources/kafka-src50
I should be able to install and run the old Golang source so we have data to compare.
@aslom @pierDipi Scheduler logs (plus autoscaler) are all in the controller logs. Grepping for these two statements (especially the second, "scheduling successful", which shows the new placements) will give you an idea of how many times the scheduler recalculated placements during the run:
{"level":"info","ts":"2022-06-09T17:19:26.198Z","logger":"kafka-broker-controller","caller":"statefulset/scheduler.go:229","msg":"scaling up with a rebalance (if needed)","knative.dev/pod":"kafka-controller-6fbb58486c-kltql","key":"default/4df1fb03-68c4-4ba8-a6aa-aeb00db80138","vreplicas":0,"new vreplicas":25}
{"level":"info","ts":"2022-06-09T17:19:26.643Z","logger":"kafka-broker-controller","caller":"statefulset/scheduler.go:255","msg":"scheduling successful","knative.dev/pod":"kafka-controller-6fbb58486c-kltql","key":"default/4df1fb03-68c4-4ba8-a6aa-aeb00db80138","placement":[{"podName":"kafka-source-dispatcher-0","vreplicas":9},{"podName":"kafka-source-dispatcher-1","vreplicas":8},{"podName":"kafka-source-dispatcher-2","vreplicas":8}]}
It looks reasonable and is what I expected. Anything else to look at related to Kafka rebalancing?
grep 'scheduling successful' duplicates1-run1/*.txt
duplicates1-run1/kafka-source-controller.txt:{"level":"info","ts":"2022-06-09T15:59:52.278Z","logger":"kafka-source-controller","caller":"statefulset/scheduler.go:255","msg":"scheduling successful","knative.dev/pod":"kafka-source-controller-9598b4786-p89pw","key":"k9/54d73909-2b21-41f8-9238-0e1603d2ee09","placement":[{"podName":"kafka-source-dispatcher-0","vreplicas":1}]}
duplicates1-run1/kafka-source-controller.txt:{"level":"info","ts":"2022-06-09T16:02:35.320Z","logger":"kafka-source-controller","caller":"statefulset/scheduler.go:255","msg":"scheduling successful","knative.dev/pod":"kafka-source-controller-9598b4786-p89pw","key":"k9/54d73909-2b21-41f8-9238-0e1603d2ee09","placement":[{"podName":"kafka-source-dispatcher-0","vreplicas":13},{"podName":"kafka-source-dispatcher-1","vreplicas":12}]}
duplicates1-run1/kafka-source-controller.txt:{"level":"info","ts":"2022-06-09T16:33:16.221Z","logger":"kafka-source-controller","caller":"statefulset/scheduler.go:255","msg":"scheduling successful","knative.dev/pod":"kafka-source-controller-9598b4786-p89pw","key":"k9/54d73909-2b21-41f8-9238-0e1603d2ee09","placement":[{"podName":"kafka-source-dispatcher-0","vreplicas":15},{"podName":"kafka-source-dispatcher-2","vreplicas":15},{"podName":"kafka-source-dispatcher-1","vreplicas":20}]}
grep 'scaling up with a rebalance' duplicates1-run1/*
duplicates1-run1/kafka-source-controller.txt:{"level":"info","ts":"2022-06-09T15:59:52.249Z","logger":"kafka-source-controller","caller":"statefulset/scheduler.go:229","msg":"scaling up with a rebalance (if needed)","knative.dev/pod":"kafka-source-controller-9598b4786-p89pw","key":"k9/54d73909-2b21-41f8-9238-0e1603d2ee09","vreplicas":0,"new vreplicas":1}
duplicates1-run1/kafka-source-controller.txt:{"level":"info","ts":"2022-06-09T16:02:32.986Z","logger":"kafka-source-controller","caller":"statefulset/scheduler.go:229","msg":"scaling up with a rebalance (if needed)","knative.dev/pod":"kafka-source-controller-9598b4786-p89pw","key":"k9/54d73909-2b21-41f8-9238-0e1603d2ee09","vreplicas":1,"new vreplicas":25}
duplicates1-run1/kafka-source-controller.txt:{"level":"info","ts":"2022-06-09T16:02:34.204Z","logger":"kafka-source-controller","caller":"statefulset/scheduler.go:229","msg":"scaling up with a rebalance (if needed)","knative.dev/pod":"kafka-source-controller-9598b4786-p89pw","key":"k9/54d73909-2b21-41f8-9238-0e1603d2ee09","vreplicas":1,"new vreplicas":25}
duplicates1-run1/kafka-source-controller.txt:{"level":"info","ts":"2022-06-09T16:33:04.147Z","logger":"kafka-source-controller","caller":"statefulset/scheduler.go:229","msg":"scaling up with a rebalance (if needed)","knative.dev/pod":"kafka-source-controller-9598b4786-p89pw","key":"k9/54d73909-2b21-41f8-9238-0e1603d2ee09","vreplicas":25,"new vreplicas":50}
duplicates1-run1/kafka-source-controller.txt:{"level":"info","ts":"2022-06-09T16:33:05.877Z","logger":"kafka-source-controller","caller":"statefulset/scheduler.go:229","msg":"scaling up with a rebalance (if needed)","knative.dev/pod":"kafka-source-controller-9598b4786-p89pw","key":"k9/54d73909-2b21-41f8-9238-0e1603d2ee09","vreplicas":25,"new vreplicas":50}
duplicates1-run1/kafka-source-controller.txt:{"level":"info","ts":"2022-06-09T16:33:11.557Z","logger":"kafka-source-controller","caller":"statefulset/scheduler.go:229","msg":"scaling up with a rebalance (if needed)","knative.dev/pod":"kafka-source-controller-9598b4786-p89pw","key":"k9/54d73909-2b21-41f8-9238-0e1603d2ee09","vreplicas":25,"new vreplicas":50}
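The vreplica progression can be pulled out of these JSON log lines programmatically instead of eyeballing grep output; a minimal sketch, assuming the same log format as the lines above (grep prefixes each line with the filename, so we skip to the first `{`):

```python
import json

def vreplica_transitions(lines):
    """Extract (old, new) vreplica counts from 'scaling up with a rebalance' log lines."""
    transitions = []
    for line in lines:
        # grep output prefixes the filename; keep only the JSON payload.
        start = line.find("{")
        if start < 0:
            continue
        entry = json.loads(line[start:])
        if "scaling up with a rebalance" in entry.get("msg", ""):
            transitions.append((entry["vreplicas"], entry["new vreplicas"]))
    return transitions
```

Feeding it the grep output above should yield transitions like `(0, 1)`, `(1, 25)`, `(25, 50)`, which makes repeated scale-up attempts (the same transition logged several times) easy to spot.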
Described how to reproduce the results I was getting: https://github.com/aslom/reproduce-kafka-source-results/blob/main/duplicates1-run1/README.md
Doing a new run based on Ansu's setup: https://github.com/aslom/reproduce-kafka-source-results/blob/main/duplicates1-run2/README.md
For the record, here are the lag offsets I see for the partitions during the test:
kubectl -n kafka run kafka-consumer-groups -ti --image=quay.io/strimzi/kafka:0.28.0-kafka-3.1.0 --rm=true --restart=Never -- bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --describe --group kafka-src50c
If you don't see a command prompt, try pressing enter.
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
kafka-src50c topic50c 38 191 200 9 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 33 114 200 86 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 9 116 200 84 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 48 113 200 87 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 47 113 200 87 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 49 117 200 83 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 40 123 200 77 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 23 114 200 86 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 41 114 200 86 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 12 142 200 58 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 6 142 200 58 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 19 114 200 86 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 45 113 200 87 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 34 117 200 83 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 3 115 200 85 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 37 117 200 83 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 44 143 200 57 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 15 114 200 86 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 43 113 200 87 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 36 113 200 87 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 39 114 200 86 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 46 113 200 87 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 35 113 200 87 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 31 113 200 87 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 42 114 200 86 consumer-kafka-src50c-1-957cd55a-70e5-4bce-800b-9a7a5576d91e /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 27 115 200 85 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 14 145 200 55 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 11 115 200 85 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 0 144 200 56 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 7 115 200 85 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 8 144 200 56 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 2 144 200 56 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 20 145 200 55 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 13 114 200 86 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 16 144 200 56 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 29 118 200 82 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 5 116 200 84 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 32 134 200 66 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 28 191 200 9 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 18 143 200 57 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 17 115 200 85 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 4 143 200 57 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 24 133 200 67 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 10 144 200 56 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 25 125 200 75 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 30 133 200 67 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 22 134 200 66 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 21 116 200 84 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 1 115 200 85 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 26 132 200 68 consumer-kafka-src50c-1-303fced4-6f15-4403-bbe3-54675d657ca0 /172.30.196.183 consumer-kafka-src50c-1
pod "kafka-consumer-groups" deleted
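To quantify how uneven the assignment is across dispatcher pods, the LAG column can be summed per CONSUMER-ID; a rough sketch, assuming the whitespace-separated columnar output of `kafka-consumer-groups.sh --describe` shown above:

```python
from collections import defaultdict

def lag_per_consumer(describe_output):
    """Sum the LAG column per CONSUMER-ID from kafka-consumer-groups.sh --describe output."""
    totals = defaultdict(int)
    for line in describe_output.splitlines():
        cols = line.split()
        # Data rows look like:
        # GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
        if len(cols) >= 7 and cols[2].isdigit():
            totals[cols[6]] += int(cols[5])
    return dict(totals)
```

Running it over the dump above gives one total-lag number per consumer instance, so a widening spread between them over successive snapshots shows the imbalance growing.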
Is anybody else seeing this? I see a lot of these all the time:
{"@timestamp":"2022-06-14T15:59:11.165Z","@version":"1","message":"[Consumer clientId=consumer-kafka-src50c-1, groupId=kafka-src50c] Failing OffsetCommit request since the consumer is not part of an active group","logger_name":"org.apache.kafka.clients.consumer.internals.ConsumerCoordinator","thread_name":"vert.x-kafka-consumer-thread-0","level":"INFO","level_value":20000}
{"@timestamp":"2022-06-14T15:59:11.165Z","@version":"1","message":"Consumer exception","logger_name":"dev.knative.eventing.kafka.broker.dispatcher.impl.consumer.BaseConsumerVerticle","thread_name":"vert.x-kafka-consumer-thread-0","level":"ERROR","level_value":40000,"stack_trace":"org.apache.kafka.clients.consumer.CommitFailedException: Offset commit cannot be completed since the consumer is not part of an active group for auto partition assignment; it is likely that the consumer was kicked out of the group.\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:1139)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:1004)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1490)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1438)\n\tat io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$commit$29(KafkaReadStreamImpl.java:611)\n\tat io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$submitTaskWhenStarted$3(KafkaReadStreamImpl.java:135)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)\n\tat java.base/java.util.concurrent.FutureTask.run(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)\n\tat java.base/java.lang.Thread.run(Unknown Source)\n"}
{"@timestamp":"2022-06-14T15:59:11.165Z","@version":"1","message":"Failed to commit topic partition topicPartition=TopicPartition{topic=topic50c, partition=16} offset offset=165","logger_name":"dev.knative.eventing.kafka.broker.dispatcher.impl.consumer.OffsetManager","thread_name":"vert.x-eventloop-thread-2","level":"ERROR","level_value":40000,"stack_trace":"org.apache.kafka.clients.consumer.CommitFailedException: Offset commit cannot be completed since the consumer is not part of an active group for auto partition assignment; it is likely that the consumer was kicked out of the group.\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:1139)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:1004)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1490)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1438)\n\tat io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$commit$29(KafkaReadStreamImpl.java:611)\n\tat io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$submitTaskWhenStarted$3(KafkaReadStreamImpl.java:135)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)\n\tat java.base/java.util.concurrent.FutureTask.run(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)\n\tat java.base/java.lang.Thread.run(Unknown Source)\n","topicPartition":{"topic":"topic50c","partition":16},"offset":165}
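The CommitFailedException above is the classic symptom of a consumer exceeding `max.poll.interval.ms`: with a slow sink, draining one poll batch can take longer than the poll interval, the broker evicts the consumer from the group, the commit fails, and the uncommitted records get redelivered after rebalance (i.e. duplicates). A back-of-the-envelope check; the 300 s and 500 values are the standard Apache Kafka consumer defaults for `max.poll.interval.ms` and `max.poll.records`, the 10 s delay is the sink delay from the test setup above, and the sequential-processing worst case is an assumption (the vert.x dispatcher pipelines deliveries, so real numbers differ, but the direction is the same):

```python
# Standard Apache Kafka consumer defaults (hedged: check the deployed client config).
MAX_POLL_INTERVAL_MS = 300_000   # 5 minutes
MAX_POLL_RECORDS = 500

SINK_DELAY_MS = 10_000           # 10 s sink delay from the test scenario

# Worst case: a full poll batch processed one record at a time.
batch_time_ms = MAX_POLL_RECORDS * SINK_DELAY_MS
print(batch_time_ms)                         # 5,000,000 ms, i.e. ~83 minutes
print(batch_time_ms > MAX_POLL_INTERVAL_MS)  # True -> consumer gets kicked out of the group
```

Even with heavy pipelining, a 10 s per-event sink delay leaves little headroom, which would explain both the eviction log lines and the duplicate counts.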
When starting the test:
aslom@m:~/Documents/awsm/go/src/github.com/aslom/repdroduce-kafka-source|main ⇒ kubectl run curl --image=curlimages/curl --rm=true --restart=Never -ti -- http://kafkascraper/stats
source kafka-src50 received: 507 messages, 0 are dups and 0 non 200s.
pod "curl" deleted
aslom@m:~/Documents/awsm/go/src/github.com/aslom/repdroduce-kafka-source|main ⇒ kubectl run curl --image=curlimages/curl --rm=true --restart=Never -ti -- http://kafkascraper/stats
source kafka-src50 received: 955 messages, 3 are dups and 0 non 200s.
pod "curl" deleted
kubectl -n kafka run kafka-consumer-groups -ti --image=quay.io/strimzi/kafka:0.28.0-kafka-3.1.0 --rm=true --restart=Never -- bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --describe --group kafka-src50c
If you don't see a command prompt, try pressing enter.
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
kafka-src50c topic50c 18 109 200 91 consumer-kafka-src50c-1-7913a0ab-be93-4636-b13d-86edf263b560 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 14 111 200 89 consumer-kafka-src50c-1-7913a0ab-be93-4636-b13d-86edf263b560 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 0 110 200 90 consumer-kafka-src50c-1-7913a0ab-be93-4636-b13d-86edf263b560 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 22 110 200 90 consumer-kafka-src50c-1-7913a0ab-be93-4636-b13d-86edf263b560 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 26 109 200 91 consumer-kafka-src50c-1-7913a0ab-be93-4636-b13d-86edf263b560 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 8 110 200 90 consumer-kafka-src50c-1-7913a0ab-be93-4636-b13d-86edf263b560 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 2 110 200 90 consumer-kafka-src50c-1-7913a0ab-be93-4636-b13d-86edf263b560 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 20 111 200 89 consumer-kafka-src50c-1-7913a0ab-be93-4636-b13d-86edf263b560 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 16 110 200 90 consumer-kafka-src50c-1-7913a0ab-be93-4636-b13d-86edf263b560 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 6 109 200 91 consumer-kafka-src50c-1-7913a0ab-be93-4636-b13d-86edf263b560 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 28 109 200 91 consumer-kafka-src50c-1-7913a0ab-be93-4636-b13d-86edf263b560 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 4 109 200 91 consumer-kafka-src50c-1-7913a0ab-be93-4636-b13d-86edf263b560 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 24 109 200 91 consumer-kafka-src50c-1-7913a0ab-be93-4636-b13d-86edf263b560 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 10 110 200 90 consumer-kafka-src50c-1-7913a0ab-be93-4636-b13d-86edf263b560 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 30 109 200 91 consumer-kafka-src50c-1-7913a0ab-be93-4636-b13d-86edf263b560 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 32 110 200 90 consumer-kafka-src50c-1-7913a0ab-be93-4636-b13d-86edf263b560 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 12 109 200 91 consumer-kafka-src50c-1-7913a0ab-be93-4636-b13d-86edf263b560 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 17 109 200 91 consumer-kafka-src50c-1-8dfd3d83-b2c5-4213-beb9-d1ed9a896304 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 27 109 200 91 consumer-kafka-src50c-1-8dfd3d83-b2c5-4213-beb9-d1ed9a896304 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 33 109 200 91 consumer-kafka-src50c-1-8dfd3d83-b2c5-4213-beb9-d1ed9a896304 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 11 109 200 91 consumer-kafka-src50c-1-8dfd3d83-b2c5-4213-beb9-d1ed9a896304 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 7 109 200 91 consumer-kafka-src50c-1-8dfd3d83-b2c5-4213-beb9-d1ed9a896304 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 21 110 200 90 consumer-kafka-src50c-1-8dfd3d83-b2c5-4213-beb9-d1ed9a896304 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 1 109 200 91 consumer-kafka-src50c-1-8dfd3d83-b2c5-4213-beb9-d1ed9a896304 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 3 110 200 90 consumer-kafka-src50c-1-8dfd3d83-b2c5-4213-beb9-d1ed9a896304 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 15 109 200 91 consumer-kafka-src50c-1-8dfd3d83-b2c5-4213-beb9-d1ed9a896304 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 13 108 200 92 consumer-kafka-src50c-1-8dfd3d83-b2c5-4213-beb9-d1ed9a896304 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 23 109 200 91 consumer-kafka-src50c-1-8dfd3d83-b2c5-4213-beb9-d1ed9a896304 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 29 109 200 91 consumer-kafka-src50c-1-8dfd3d83-b2c5-4213-beb9-d1ed9a896304 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 5 110 200 90 consumer-kafka-src50c-1-8dfd3d83-b2c5-4213-beb9-d1ed9a896304 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 31 108 200 92 consumer-kafka-src50c-1-8dfd3d83-b2c5-4213-beb9-d1ed9a896304 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 19 109 200 91 consumer-kafka-src50c-1-8dfd3d83-b2c5-4213-beb9-d1ed9a896304 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 9 111 200 89 consumer-kafka-src50c-1-8dfd3d83-b2c5-4213-beb9-d1ed9a896304 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 25 109 200 91 consumer-kafka-src50c-1-8dfd3d83-b2c5-4213-beb9-d1ed9a896304 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 38 109 200 91 consumer-kafka-src50c-2-fb2fc654-e003-4eac-835d-1b857a89bbef /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 48 109 200 91 consumer-kafka-src50c-2-fb2fc654-e003-4eac-835d-1b857a89bbef /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 47 109 200 91 consumer-kafka-src50c-2-fb2fc654-e003-4eac-835d-1b857a89bbef /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 49 112 200 88 consumer-kafka-src50c-2-fb2fc654-e003-4eac-835d-1b857a89bbef /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 43 109 200 91 consumer-kafka-src50c-2-fb2fc654-e003-4eac-835d-1b857a89bbef /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 40 112 200 88 consumer-kafka-src50c-2-fb2fc654-e003-4eac-835d-1b857a89bbef /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 41 109 200 91 consumer-kafka-src50c-2-fb2fc654-e003-4eac-835d-1b857a89bbef /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 42 109 200 91 consumer-kafka-src50c-2-fb2fc654-e003-4eac-835d-1b857a89bbef /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 45 109 200 91 consumer-kafka-src50c-2-fb2fc654-e003-4eac-835d-1b857a89bbef /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 34 112 200 88 consumer-kafka-src50c-2-fb2fc654-e003-4eac-835d-1b857a89bbef /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 37 113 200 87 consumer-kafka-src50c-2-fb2fc654-e003-4eac-835d-1b857a89bbef /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 44 109 200 91 consumer-kafka-src50c-2-fb2fc654-e003-4eac-835d-1b857a89bbef /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 36 109 200 91 consumer-kafka-src50c-2-fb2fc654-e003-4eac-835d-1b857a89bbef /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 39 109 200 91 consumer-kafka-src50c-2-fb2fc654-e003-4eac-835d-1b857a89bbef /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 46 109 200 91 consumer-kafka-src50c-2-fb2fc654-e003-4eac-835d-1b857a89bbef /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 35 109 200 91 consumer-kafka-src50c-2-fb2fc654-e003-4eac-835d-1b857a89bbef /172.30.38.201 consumer-kafka-src50c-2
pod "kafka-consumer-groups" deleted
Then I did the scaling:
TZ=Z date
Tue Jun 14 15:28:20 UTC 2022
aslom@m:~/Documents/awsm/go/src/github.com/aslom/repdroduce-kafka-source|main ⇒ k scale --replicas=50 kafkasources/kafka-src50
kafkasource.sources.knative.dev/kafka-src50 scaled
aslom@m:~/Documents/awsm/go/src/github.com/aslom/repdroduce-kafka-source|main ⇒ k -n knative-eventing get po|grep kafka-source
kafka-source-controller-5bf84b874-p5hjs 1/1 Running 0 21m
kafka-source-dispatcher-0 1/1 Running 0 20m
kafka-source-dispatcher-1 1/1 Running 0 21m
kafka-source-dispatcher-2 1/1 Running 0 21m
Towards the end of the test:
aslom@m:~/Documents/awsm/go/src/github.com/aslom/repdroduce-kafka-source|main ⇒ TZ=Z date
Tue Jun 14 16:26:04 UTC 2022
aslom@m:~/Documents/awsm/go/src/github.com/aslom/repdroduce-kafka-source|main ⇒ kubectl run curl --image=curlimages/curl --rm=true --restart=Never -ti -- http://kafkascraper/stats
source kafka-src50 received: 3408 messages, 954 are dups and 0 non 200s.
pod "curl" deleted
kubectl -n kafka run kafka-consumer-groups -ti --image=quay.io/strimzi/kafka:0.28.0-kafka-3.1.0 --rm=true --restart=Never -- bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --describe --group kafka-src50c
If you don't see a command prompt, try pressing enter.
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
kafka-src50c topic50c 27 153 200 47 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 14 194 200 6 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 11 165 200 35 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 7 166 200 34 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 8 194 200 6 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 2 194 200 6 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 20 195 200 5 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 13 165 200 35 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 16 194 200 6 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 29 156 200 44 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 5 166 200 34 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 32 172 200 28 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 31 149 200 51 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 28 200 200 0 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 18 194 200 6 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 17 165 200 35 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 4 194 200 6 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 24 171 200 29 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 10 195 200 5 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 25 163 200 37 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 30 171 200 29 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 22 184 200 16 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 21 167 200 33 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 1 166 200 34 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 26 171 200 29 consumer-kafka-src50c-2-9e126566-142d-42ca-8ad8-2393802810b9 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 38 200 200 0 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 33 146 200 54 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 0 191 200 9 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 9 149 200 51 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 48 136 200 64 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 47 146 200 54 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 49 139 200 61 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 40 200 200 0 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 23 146 200 54 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 41 184 200 16 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 12 174 200 26 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 6 174 200 26 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 42 137 200 63 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 19 147 200 53 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 45 136 200 64 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 34 145 200 55 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 3 148 200 52 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 37 142 200 58 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 44 200 200 0 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 15 147 200 53 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 43 136 200 64 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 36 137 200 63 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 39 138 200 62 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 46 136 200 64 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 35 136 200 64 consumer-kafka-src50c-1-b9642969-7e35-45c9-b627-0563f4b49ece /172.30.196.183 consumer-kafka-src50c-1
pod "kafka-consumer-groups" deleted
Towards the end, it seems the imbalance increased and I got a lot of duplicates:
TZ=Z date; kubectl run curl --image=curlimages/curl --rm=true --restart=Never -ti -- http://kafkascraper/stats
Tue Jun 14 16:49:44 UTC 2022
source kafka-src50 received: 4227 messages, 1297 are dups and 0 non 200s.
pod "curl" deleted
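For comparing runs, the duplicate rate can be computed from the scraper stats line; a small sketch, with the line format assumed from the output above:

```python
import re

def dup_rate(stats_line):
    """Parse 'source X received: N messages, D are dups and E non 200s.' and return D/N."""
    m = re.search(r"received: (\d+) messages, (\d+) are dups", stats_line)
    total, dups = int(m.group(1)), int(m.group(2))
    return dups / total

line = "source kafka-src50 received: 4227 messages, 1297 are dups and 0 non 200s."
print(round(dup_rate(line), 3))  # ~0.307, i.e. roughly 30% duplicates in this run
```

Tracking this ratio across snapshots (0/507, then 3/955, then 954/3408, then 1297/4227 above) shows the duplicates appearing only after the scale-up triggered rebalances.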
TZ=Z date;kubectl -n kafka run kafka-consumer-groups -ti --image=quay.io/strimzi/kafka:0.28.0-kafka-3.1.0 --rm=true --restart=Never -- bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --describe --group kafka-src50c
Tue Jun 14 16:50:38 UTC 2022
If you don't see a command prompt, try pressing enter.
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
kafka-src50c topic50c 17 195 200 5 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 27 183 200 17 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 33 175 200 25 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 11 195 200 5 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 7 196 200 4 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 21 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 1 195 200 5 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 3 177 200 23 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 15 176 200 24 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 13 194 200 6 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 23 175 200 25 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 29 186 200 14 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 5 196 200 4 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 31 179 200 21 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 19 176 200 24 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 9 178 200 22 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 25 193 200 7 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 18 195 200 5 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 14 197 200 3 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 0 191 200 9 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 22 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 26 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 8 197 200 3 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 2 197 200 3 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 20 198 200 2 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 16 195 200 5 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 6 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 28 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 4 194 200 6 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 24 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 10 195 200 5 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 30 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 32 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 12 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 38 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 48 150 200 50 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 47 159 200 41 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 49 153 200 47 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 43 150 200 50 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 40 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 41 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 42 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 45 150 200 50 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 34 159 200 41 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 37 156 200 44 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 44 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 36 151 200 49 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 39 167 200 33 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 46 150 200 50 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 35 163 200 37 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
pod "kafka-consumer-groups" deleted
Towards the very end:
TZ=Z date; kubectl run curl --image=curlimages/curl --rm=true --restart=Never -ti -- http://kafkascraper/stats
Tue Jun 14 17:00:23 UTC 2022
source kafka-src50 received: 4803 messages, 1297 are dups and 0 non 200s.
pod "curl" deleted
TZ=Z date;kubectl -n kafka run kafka-consumer-groups -ti --image=quay.io/strimzi/kafka:0.28.0-kafka-3.1.0 --rm=true --restart=Never -- bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --describe --group kafka-src50c
Tue Jun 14 17:00:31 UTC 2022
If you don't see a command prompt, try pressing enter.
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
kafka-src50c topic50c 17 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 27 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 33 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 11 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 7 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 21 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 1 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 3 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 15 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 13 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 23 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 29 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 5 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 31 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 19 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 9 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 25 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 18 195 200 5 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 14 197 200 3 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 0 191 200 9 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 22 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 26 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 8 197 200 3 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 2 197 200 3 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 20 198 200 2 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 16 195 200 5 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 6 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 28 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 4 194 200 6 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 24 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 10 195 200 5 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 30 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 32 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 12 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 38 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 48 177 200 23 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 47 187 200 13 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 49 180 200 20 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 43 177 200 23 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 40 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 41 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 42 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 45 177 200 23 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 34 186 200 14 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 37 183 200 17 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 44 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 36 178 200 22 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 39 195 200 5 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 46 177 200 23 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 35 191 200 9 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
pod "kafka-consumer-groups" deleted
Finished duplicates1-run2. Interestingly, there is still a small LAG on some partitions, but all events were delivered:
TZ=Z date; kubectl run curl --image=curlimages/curl --rm=true --restart=Never -ti -- http://kafkascraper/stats
Tue Jun 14 17:11:27 UTC 2022
source kafka-src50 received: 5000 messages, 1297 are dups and 0 non 200s.
pod "curl" deleted
TZ=Z date;kubectl -n kafka run kafka-consumer-groups -ti --image=quay.io/strimzi/kafka:0.28.0-kafka-3.1.0 --rm=true --restart=Never -- bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --describe --group kafka-src50c
Tue Jun 14 17:12:37 UTC 2022
If you don't see a command prompt, try pressing enter.
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
kafka-src50c topic50c 17 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 27 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 33 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 11 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 7 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 21 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 1 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 3 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 15 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 13 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 23 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 29 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 5 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 31 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 19 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 9 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 25 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 18 195 200 5 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 14 197 200 3 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 0 191 200 9 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 22 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 26 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 8 197 200 3 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 2 197 200 3 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 20 198 200 2 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 16 195 200 5 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 6 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 28 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 4 194 200 6 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 24 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 10 195 200 5 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 30 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 32 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 12 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 38 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 48 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 47 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 49 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 43 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 40 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 41 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 42 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 45 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 34 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 37 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 44 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 36 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 39 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 46 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 35 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
pod "kafka-consumer-groups" deleted
Logs: https://github.com/aslom/reproduce-kafka-source-results/tree/main/duplicates1-run2
Ten minutes later the LAGs were still there:
TZ=Z date; kubectl run curl --image=curlimages/curl --rm=true --restart=Never -ti -- http://kafkascraper/stats
Tue Jun 14 17:21:12 UTC 2022
source kafka-src50 received: 5000 messages, 1297 are dups and 0 non 200s.
pod "curl" deleted
TZ=Z date;kubectl -n kafka run kafka-consumer-groups -ti --image=quay.io/strimzi/kafka:0.28.0-kafka-3.1.0 --rm=true --restart=Never -- bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --describe --group kafka-src50c
Tue Jun 14 17:21:31 UTC 2022
If you don't see a command prompt, try pressing enter.
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
kafka-src50c topic50c 17 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 27 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 33 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 11 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 7 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 21 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 1 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 3 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 15 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 13 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 23 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 29 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 5 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 31 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 19 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 9 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 25 200 200 0 consumer-kafka-src50c-2-eab109aa-0fa5-48df-a6e1-552435d329f5 /172.30.38.201 consumer-kafka-src50c-2
kafka-src50c topic50c 18 195 200 5 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 14 197 200 3 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 0 191 200 9 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 22 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 26 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 8 197 200 3 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 2 197 200 3 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 20 198 200 2 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 16 195 200 5 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 6 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 28 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 4 194 200 6 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 24 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 10 195 200 5 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 30 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 32 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 12 200 200 0 consumer-kafka-src50c-1-7126fc77-f672-46e4-8d89-930b18c53199 /172.30.125.159 consumer-kafka-src50c-1
kafka-src50c topic50c 38 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 48 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 47 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 49 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 43 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 40 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 41 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 42 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 45 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 34 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 37 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 44 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 36 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 39 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 46 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
kafka-src50c topic50c 35 200 200 0 consumer-kafka-src50c-1-57f55b30-2141-4d45-8a4e-dd6e4c522314 /172.30.196.183 consumer-kafka-src50c-1
pod "kafka-consumer-groups" deleted
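When scripting repeated runs, the `/stats` line returned by the scraper can be parsed to compare received/duplicate counts automatically. A minimal Go sketch; the line format is assumed from the sample output above, not a documented contract:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// statsRe matches the kafkascraper /stats line format seen in the runs above;
// this pattern is an assumption based on the sample output.
var statsRe = regexp.MustCompile(`received: (\d+) messages, (\d+) are dups and (\d+) non 200s`)

// parseStats extracts the (received, dups, non200) counts from a /stats line.
func parseStats(line string) (int, int, int, error) {
	m := statsRe.FindStringSubmatch(line)
	if m == nil {
		return 0, 0, 0, fmt.Errorf("unrecognized stats line: %q", line)
	}
	recv, _ := strconv.Atoi(m[1])
	dups, _ := strconv.Atoi(m[2])
	non200, _ := strconv.Atoi(m[3])
	return recv, dups, non200, nil
}

func main() {
	r, d, n, err := parseStats("source kafka-src50 received: 4803 messages, 1297 are dups and 0 non 200s.")
	if err != nil {
		panic(err)
	}
	fmt.Println(r, d, n) // prints: 4803 1297 0
}
```

This makes it easy to fail a test run when `dups > 0` instead of eyeballing the curl output.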
I have re-run the tests with the updated config, setting the sink delay to 1 second (the sink sleeps for one second before replying). I ran the tests twice, each time sending 5000 messages, and there were no duplicates:
kubectl run curl --image=curlimages/curl --rm=true --restart=Never -ti -- http://kafkascraper/stats
source kafka-src50 received: 10000 messages, 0 are dups and 0 non 200s.
pod "curl" deleted
The test took about 2 minutes. There was no scaling, so no re-balancing occurred.
Logs and details of the configuration (rate-limiting, max.partition.fetch.bytes, auto.commit.interval.ms) used are in https://github.com/aslom/reproduce-kafka-source-results/tree/main/duplicates2-run1
Partitions were consumed quickly:
kubectl -n kafka run kafka-consumer-groups -ti --image=quay.io/strimzi/kafka:0.28.0-kafka-3.1.0 --rm=true --restart=Never -- bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --describe --group kafka-src50c
If you don't see a command prompt, try pressing enter.
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
kafka-src50c topic50c 27 400 400 0 consumer-kafka-src50c-1-5b6aa4db-5d2f-4bf6-b77c-ced411034a2b /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 26 400 400 0 consumer-kafka-src50c-1-5b6aa4db-5d2f-4bf6-b77c-ced411034a2b /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 20 400 400 0 consumer-kafka-src50c-1-5b6aa4db-5d2f-4bf6-b77c-ced411034a2b /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 23 400 400 0 consumer-kafka-src50c-1-5b6aa4db-5d2f-4bf6-b77c-ced411034a2b /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 29 400 400 0 consumer-kafka-src50c-1-5b6aa4db-5d2f-4bf6-b77c-ced411034a2b /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 32 400 400 0 consumer-kafka-src50c-1-5b6aa4db-5d2f-4bf6-b77c-ced411034a2b /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 31 400 400 0 consumer-kafka-src50c-1-5b6aa4db-5d2f-4bf6-b77c-ced411034a2b /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 18 400 400 0 consumer-kafka-src50c-1-5b6aa4db-5d2f-4bf6-b77c-ced411034a2b /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 17 400 400 0 consumer-kafka-src50c-1-5b6aa4db-5d2f-4bf6-b77c-ced411034a2b /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 24 400 400 0 consumer-kafka-src50c-1-5b6aa4db-5d2f-4bf6-b77c-ced411034a2b /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 33 400 400 0 consumer-kafka-src50c-1-5b6aa4db-5d2f-4bf6-b77c-ced411034a2b /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 25 400 400 0 consumer-kafka-src50c-1-5b6aa4db-5d2f-4bf6-b77c-ced411034a2b /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 30 400 400 0 consumer-kafka-src50c-1-5b6aa4db-5d2f-4bf6-b77c-ced411034a2b /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 22 400 400 0 consumer-kafka-src50c-1-5b6aa4db-5d2f-4bf6-b77c-ced411034a2b /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 21 400 400 0 consumer-kafka-src50c-1-5b6aa4db-5d2f-4bf6-b77c-ced411034a2b /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 19 400 400 0 consumer-kafka-src50c-1-5b6aa4db-5d2f-4bf6-b77c-ced411034a2b /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 28 400 400 0 consumer-kafka-src50c-1-5b6aa4db-5d2f-4bf6-b77c-ced411034a2b /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 4 400 400 0 consumer-kafka-src50c-1-4b777dda-78c8-46e3-a126-a3021da3e823 /172.30.38.233 consumer-kafka-src50c-1
kafka-src50c topic50c 14 400 400 0 consumer-kafka-src50c-1-4b777dda-78c8-46e3-a126-a3021da3e823 /172.30.38.233 consumer-kafka-src50c-1
kafka-src50c topic50c 11 400 400 0 consumer-kafka-src50c-1-4b777dda-78c8-46e3-a126-a3021da3e823 /172.30.38.233 consumer-kafka-src50c-1
kafka-src50c topic50c 0 400 400 0 consumer-kafka-src50c-1-4b777dda-78c8-46e3-a126-a3021da3e823 /172.30.38.233 consumer-kafka-src50c-1
kafka-src50c topic50c 7 400 400 0 consumer-kafka-src50c-1-4b777dda-78c8-46e3-a126-a3021da3e823 /172.30.38.233 consumer-kafka-src50c-1
kafka-src50c topic50c 1 400 400 0 consumer-kafka-src50c-1-4b777dda-78c8-46e3-a126-a3021da3e823 /172.30.38.233 consumer-kafka-src50c-1
kafka-src50c topic50c 8 400 400 0 consumer-kafka-src50c-1-4b777dda-78c8-46e3-a126-a3021da3e823 /172.30.38.233 consumer-kafka-src50c-1
kafka-src50c topic50c 3 400 400 0 consumer-kafka-src50c-1-4b777dda-78c8-46e3-a126-a3021da3e823 /172.30.38.233 consumer-kafka-src50c-1
kafka-src50c topic50c 2 400 400 0 consumer-kafka-src50c-1-4b777dda-78c8-46e3-a126-a3021da3e823 /172.30.38.233 consumer-kafka-src50c-1
kafka-src50c topic50c 15 400 400 0 consumer-kafka-src50c-1-4b777dda-78c8-46e3-a126-a3021da3e823 /172.30.38.233 consumer-kafka-src50c-1
kafka-src50c topic50c 13 400 400 0 consumer-kafka-src50c-1-4b777dda-78c8-46e3-a126-a3021da3e823 /172.30.38.233 consumer-kafka-src50c-1
kafka-src50c topic50c 16 400 400 0 consumer-kafka-src50c-1-4b777dda-78c8-46e3-a126-a3021da3e823 /172.30.38.233 consumer-kafka-src50c-1
kafka-src50c topic50c 5 400 400 0 consumer-kafka-src50c-1-4b777dda-78c8-46e3-a126-a3021da3e823 /172.30.38.233 consumer-kafka-src50c-1
kafka-src50c topic50c 12 400 400 0 consumer-kafka-src50c-1-4b777dda-78c8-46e3-a126-a3021da3e823 /172.30.38.233 consumer-kafka-src50c-1
kafka-src50c topic50c 6 400 400 0 consumer-kafka-src50c-1-4b777dda-78c8-46e3-a126-a3021da3e823 /172.30.38.233 consumer-kafka-src50c-1
kafka-src50c topic50c 10 400 400 0 consumer-kafka-src50c-1-4b777dda-78c8-46e3-a126-a3021da3e823 /172.30.38.233 consumer-kafka-src50c-1
kafka-src50c topic50c 9 400 400 0 consumer-kafka-src50c-1-4b777dda-78c8-46e3-a126-a3021da3e823 /172.30.38.233 consumer-kafka-src50c-1
kafka-src50c topic50c 38 400 400 0 consumer-kafka-src50c-2-beb98e64-89b2-44b4-a73f-306e9f832162 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 48 400 400 0 consumer-kafka-src50c-2-beb98e64-89b2-44b4-a73f-306e9f832162 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 47 400 400 0 consumer-kafka-src50c-2-beb98e64-89b2-44b4-a73f-306e9f832162 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 49 400 400 0 consumer-kafka-src50c-2-beb98e64-89b2-44b4-a73f-306e9f832162 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 43 400 400 0 consumer-kafka-src50c-2-beb98e64-89b2-44b4-a73f-306e9f832162 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 40 400 400 0 consumer-kafka-src50c-2-beb98e64-89b2-44b4-a73f-306e9f832162 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 41 400 400 0 consumer-kafka-src50c-2-beb98e64-89b2-44b4-a73f-306e9f832162 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 42 400 400 0 consumer-kafka-src50c-2-beb98e64-89b2-44b4-a73f-306e9f832162 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 45 400 400 0 consumer-kafka-src50c-2-beb98e64-89b2-44b4-a73f-306e9f832162 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 34 400 400 0 consumer-kafka-src50c-2-beb98e64-89b2-44b4-a73f-306e9f832162 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 37 400 400 0 consumer-kafka-src50c-2-beb98e64-89b2-44b4-a73f-306e9f832162 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 44 400 400 0 consumer-kafka-src50c-2-beb98e64-89b2-44b4-a73f-306e9f832162 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 36 400 400 0 consumer-kafka-src50c-2-beb98e64-89b2-44b4-a73f-306e9f832162 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 39 400 400 0 consumer-kafka-src50c-2-beb98e64-89b2-44b4-a73f-306e9f832162 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 46 400 400 0 consumer-kafka-src50c-2-beb98e64-89b2-44b4-a73f-306e9f832162 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 35 400 400 0 consumer-kafka-src50c-2-beb98e64-89b2-44b4-a73f-306e9f832162 /172.30.125.163 consumer-kafka-src50c-2
pod "kafka-consumer-groups" deleted
I re-ran the original test with a sink delay of 10 seconds and got:
source kafka-src50 received: 5000 messages, 1155 are dups and 0 non 200s.
It took 42 minutes to send 5000 messages with a sink delay of 10 seconds and 50 partitions.
I scaled up twice. First, about 1 minute into the test:
k scale --replicas=25 kafkasources/kafka-src50
And then, a few minutes later, scaled again:
k scale --replicas=50 kafkasources/kafka-src50
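The two manual scale-ups can be wrapped in a small helper so each step gets a consistent timestamp; a minimal sketch (the scale_source name and the KUBECTL override are mine, not part of the original setup):

```shell
#!/bin/sh
# Scale a KafkaSource and print a UTC timestamp, mirroring the manual steps above.
# KUBECTL can be overridden (e.g. KUBECTL=echo) for a dry run.
scale_source() {
  name=$1
  replicas=$2
  TZ=Z date
  ${KUBECTL:-kubectl} scale --replicas="$replicas" "kafkasources/$name"
}

# Example: repeat the two scale-ups from this test run.
# scale_source kafka-src50 25
# sleep 300   # a few minutes later
# scale_source kafka-src50 50
```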
Logs are in https://github.com/aslom/reproduce-kafka-source-results/tree/main/duplicates1-run3
Here is what I saw as far as receiving messages after scaling:
TZ=Z date; k scale --replicas=50 kafkasources/kafka-src50
Thu Jul 21 18:07:05 UTC 2022
kafkasource.sources.knative.dev/kafka-src50 scaled
TZ=Z date; kubectl run curl --image=curlimages/curl --rm=true --restart=Never -ti -- http://kafkascraper/stats
Thu Jul 21 18:07:15 UTC 2022
source kafka-src50 received: 1186 messages, 0 are dups and 0 non 200s.
pod "curl" deleted
TZ=Z date; kubectl run curl --image=curlimages/curl --rm=true --restart=Never -ti -- http://kafkascraper/stats
Thu Jul 21 18:07:38 UTC 2022
source kafka-src50 received: 1216 messages, 0 are dups and 0 non 200s.
pod "curl" deleted
TZ=Z date; kubectl run curl --image=curlimages/curl --rm=true --restart=Never -ti -- http://kafkascraper/stats
Thu Jul 21 18:09:43 UTC 2022
source kafka-src50 received: 1452 messages, 50 are dups and 0 non 200s.
pod "curl" deleted
TZ=Z date; kubectl run curl --image=curlimages/curl --rm=true --restart=Never -ti -- http://kafkascraper/stats
Thu Jul 21 18:19:23 UTC 2022
source kafka-src50 received: 2906 messages, 246 are dups and 0 non 200s.
pod "curl" deleted
TZ=Z date; kubectl run curl --image=curlimages/curl --rm=true --restart=Never -ti -- http://kafkascraper/stats
Thu Jul 21 18:28:15 UTC 2022
source kafka-src50 received: 4049 messages, 868 are dups and 0 non 200s.
pod "curl" deleted
TZ=Z date; kubectl run curl --image=curlimages/curl --rm=true --restart=Never -ti -- http://kafkascraper/stats
Thu Jul 21 18:29:37 UTC 2022
source kafka-src50 received: 4190 messages, 951 are dups and 0 non 200s.
pod "curl" deleted
TZ=Z date; kubectl run curl --image=curlimages/curl --rm=true --restart=Never -ti -- http://kafkascraper/stats
Thu Jul 21 18:31:01 UTC 2022
source kafka-src50 received: 4344 messages, 1049 are dups and 0 non 200s.
pod "curl" deleted
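The repeated /stats checks above can be fed through a tiny parser to track the duplicate count over time; a minimal sketch that assumes the exact one-line output format shown above (the field positions are an assumption):

```shell
#!/bin/sh
# Extract message, duplicate, and non-200 counts from a scraper /stats line like:
#   source kafka-src50 received: 4344 messages, 1049 are dups and 0 non 200s.
parse_stats() {
  awk '/received:/ { printf "messages=%s dups=%s non200=%s\n", $4, $6, $10 }'
}

# Example:
# kubectl run curl --image=curlimages/curl --rm=true --restart=Never -ti \
#   -- http://kafkascraper/stats | parse_stats
```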
I also observed that partitions were not consumed at the same speed. In the middle of the test:
TZ=Z date;kubectl -n kafka run kafka-consumer-groups -ti --image=quay.io/strimzi/kafka:0.28.0-kafka-3.1.0 --rm=true --restart=Never -- bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --describe --group kafka-src50c
Thu Jul 21 18:28:09 UTC 2022
If you don't see a command prompt, try pressing enter.
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
kafka-src50c topic50c 38 493 500 7 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 27 442 500 58 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 33 462 500 38 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 48 484 500 16 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 26 442 500 58 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 47 475 500 25 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 49 476 500 24 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 43 475 500 25 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 40 496 500 4 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 29 443 500 57 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 41 480 500 20 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 32 465 500 35 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 28 473 500 27 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 45 480 500 20 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 34 493 500 7 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 25 465 500 35 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 30 483 500 17 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 37 475 500 25 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 44 478 500 22 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 36 497 500 3 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 39 478 500 22 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 46 500 500 0 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 35 477 500 23 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 31 443 500 57 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 42 500 500 0 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 14 449 500 51 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 11 450 500 50 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 0 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 7 450 500 50 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 1 451 500 49 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 8 451 500 49 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 3 451 500 49 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 2 452 500 48 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 13 452 500 48 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 23 482 500 18 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 16 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 5 450 500 50 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 12 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 6 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 19 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 18 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 17 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 4 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 24 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 10 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 9 471 500 29 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 22 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 21 453 500 47 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 15 451 500 49 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 20 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
pod "kafka-consumer-groups" deleted
Toward the end of the test:
TZ=Z date;kubectl -n kafka run kafka-consumer-groups -ti --image=quay.io/strimzi/kafka:0.28.0-kafka-3.1.0 --rm=true --restart=Never -- bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --describe --group kafka-src50c
Thu Jul 21 18:38:31 UTC 2022
If you don't see a command prompt, try pressing enter.
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
kafka-src50c topic50c 18 500 500 0 consumer-kafka-src50c-1-206d6da4-f71d-46cd-ac06-a902ab9dcef2 /172.30.9.36 consumer-kafka-src50c-1
kafka-src50c topic50c 27 498 500 2 consumer-kafka-src50c-1-206d6da4-f71d-46cd-ac06-a902ab9dcef2 /172.30.9.36 consumer-kafka-src50c-1
kafka-src50c topic50c 45 500 500 0 consumer-kafka-src50c-1-206d6da4-f71d-46cd-ac06-a902ab9dcef2 /172.30.9.36 consumer-kafka-src50c-1
kafka-src50c topic50c 0 500 500 0 consumer-kafka-src50c-1-206d6da4-f71d-46cd-ac06-a902ab9dcef2 /172.30.9.36 consumer-kafka-src50c-1
kafka-src50c topic50c 48 500 500 0 consumer-kafka-src50c-1-206d6da4-f71d-46cd-ac06-a902ab9dcef2 /172.30.9.36 consumer-kafka-src50c-1
kafka-src50c topic50c 21 500 500 0 consumer-kafka-src50c-1-206d6da4-f71d-46cd-ac06-a902ab9dcef2 /172.30.9.36 consumer-kafka-src50c-1
kafka-src50c topic50c 3 500 500 0 consumer-kafka-src50c-1-206d6da4-f71d-46cd-ac06-a902ab9dcef2 /172.30.9.36 consumer-kafka-src50c-1
kafka-src50c topic50c 15 500 500 0 consumer-kafka-src50c-1-206d6da4-f71d-46cd-ac06-a902ab9dcef2 /172.30.9.36 consumer-kafka-src50c-1
kafka-src50c topic50c 12 500 500 0 consumer-kafka-src50c-1-206d6da4-f71d-46cd-ac06-a902ab9dcef2 /172.30.9.36 consumer-kafka-src50c-1
kafka-src50c topic50c 6 500 500 0 consumer-kafka-src50c-1-206d6da4-f71d-46cd-ac06-a902ab9dcef2 /172.30.9.36 consumer-kafka-src50c-1
kafka-src50c topic50c 42 500 500 0 consumer-kafka-src50c-1-206d6da4-f71d-46cd-ac06-a902ab9dcef2 /172.30.9.36 consumer-kafka-src50c-1
kafka-src50c topic50c 24 500 500 0 consumer-kafka-src50c-1-206d6da4-f71d-46cd-ac06-a902ab9dcef2 /172.30.9.36 consumer-kafka-src50c-1
kafka-src50c topic50c 33 500 500 0 consumer-kafka-src50c-1-206d6da4-f71d-46cd-ac06-a902ab9dcef2 /172.30.9.36 consumer-kafka-src50c-1
kafka-src50c topic50c 9 500 500 0 consumer-kafka-src50c-1-206d6da4-f71d-46cd-ac06-a902ab9dcef2 /172.30.9.36 consumer-kafka-src50c-1
kafka-src50c topic50c 30 500 500 0 consumer-kafka-src50c-1-206d6da4-f71d-46cd-ac06-a902ab9dcef2 /172.30.9.36 consumer-kafka-src50c-1
kafka-src50c topic50c 36 500 500 0 consumer-kafka-src50c-1-206d6da4-f71d-46cd-ac06-a902ab9dcef2 /172.30.9.36 consumer-kafka-src50c-1
kafka-src50c topic50c 39 500 500 0 consumer-kafka-src50c-1-206d6da4-f71d-46cd-ac06-a902ab9dcef2 /172.30.9.36 consumer-kafka-src50c-1
kafka-src50c topic50c 4 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 34 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 7 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 22 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 1 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 37 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 49 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 43 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 40 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 13 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 16 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 46 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 31 491 500 9 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 10 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 25 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 19 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 28 500 500 0 consumer-kafka-src50c-1-84c305dc-9a25-41d4-9ae3-fb82ae12e525 /172.30.196.189 consumer-kafka-src50c-1
kafka-src50c topic50c 14 500 500 0 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 11 500 500 0 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 26 496 500 4 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 44 500 500 0 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 2 500 500 0 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 20 500 500 0 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 23 500 500 0 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 29 498 500 2 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 41 500 500 0 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 32 500 500 0 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 38 500 500 0 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 17 500 500 0 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 8 500 500 0 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 47 500 500 0 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 5 500 500 0 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
kafka-src50c topic50c 35 500 500 0 consumer-kafka-src50c-2-7d84ae7d-51d0-468e-a308-47c4629f6185 /172.30.125.163 consumer-kafka-src50c-2
pod "kafka-consumer-groups" deleted
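Uneven consumption across partitions is easier to spot if the kafka-consumer-groups.sh output is aggregated; a minimal sketch that sums the LAG column per CLIENT-ID (the nine-column layout is taken from the output above):

```shell
#!/bin/sh
# Sum remaining LAG per CLIENT-ID from kafka-consumer-groups.sh --describe output.
# Assumed columns: GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
sum_lag_per_client() {
  awk '$1 != "GROUP" && NF >= 9 && $6 ~ /^[0-9]+$/ { lag[$9] += $6 }
       END { for (c in lag) printf "%s %d\n", c, lag[c] }'
}

# Example:
# kubectl -n kafka run kafka-consumer-groups -ti --image=quay.io/strimzi/kafka:0.28.0-kafka-3.1.0 \
#   --rm=true --restart=Never -- bin/kafka-consumer-groups.sh \
#   --bootstrap-server my-cluster-kafka-bootstrap:9092 --describe --group kafka-src50c \
#   | sum_lag_per_client
```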
Looking into the logs I saw this exception: java.lang.IndexOutOfBoundsException: bitIndex < 0: -1 at java.base/java.util.BitSet.set(BitSet.java:447)
{"@timestamp":"2022-07-21T18:32:38.614Z","@version":"1","message":"Unhandled exception","logger_name":"io.vertx.core.impl.ContextImpl","thread_name":"vert.x-worker-thread-17","level":"ERROR","level_value":40000,"stack_trace":"java.lang.IndexOutOfBoundsException: bitIndex < 0: -1\n\tat java.base/java.util.BitSet.set(BitSet.java:447)\n\tat dev.knative.eventing.kafka.broker.dispatcher.impl.consumer.OffsetManager$OffsetTracker.recordNewOffset(OffsetManager.java:267)\n\tat dev.knative.eventing.kafka.broker.dispatcher.impl.consumer.OffsetManager.commit(OffsetManager.java:151)\n\tat dev.knative.eventing.kafka.broker.dispatcher.impl.consumer.OffsetManager.successfullySentToSubscriber(OffsetManager.java:117)\n\tat dev.knative.eventing.kafka.broker.dispatcher.impl.RecordDispatcherImpl.onSubscriberSuccess(RecordDispatcherImpl.java:249)\n\tat dev.knative.eventing.kafka.broker.dispatcher.impl.RecordDispatcherImpl.lambda$onFilterMatching$1(RecordDispatcherImpl.java:230)\n\tat io.vertx.core.impl.future.FutureImpl$1.onSuccess(FutureImpl.java:91)\n\tat io.vertx.core.impl.future.FutureImpl$ListenerArray.onSuccess(FutureImpl.java:262)\n\tat io.vertx.core.impl.future.FutureBase.emitSuccess(FutureBase.java:60)\n\tat io.vertx.core.impl.future.FutureImpl.tryComplete(FutureImpl.java:211)\n\tat io.vertx.core.impl.future.Composition$1.onSuccess(Composition.java:62)\n\tat io.vertx.core.impl.future.FutureBase.emitSuccess(FutureBase.java:60)\n\tat io.vertx.core.impl.future.FutureImpl.addListener(FutureImpl.java:196)\n\tat io.vertx.core.impl.future.Composition.onSuccess(Composition.java:43)\n\tat io.vertx.core.impl.future.FutureImpl$ListenerArray.onSuccess(FutureImpl.java:262)\n\tat io.vertx.core.impl.future.FutureBase.emitSuccess(FutureBase.java:60)\n\tat io.vertx.core.impl.future.FutureImpl.tryComplete(FutureImpl.java:211)\n\tat io.vertx.core.impl.future.Composition$1.onSuccess(Composition.java:62)\n\tat io.vertx.core.impl.future.FutureBase.emitSuccess(FutureBase.java:60)\n\tat 
io.vertx.core.impl.future.SucceededFuture.addListener(SucceededFuture.java:88)\n\tat io.vertx.core.impl.future.Composition.onSuccess(Composition.java:43)\n\tat io.vertx.core.impl.future.FutureBase.emitSuccess(FutureBase.java:60)\n\tat io.vertx.core.impl.future.FutureImpl.tryComplete(FutureImpl.java:211)\n\tat io.vertx.core.impl.future.PromiseImpl.tryComplete(PromiseImpl.java:23)\n\tat dev.knative.eventing.kafka.broker.dispatcher.impl.http.WebClientCloudEventSender.lambda$send$5(WebClientCloudEventSender.java:175)\n\tat io.vertx.core.impl.future.FutureImpl$1.onSuccess(FutureImpl.java:91)\n\tat io.vertx.core.impl.future.FutureImpl$ListenerArray.onSuccess(FutureImpl.java:262)\n\tat io.vertx.core.impl.future.FutureBase.emitSuccess(FutureBase.java:60)\n\tat io.vertx.core.impl.future.FutureImpl.tryComplete(FutureImpl.java:211)\n\tat io.vertx.core.impl.future.PromiseImpl.tryComplete(PromiseImpl.java:23)\n\tat io.vertx.core.impl.future.PromiseImpl.onSuccess(PromiseImpl.java:49)\n\tat io.vertx.core.impl.future.PromiseImpl.handle(PromiseImpl.java:41)\n\tat io.vertx.core.impl.future.PromiseImpl.handle(PromiseImpl.java:23)\n\tat io.vertx.ext.web.client.impl.HttpContext.handleDispatchResponse(HttpContext.java:400)\n\tat io.vertx.ext.web.client.impl.HttpContext.execute(HttpContext.java:387)\n\tat io.vertx.ext.web.client.impl.HttpContext.next(HttpContext.java:365)\n\tat io.vertx.ext.web.client.impl.HttpContext.fire(HttpContext.java:332)\n\tat io.vertx.ext.web.client.impl.HttpContext.dispatchResponse(HttpContext.java:294)\n\tat io.vertx.ext.web.client.impl.HttpContext.lambda$null$8(HttpContext.java:550)\n\tat io.vertx.core.impl.AbstractContext.dispatch(AbstractContext.java:100)\n\tat io.vertx.core.impl.WorkerContext.lambda$run$1(WorkerContext.java:83)\n\tat io.vertx.core.impl.TaskQueue.run(TaskQueue.java:76)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n\tat java.base/java.lang.Thread.run(Thread.java:833)\n"}
Another interesting error, also in the logs below: type: 'Warning' reason: 'InternalError' failed to schedule consumers: node \"\" not found
@aavarghese
{"level":"error","ts":"2022-07-21T18:29:13.920Z","logger":"kafka-source-controller","caller":"controller/controller.go:566","msg":"Reconcile error","knative.dev/pod":"kafka-source-controller-6fd9cf7f57-hq66x","knative.dev/controller":"knative.dev.eventing-kafka-broker.control-plane.pkg.reconciler.consumergroup.Reconciler","knative.dev/kind":"internal.kafka.eventing.knative.dev.ConsumerGroup","knative.dev/traceid":"e91d691a-ada4-4604-834c-c602eafbeb4a","knative.dev/key":"k9/3d743946-d03c-4bdf-91ef-64faa103e1c9","duration":0.072853123,"error":"failed to schedule consumers: node \"\" not found","stacktrace":"knative.dev/pkg/controller.(*Impl).handleErr\n\tknative.dev/pkg@v0.0.0-20220705130606-e60d250dc637/controller/controller.go:566\nknative.dev/pkg/controller.(*Impl).processNextWorkItem\n\tknative.dev/pkg@v0.0.0-20220705130606-e60d250dc637/controller/controller.go:543\nknative.dev/pkg/controller.(*Impl).RunContext.func3\n\tknative.dev/pkg@v0.0.0-20220705130606-e60d250dc637/controller/controller.go:491"}
{"level":"info","ts":"2022-07-21T18:29:13.921Z","logger":"kafka-source-controller.event-broadcaster","caller":"record/event.go:285","msg":"Event(v1.ObjectReference{Kind:\"ConsumerGroup\", Namespace:\"k9\", Name:\"3d743946-d03c-4bdf-91ef-64faa103e1c9\", UID:\"1ed88870-6c34-496c-bc69-13842df7c208\", APIVersion:\"internal.kafka.eventing.knative.dev/v1alpha1\", ResourceVersion:\"958216242\", FieldPath:\"\"}): type: 'Warning' reason: 'InternalError' failed to schedule consumers: node \"\" not found","knative.dev/pod":"kafka-source-controller-6fd9cf7f57-hq66x"}
{"level":"info","ts":"2022-07-21T18:29:57.987Z","logger":"kafka-source-controller","caller":"statefulset/autoscaler.go:122","msg":"error while refreshing scheduler state (will retry){error 26 0 node \"\" not found}","knative.dev/pod":"kafka-source-controller-6fd9cf7f57-hq66x"}
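For triage, the controller logs can be grepped to see how often the scheduling failure recurs; a minimal sketch (the function name is mine, and the namespace in the example is an assumption):

```shell
#!/bin/sh
# Count controller log lines that report the consumer-scheduling failure above.
count_schedule_errors() {
  grep -c 'failed to schedule consumers'
}

# Example (namespace is an assumption for this setup):
# kubectl logs deployment/kafka-source-controller -n knative-eventing | count_schedule_errors
```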
@aavarghese @pierDipi Short summary of results so far: using current code configured with rate limiting, CooperativeStickyAssignor, and changed max.partition.fetch.bytes and auto.commit settings (details in https://github.com/aslom/reproduce-kafka-source-results/tree/main/duplicates1-run3):
Compare: one month ago https://github.com/knative-sandbox/eventing-kafka-broker/issues/2240#issuecomment-1151455968 with current results https://github.com/knative-sandbox/eventing-kafka-broker/issues/2240#issuecomment-1191831657
Here https://github.com/knative-sandbox/eventing-kafka-broker/pull/1417 is an actual Go test.
If you look at the report in https://prow.knative.dev/view/gs/knative-prow/pr-logs/pull/knative-sandbox_eventing-kafka-broker/1417/reconciler-tests_eventing-kafka-broker_main/1549420232664158208, something interesting is that the metric in_flight_requests_count (recorded by the subscriber) is recorded at least 72 times with a different remote_address label value, which means that over the entire run there were at least 72 different connections.
This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.
Create a service that collects events from the sink, and make sure you set the maximum and minimum pod count to 1. Use stevendongatibm/scraper:2 as the image, or build your own by modifying https://github.com/steven0711dong/KafkaScraper; you can build it using ko. If you are using ibmcloud code engine:
Here, I'm creating the service as a ksvc, but the image should work with a regular service as well. Get the eventscollector service address from the above; it will be used when creating the sink.
Create the sink by following the instructions from https://github.com/steven0711dong/customizeEventDisplay.git. I am using ibmcloud code engine and created the service as a ksvc, but it should work with a regular k8s service as well.
See the explanation for each env var in https://github.com/steven0711dong/customizeEventDisplay.git
After you have these 2 services, you can create a Kafka source that references the sink you created. I recommend using a non-Strimzi Kafka instance and a topic with more than 30 partitions.
When you deploy the Kafka source dispatcher, you should tweak the following config variables.
To send events to your Kafka instance topic:
Clone this repo https://github.com/steven0711dong/KafkaProducer; the readme file contains instructions on how to send events.