ksdvishnukumar opened this issue 10 months ago
Hi @mhowlett & @edenhill ,
I have a couple of questions.
Once the Consume() method is called with either a cancellation token or a timeout > 0, does it start background polling of messages from the broker into the local queue? If yes, is there any option to control it?
Within an infinite while loop, if the first call to Consume uses a timeout > 0 or a CancellationToken, and afterwards I use Consume(TimeSpan.Zero), will that stop the message polling by the background thread?
If I pause the consumer by calling the Pause() method, will that stop the background thread from polling messages from the broker? I am aware that pausing the consumer will purge the local queue.
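For reference, the pause/resume pattern I am asking about looks roughly like this (a minimal sketch; the bootstrap server, topic name, and group id here are placeholders, not my real values):

```csharp
using System;
using Confluent.Kafka;

class PauseSketch
{
    static void Main()
    {
        var config = new ConsumerConfig
        {
            BootstrapServers = "localhost:9092", // placeholder
            GroupId = "pause-demo",              // placeholder
            AutoOffsetReset = AutoOffsetReset.Earliest
        };

        using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
        consumer.Subscribe("TestTopic");

        // First call with a timeout > 0: does this start background prefetching?
        var first = consumer.Consume(1000);

        // Question 3: does this stop the background fetch thread,
        // or only purge/stop delivery from the local queue?
        consumer.Pause(consumer.Assignment);

        // ... slow processing of the accumulated batch here ...

        consumer.Resume(consumer.Assignment);
        consumer.Close();
    }
}
```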
Description
The consumer is re-fetching the same messages, which inflates the outgoing-message count in the Event Hubs metrics. I have a C# consumer where I pull messages and store them into a list. Since Confluent Kafka Dotnet does not support batch message processing, I handle batching in my application: I read all available messages from the local queue into an in-memory list and then process it. To reduce the fetch frequency, as suggested by @mhowlett, I consume the first message using Consume(1000) and consume further messages using Consume(TimeSpan.Zero).
Before I started the consumer, 9861 messages had accumulated in the topic I am trying to consume from. BTW, the topic has 3 partitions.
How to reproduce
Sample Code to reproduce
Checklist
Please provide the following information:
"bootstrap.servers": "vishnu-1tu-bug-test.servicebus.windows.net:9093",
"group.id": "derotest-bug-23092023-4",
"client.id": "derotest-bug-23092023-4638310463137519262",
"auto.offset.reset": "earliest",
"enable.auto.offset.store": "True",
"enable.auto.commit": "False",
"auto.commit.interval.ms": "10000",
"enable.partition.eof": "True",
"connections.max.idle.ms": "180000",
"max.partition.fetch.bytes": "1048576",
"queued.max.messages.kbytes": "10240",
"partition.assignment.strategy": "cooperative-sticky",
"isolation.level": "read_uncommitted",
"socket.nagle.disable": "True",
"socket.keepalive.enable": "True",
"metadata.max.age.ms": "180000",
"session.timeout.ms": "30000",
"max.poll.interval.ms": "300000",
"dotnet.cancellation.delay.max.ms": "200",
"sasl.username": "$ConnectionString",
"sasl.password"
@mhowlett @edenhill Could you please tell me what is happening here?
The Event Hubs metrics show that the number of outgoing messages is higher than the 9861 messages I have to consume. From MS Support I got the information that the client is re-reading the same messages. I am not sure how to interpret the actual debug log messages.
Edited: Each message in the consumed topic is around 4.24 KB.
I see the lines below in the log. Is this what is causing the issue? If yes, how do I control it?
9/23/2023, 6:11:59.174 AM kafka-eventhub-test-0.(none) Consumer Log Handler : DEBUG|23-09-2023 06:11:59.1747870|FETCH|derotest-bug-23092023-4638310463137519149#consumer-10|[thrd:sasl_ssl://vishnu-1tu-test.servicebus.windows.net:9093/boot]: sasl_ssl://vishnu-1tu-test.servicebus.windows.net:9093/0: Topic TestTopic [0] in state active at offset 0 (leader epoch 0) (2506/100000 msgs, 10643/10240 kb queued, opv 4) is not fetchable: queued.max.messages.kbytes exceeded
9/23/2023, 6:12:00.078 AM kafka-eventhub-test-0.(none) Consumer Log Handler : DEBUG|23-09-2023 06:12:00.0781550|FETCH|derotest-bug-23092023-4638310463137519262#consumer-13|[thrd:sasl_ssl://vishnu-1tu-test.servicebus.windows.net:9093/boot]: sasl_ssl://vishnu-1tu-test.servicebus.windows.net:9093/0: Topic TestTopic [1] in state active at offset 0 (leader epoch 0) (2522/100000 msgs, 10711/10240 kb queued, opv 4) is not fetchable: queued.max.messages.kbytes exceeded
9/23/2023, 6:12:00.223 AM kafka-eventhub-test-0.(none) Consumer Log Handler : DEBUG|23-09-2023 06:12:00.2238018|FETCH|derotest-bug-23092023-4638310463137519410#consumer-12|[thrd:sasl_ssl://vishnu-1tu-test.servicebus.windows.net:9093/boot]: sasl_ssl://vishnu-1tu-test.servicebus.windows.net:9093/0: Topic TestTopic [2] in state active at offset 0 (leader epoch 0) (2513/100000 msgs, 10672/10240 kb queued, opv 4) is not fetchable: queued.max.messages.kbytes exceeded
9/23/2023, 6:12:08.062 AM kafka-eventhub-test-0.(none) Consumer Log Handler : DEBUG|23-09-2023 06:12:08.0628556|FETCH|derotest-bug-23092023-4638310463137519149#consumer-10|[thrd:sasl_ssl://vishnu-1tu-test.servicebus.windows.net:9093/boot]: sasl_ssl://vishnu-1tu-test.servicebus.windows.net:9093/0: Topic TestTopic [0] in state active at offset 500 (leader epoch 0) (2366/100000 msgs, 10048/10240 kb queued, opv 6) is not fetchable: queued.max.messages.kbytes exceeded
Even though I have set QueuedMaxMessagesKbytes to 10240, how did the consumer pull more than that threshold? In this case, is the excess getting purged?
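My current reading of the librdkafka configuration docs (please correct me if this is wrong) is that queued.max.messages.kbytes is a soft cap: it is checked before issuing the next fetch, but a fetch response already in flight is enqueued whole, so the local queue can overshoot the cap by up to roughly one fetch's worth, bounded by the fetch size settings. That would explain the 10711/10240 kb figures in the log without anything being purged. The two settings involved:

```csharp
using Confluent.Kafka;

// Sketch of the two settings as I understand their interaction;
// values are the ones from my config above.
var config = new ConsumerConfig
{
    // Soft cap on the local prefetch queue; fetching stops once exceeded,
    // but the queue is not purged and already-fetched data stays queued.
    QueuedMaxMessagesKbytes = 10240,

    // Per-partition fetch size; this bounds how far a single fetch
    // response can push the queue past the soft cap.
    MaxPartitionFetchBytes = 1048576
};
```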