Describe the bug
Our service uses the quarkus-kafka-streams@3.8.5 extension to handle Kafka messages. Everything was working fine until we added some new logic that takes more time to process each message. After that, the service no longer received or consumed any Kafka messages. We found this log shortly before the service stopped consuming:
2024-11-15T10:55:30.336+0000 WARN [or.ap.ka.cl.co.in.ConsumerCoordinator] (kafka-coordinator-heartbeat-thread | ...) [Consumer clientId=d48eb33d-243e-429a-9036-a990faf0da9c-StreamThread-2-consumer, groupId=...] consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records.
The service itself was not down; it just stopped consuming messages. I've searched for this error, and the results say that when "consumer poll timeout has expired" happens, the consumer is kicked out of its consumer group. Is that true? How can we deal with it? The log level WARN is also a bit confusing.
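For reference, this is roughly the change the log message seems to suggest, as far as we understand it. A minimal sketch in application.properties, assuming the kafka-streams.* pass-through namespace described in the Quarkus Kafka Streams guide (consumer.max.poll.interval.ms and consumer.max.poll.records are standard Kafka consumer settings; the values below are placeholders, not recommendations):

# Allow more time between successive poll() calls (consumer default is 300000 ms)
kafka-streams.consumer.max.poll.interval.ms=900000
# Hand fewer records to each poll() so one batch finishes well within that window (default 500)
kafka-streams.consumer.max.poll.records=100

Raising max.poll.interval.ms mostly buys time for the slower processing, while lowering max.poll.records shortens each batch; we are not sure which is preferable here.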
Any help or suggestion is appreciated, thanks in advance.
Expected behavior
No response
Actual behavior
No response
How to Reproduce?
No response
Output of uname -a or ver
No response
Output of java -version
No response
Quarkus version or git rev
No response
Build tool (ie. output of mvnw --version or gradlew --version)
No response
Additional information
No response