Closed · rodion-fisenko closed this issue 3 years ago
Why not change the log level of `o.a.k.c.c.internals.ConsumerCoordinator` to WARN?
@rodion-fisenko Please follow the suggestion from @garyrussell above by changing the log level for `ConsumerCoordinator` to WARN. For more information on the reasoning, see this. See also this commit, where a single call to `committed()` is now used instead of invoking it per partition. Closing this issue.
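In a Spring Boot application, the suggested log level change can be made via the standard `logging.level.*` configuration, e.g. in `application.yml`:

```yaml
logging:
  level:
    org.apache.kafka.clients.consumer.internals.ConsumerCoordinator: WARN
```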
We have been trying out version 3.1.4 to get rid of this issue, but we still see the same amount of logs being produced. I also noticed that micrometer does not provide up-to-date metrics when deploying our app with a recent spring-cloud-stream-binder-kafka dependency. Strangely, this can be fixed by deploying a second instance to Kubernetes: the missing metrics then suddenly appear all at once.
Hi!
I found an issue that produces a lot of redundant logs like this:

```
ConsumerCoordinator: [Consumer clientId=consumer-progress-pass-4, groupId=progress-pass] Found no committed offset for partition bingoblitz_app_progress_pass_steps_passed-1
```
After some investigation, I found that the issue can be reproduced under the following conditions:

- `spring.cloud.stream.kafka.binder.consumer-properties.enable.auto.commit` should be `false`;
- an `io.micrometer.core.instrument.MeterRegistry` bean exists in the application context.

In this case, logs like the one described above are produced every 60 seconds for each partition with no committed offset.
It happens because, after this PR (https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/pull/965) was merged, some metrics are collected every 60 seconds. As far as I can see in the code, the `KafkaBinderMetrics` class creates a new consumer to calculate the topic group lag. Offsets for this consumer cannot be auto-reset to proper values (0 in this case) the way they are for regular consumers.

As a workaround for this issue, we can use one of the following options:
- Set the `ERROR` level for the `org.apache.kafka.clients.consumer.internals.ConsumerCoordinator` logger;
- Set `spring.cloud.stream.kafka.binder.consumer-properties.enable.auto.commit: true` to enable auto commit. This option is not preferable, though, as it can lead to problems.

Do you have a plan to rework `KafkaBinderMetrics` to fix this issue?
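For reference, the second option can be expressed in `application.yml` (a config sketch, with the caveat above that enabling auto commit can lead to problems):

```yaml
spring:
  cloud:
    stream:
      kafka:
        binder:
          consumer-properties:
            enable.auto.commit: true
```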
**UPD** I found a more suitable solution to avoid this issue.

If we set `ContainerProperties.assignmentCommitOption` to `ALWAYS`, then uncommitted offsets will be automatically committed during the first partition assignment.

To do it, we should provide the following bean:
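A minimal sketch of such a bean, assuming spring-cloud-stream's `ListenerContainerCustomizer` hook and spring-kafka's `ContainerProperties.AssignmentCommitOption` enum (the class and bean names here are illustrative, not from the original comment):

```java
import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.ContainerProperties;

@Configuration
public class KafkaListenerContainerConfig {

    // Customize every binder-created listener container so that partitions
    // with no committed offset receive an initial commit on first assignment.
    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> assignmentCommitCustomizer() {
        return (container, destinationName, group) -> container
                .getContainerProperties()
                .setAssignmentCommitOption(ContainerProperties.AssignmentCommitOption.ALWAYS);
    }
}
```

With this in place, the binder-created consumers commit an initial offset for every assigned partition, so the lag-metrics consumer no longer finds partitions with no committed offset.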