kafbat / kafka-ui

Open-Source Web UI for managing Apache Kafka clusters
http://ui.docs.kafbat.io
Apache License 2.0

Unable to set up default serde #597

Closed. akshatraika-moment closed this issue 4 weeks ago.

akshatraika-moment commented 4 weeks ago


Describe the bug (actual behavior)

Hi, I am trying to set up Kafbat with a Confluent Schema Registry where every topic has its schemas registered in the Schema Registry. Here is my Docker Compose setup (an equivalent file-based config sketch follows the snippet):

  # This is a loose equivalent of the web UI offered by Confluent Cloud which manages our
  # Kafka cluster for our cloud infrastructure.
  kafka-ui:
    image: ghcr.io/kafbat/kafka-ui:latest
    ports:
      - "9090:8080"
    environment:
      - KAFKA_CLUSTERS_0_NAME=local
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka-broker-0:9091,kafka-broker-1:9092,kafka-broker-2:9093
      - KAFKA_CLUSTERS_0_ZOOKEEPER=zookeeper:2181
      - KAFKA_CLUSTERS_0_SCHEMAREGISTRY=http://schema-registry:8081
      - KAFKA_CLUSTERS_0_DEFAULTKEYSERDE=String
      - KAFKA_CLUSTERS_0_DEFAULTVALUESERDE=SchemaRegistry
      - KAFKA_CLUSTERS_0_KAFKACONNECT_0_NAME=local
      - KAFKA_CLUSTERS_0_KAFKACONNECT_0_ADDRESS=http://kafka-connect:8083
    networks:
      - mylocalnetwork
    depends_on:
      kafka-broker-1:
        condition: service_healthy
      kafka-broker:
        condition: service_healthy
      kafka-broker-2:
        condition: service_healthy
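
For reference, the serde-related environment variables above should map to kafka-ui's file-based configuration roughly as follows. This is a minimal sketch assuming kafbat-ui's documented YAML layout; the mount path in the comment is illustrative, not taken from this issue:

    # Hypothetical application config (e.g. a YAML file mounted into the kafka-ui container)
    kafka:
      clusters:
        - name: local
          bootstrapServers: kafka-broker-0:9091,kafka-broker-1:9092,kafka-broker-2:9093
          schemaRegistry: http://schema-registry:8081
          defaultKeySerde: String            # name of a registered serde
          defaultValueSerde: SchemaRegistry  # name of a registered serde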

Even after specifying the above, I see two issues:

Note: we use Protobuf for our schemas in the Confluent Schema Registry. The relevant compose service:

  schema-registry:
    image: confluentinc/cp-schema-registry:7.5.2
    container_name: schema-registry
    depends_on:
      kafka-broker-1:
        condition: service_healthy
      kafka-broker:
        condition: service_healthy
      kafka-broker-2:
        condition: service_healthy
    networks:
      mylocalnetwork:
        ipv4_address: 172.16.238.123
    ports:

Expected behavior

I was expecting the default serde to be String for keys and SchemaRegistry for values, in both the topic view and the produce window, as per the docs: https://ui.docs.kafbat.io/configuration/serialization-serde#setting-serdes-for-specific-topics
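
The same docs page also describes selecting serdes per topic via topic patterns. A minimal sketch in YAML form (the `.*` patterns are illustrative, chosen here to cover every topic):

    kafka:
      clusters:
        - name: local
          serde:
            - name: String
              topicKeysPattern: ".*"       # use the String serde for all topic keys
            - name: SchemaRegistry
              topicValuesPattern: ".*"     # use the SchemaRegistry serde for all topic values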

Your installation details

Local Docker setup. Kafka brokers: confluentinc/cp-kafka:7.3.0

Steps to reproduce

Build the Docker Compose stack and run it.
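
Before suspecting the UI, it can help to confirm the registry actually holds subjects for the topics in question. A quick check against the Schema Registry REST API (the host port 8081 and the subject name are assumptions, inferred from the compose file and the topic seen in the logs):

    # List all registered subjects
    curl http://localhost:8081/subjects
    # Fetch the latest schema for one topic's value subject (TopicNameStrategy naming)
    curl http://localhost:8081/subjects/eoms.sequencer-value/versions/latest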

Screenshots

[Screenshot attached: "Screenshot 2024-10-09 at 12 24 06 PM"]

Logs

Standard Commons Logging discovery in action with spring-jcl: please remove commons-logging.jar from classpath in order to avoid potential conflicts
default THREAD FACTORY


[kafka-ui startup ASCII art banner]

2024-10-09 16:18:38,894 INFO [main] i.k.u.KafkaUiApplication: Starting KafkaUiApplication using Java 17.0.10 with PID 1 (/api.jar started by kafkaui in /)
2024-10-09 16:18:38,909 DEBUG [main] i.k.u.KafkaUiApplication: Running with Spring Boot v3.1.9, Spring v6.0.17
2024-10-09 16:18:38,914 INFO [main] i.k.u.KafkaUiApplication: No active profile set, falling back to 1 default profile: "default"
2024-10-09 16:18:51,397 DEBUG [main] i.k.u.s.SerdesInitializer: Configuring serdes for cluster local
2024-10-09 16:18:51,608 INFO [main] i.c.k.s.KafkaAvroDeserializerConfig: KafkaAvroDeserializerConfig values: auto.register.schemas = true avro.reflection.allow.null = false avro.use.logical.type.converters = true basic.auth.credentials.source = URL basic.auth.user.info = [hidden] bearer.auth.cache.expiry.buffer.seconds = 300 bearer.auth.client.id = null bearer.auth.client.secret = null bearer.auth.credentials.source = STATIC_TOKEN bearer.auth.custom.provider.class = null bearer.auth.identity.pool.id = null bearer.auth.issuer.endpoint.url = null bearer.auth.logical.cluster = null bearer.auth.scope = null bearer.auth.scope.claim.name = scope bearer.auth.sub.claim.name = sub bearer.auth.token = [hidden] context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy http.connect.timeout.ms = 60000 http.read.timeout.ms = 60000 id.compatibility.strict = true key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy latest.cache.size = 1000 latest.cache.ttl.sec = -1 latest.compatibility.strict = true max.schemas.per.subject = 1000 normalize.schemas = false proxy.host = proxy.port = -1 rule.actions = [] rule.executors = [] rule.service.loader.enable = true schema.format = null schema.reflection = false schema.registry.basic.auth.user.info = [hidden] schema.registry.ssl.cipher.suites = null schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3] schema.registry.ssl.endpoint.identification.algorithm = https schema.registry.ssl.engine.factory.class = null schema.registry.ssl.key.password = null schema.registry.ssl.keymanager.algorithm = SunX509 schema.registry.ssl.keystore.certificate.chain = null schema.registry.ssl.keystore.key = null schema.registry.ssl.keystore.location = null schema.registry.ssl.keystore.password = null schema.registry.ssl.keystore.type = JKS schema.registry.ssl.protocol = TLSv1.3 schema.registry.ssl.provider = null schema.registry.ssl.secure.random.implementation = null schema.registry.ssl.trustmanager.algorithm = PKIX schema.registry.ssl.truststore.certificates = null schema.registry.ssl.truststore.location = null schema.registry.ssl.truststore.password = null schema.registry.ssl.truststore.type = JKS schema.registry.url = [wontbeused] specific.avro.key.type = null specific.avro.reader = false specific.avro.value.type = null use.latest.version = false use.latest.with.metadata = null use.schema.id = -1 value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy

2024-10-09 16:18:53,052 INFO [main] o.s.b.a.s.r.ReactiveUserDetailsServiceAutoConfiguration:

Using generated security password: 9ea148fe-8677-4a04-8a1c-8b080936ed6c

2024-10-09 16:18:53,431 WARN [main] i.k.u.c.a.DisabledAuthSecurityConfig: Authentication is disabled. Access will be unrestricted.
2024-10-09 16:18:54,026 INFO [main] o.s.b.a.e.w.EndpointLinksResolver: Exposing 3 endpoint(s) beneath base path '/actuator'
2024-10-09 16:18:55,329 INFO [main] o.s.b.w.e.n.NettyWebServer: Netty started on port 8080
2024-10-09 16:18:55,396 INFO [main] i.k.u.KafkaUiApplication: Started KafkaUiApplication in 19.748 seconds (process running for 22.118)
2024-10-09 16:18:55,921 DEBUG [parallel-2] i.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: local
2024-10-09 16:18:55,940 INFO [parallel-2] o.a.k.c.a.AdminClientConfig: AdminClientConfig values: auto.include.jmx.reporter = true bootstrap.servers = [kafka-broker-1:29091, kafka-broker:29092, kafka-broker-2:29093] client.dns.lookup = use_all_dns_ips client.id = kafbat-ui-admin-1728490735-1 connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS

2024-10-09 16:18:56,064 INFO [parallel-2] o.a.k.c.u.AppInfoParser: Kafka version: 3.5.2
2024-10-09 16:18:56,064 INFO [parallel-2] o.a.k.c.u.AppInfoParser: Kafka commitId: 8f0b0b0d0466632b
2024-10-09 16:18:56,064 INFO [parallel-2] o.a.k.c.u.AppInfoParser: Kafka startTimeMs: 1728490736059
2024-10-09 16:18:58,108 DEBUG [parallel-2] i.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster: local
2024-10-09 16:18:58,138 DEBUG [boundedElastic-1] i.k.u.e.RangePollingEmitter: Starting polling for ConsumerPosition[pollingMode=LATEST, topic=eoms.sequencer, partitions=[], timestamp=null, offsets=null]
2024-10-09 16:18:58,178 INFO [boundedElastic-1] o.a.k.c.c.ConsumerConfig: ConsumerConfig values: allow.auto.create.topics = false auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [kafka-broker-1:29091, kafka-broker:29092, kafka-broker-2:29093] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = kafbat-ui-consumer-1728490738156 client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = null group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.BytesDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.BytesDeserializer

2024-10-09 16:18:58,180 DEBUG [boundedElastic-1] i.k.u.e.EnhancedConsumer: [Consumer clientId=kafbat-ui-consumer-1728490738156, groupId=null] Initializing the Kafka consumer
2024-10-09 16:18:58,234 INFO [boundedElastic-1] o.a.k.c.u.AppInfoParser: Kafka version: 3.5.2
2024-10-09 16:18:58,234 INFO [boundedElastic-1] o.a.k.c.u.AppInfoParser: Kafka commitId: 8f0b0b0d0466632b
2024-10-09 16:18:58,234 INFO [boundedElastic-1] o.a.k.c.u.AppInfoParser: Kafka startTimeMs: 1728490738233
2024-10-09 16:18:58,234 DEBUG [boundedElastic-1] i.k.u.e.EnhancedConsumer: [Consumer clientId=kafbat-ui-consumer-1728490738156, groupId=null] Kafka consumer initialized
2024-10-09 16:18:58,296 INFO [boundedElastic-1] o.a.k.c.Metadata: [Consumer clientId=kafbat-ui-consumer-1728490738156, groupId=null] Cluster ID: OJkAGpnXRaaAooIoOfidmg
2024-10-09 16:18:58,342 DEBUG [boundedElastic-1] i.k.u.e.RangePollingEmitter: Starting from offsets {}
2024-10-09 16:18:58,344 DEBUG [boundedElastic-1] i.k.u.e.RangePollingEmitter: Polling finished
2024-10-09 16:18:58,348 INFO [boundedElastic-1] o.a.k.c.m.Metrics: Metrics scheduler closed
2024-10-09 16:18:58,348 INFO [boundedElastic-1] o.a.k.c.m.Metrics: Closing reporter org.apache.kafka.common.metrics.JmxReporter
2024-10-09 16:18:58,348 INFO [boundedElastic-1] o.a.k.c.m.Metrics: Metrics reporters closed
2024-10-09 16:18:58,356 INFO [boundedElastic-1] o.a.k.c.u.AppInfoParser: App info kafka.consumer for kafbat-ui-consumer-1728490738156 unregistered
2024-10-09 16:18:58,356 DEBUG [boundedElastic-1] i.k.u.e.EnhancedConsumer: [Consumer clientId=kafbat-ui-consumer-1728490738156, groupId=null] Kafka consumer has been closed
2024-10-09 16:19:12,634 DEBUG [boundedElastic-2] i.k.u.e.RangePollingEmitter: Starting polling for ConsumerPosition[pollingMode=LATEST, topic=eoms.sequencer, partitions=[], timestamp=null, offsets=null]
2024-10-09 16:19:12,636 INFO [boundedElastic-2] o.a.k.c.c.ConsumerConfig: ConsumerConfig values: allow.auto.create.topics = false auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [kafka-broker-1:29091, kafka-broker:29092, kafka-broker-2:29093] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = kafbat-ui-consumer-1728490752634 client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = null group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.BytesDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.BytesDeserializer

2024-10-09 16:19:12,636 DEBUG [boundedElastic-2] i.k.u.e.EnhancedConsumer: [Consumer clientId=kafbat-ui-consumer-1728490752634, groupId=null] Initializing the Kafka consumer
2024-10-09 16:19:12,669 INFO [boundedElastic-2] o.a.k.c.u.AppInfoParser: Kafka version: 3.5.2
2024-10-09 16:19:12,670 INFO [boundedElastic-2] o.a.k.c.u.AppInfoParser: Kafka commitId: 8f0b0b0d0466632b
2024-10-09 16:19:12,670 INFO [boundedElastic-2] o.a.k.c.u.AppInfoParser: Kafka startTimeMs: 1728490752669
2024-10-09 16:19:12,671 DEBUG [boundedElastic-2] i.k.u.e.EnhancedConsumer: [Consumer clientId=kafbat-ui-consumer-1728490752634, groupId=null] Kafka consumer initialized
2024-10-09 16:19:12,678 INFO [boundedElastic-2] o.a.k.c.Metadata: [Consumer clientId=kafbat-ui-consumer-1728490752634, groupId=null] Cluster ID: OJkAGpnXRaaAooIoOfidmg
2024-10-09 16:19:12,688 DEBUG [boundedElastic-2] i.k.u.e.RangePollingEmitter: Starting from offsets {}
2024-10-09 16:19:12,689 DEBUG [boundedElastic-2] i.k.u.e.RangePollingEmitter: Polling finished
2024-10-09 16:19:12,689 INFO [boundedElastic-2] o.a.k.c.m.Metrics: Metrics scheduler closed
2024-10-09 16:19:12,690 INFO [boundedElastic-2] o.a.k.c.m.Metrics: Closing reporter org.apache.kafka.common.metrics.JmxReporter
2024-10-09 16:19:12,690 INFO [boundedElastic-2] o.a.k.c.m.Metrics: Metrics reporters closed
2024-10-09 16:19:12,692 INFO [boundedElastic-2] o.a.k.c.u.AppInfoParser: App info kafka.consumer for kafbat-ui-consumer-1728490752634 unregistered
2024-10-09 16:19:12,692 DEBUG [boundedElastic-2] i.k.u.e.EnhancedConsumer: [Consumer clientId=kafbat-ui-consumer-1728490752634, groupId=null] Kafka consumer has been closed
2024-10-09 16:19:15,196 DEBUG [boundedElastic-2] i.k.u.e.RangePollingEmitter: Starting polling for ConsumerPosition[pollingMode=LATEST, topic=eoms.sequencer, partitions=[], timestamp=null, offsets=null]
2024-10-09 16:19:15,200 INFO [boundedElastic-2] o.a.k.c.c.ConsumerConfig: ConsumerConfig values: allow.auto.create.topics = false auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [kafka-broker-1:29091, kafka-broker:29092, kafka-broker-2:29093] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = kafbat-ui-consumer-1728490755198 client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = null group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.BytesDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.BytesDeserializer

2024-10-09 16:19:15,200 DEBUG [boundedElastic-2] i.k.u.e.EnhancedConsumer: [Consumer clientId=kafbat-ui-consumer-1728490755198, groupId=null] Initializing the Kafka consumer
2024-10-09 16:19:15,211 INFO [boundedElastic-2] o.a.k.c.u.AppInfoParser: Kafka version: 3.5.2
2024-10-09 16:19:15,211 INFO [boundedElastic-2] o.a.k.c.u.AppInfoParser: Kafka commitId: 8f0b0b0d0466632b
2024-10-09 16:19:15,211 INFO [boundedElastic-2] o.a.k.c.u.AppInfoParser: Kafka startTimeMs: 1728490755211
2024-10-09 16:19:15,213 DEBUG [boundedElastic-2] i.k.u.e.EnhancedConsumer: [Consumer clientId=kafbat-ui-consumer-1728490755198, groupId=null] Kafka consumer initialized
2024-10-09 16:19:15,226 INFO [boundedElastic-2] o.a.k.c.Metadata: [Consumer clientId=kafbat-ui-consumer-1728490755198, groupId=null] Cluster ID: OJkAGpnXRaaAooIoOfidmg
2024-10-09 16:19:15,232 DEBUG [boundedElastic-2] i.k.u.e.RangePollingEmitter: Starting from offsets {}
2024-10-09 16:19:15,234 DEBUG [boundedElastic-2] i.k.u.e.RangePollingEmitter: Polling finished
2024-10-09 16:19:15,235 INFO [boundedElastic-2] o.a.k.c.m.Metrics: Metrics scheduler closed
2024-10-09 16:19:15,235 INFO [boundedElastic-2] o.a.k.c.m.Metrics: Closing reporter org.apache.kafka.common.metrics.JmxReporter
2024-10-09 16:19:15,235 INFO [boundedElastic-2] o.a.k.c.m.Metrics: Metrics reporters closed
2024-10-09 16:19:15,237 INFO [boundedElastic-2] o.a.k.c.u.AppInfoParser: App info kafka.consumer for kafbat-ui-consumer-1728490755198 unregistered
2024-10-09 16:19:15,237 DEBUG [boundedElastic-2] i.k.u.e.EnhancedConsumer: [Consumer clientId=kafbat-ui-consumer-1728490755198, groupId=null] Kafka consumer has been closed
2024-10-09 16:19:25,391 DEBUG [parallel-11] i.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: local
2024-10-09 16:19:25,436 DEBUG [parallel-12] i.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster: local
2024-10-09 16:19:55,403 DEBUG [parallel-7] i.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: local
2024-10-09 16:19:55,458 DEBUG [parallel-6] i.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster: local
2024-10-09 16:20:25,389 DEBUG [parallel-3] i.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: local
2024-10-09 16:20:25,442 DEBUG [parallel-1] i.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster: local
2024-10-09 16:20:55,398 DEBUG [parallel-5] i.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: local
2024-10-09 16:20:55,452 DEBUG [parallel-6] i.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster: local
2024-10-09 16:21:25,388 DEBUG [parallel-7] i.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: local
2024-10-09 16:21:25,461 DEBUG [parallel-8] i.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster: local
2024-10-09 16:21:55,399 DEBUG [parallel-9] i.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: local
2024-10-09 16:21:55,465 DEBUG [parallel-10] i.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster: local
2024-10-09 16:22:25,387 DEBUG [parallel-11] i.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: local
2024-10-09 16:22:25,442 DEBUG [parallel-12] i.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster: local
2024-10-09 16:22:55,389 DEBUG [parallel-1] i.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: local
2024-10-09 16:22:55,446 DEBUG [parallel-2] i.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster: local
2024-10-09 16:23:25,386 DEBUG [parallel-3] i.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: local
2024-10-09 16:23:25,450 DEBUG [parallel-1] i.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster: local
2024-10-09 16:23:50,572 DEBUG [boundedElastic-4] i.k.u.e.RangePollingEmitter: Starting polling for ConsumerPosition[pollingMode=LATEST, topic=audit.fix_log, partitions=[], timestamp=null, offsets=null]
2024-10-09 16:23:50,580 INFO [boundedElastic-4] o.a.k.c.c.ConsumerConfig: ConsumerConfig values: allow.auto.create.topics = false auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [kafka-broker-1:29091, kafka-broker:29092, kafka-broker-2:29093] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = kafbat-ui-consumer-1728491030577 client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = null group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.BytesDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.BytesDeserializer

2024-10-09 16:23:50,580 DEBUG [boundedElastic-4] i.k.u.e.EnhancedConsumer: [Consumer clientId=kafbat-ui-consumer-1728491030577, groupId=null] Initializing the Kafka consumer
2024-10-09 16:23:50,606 INFO [boundedElastic-4] o.a.k.c.u.AppInfoParser: Kafka version: 3.5.2
2024-10-09 16:23:50,606 INFO [boundedElastic-4] o.a.k.c.u.AppInfoParser: Kafka commitId: 8f0b0b0d0466632b
2024-10-09 16:23:50,606 INFO [boundedElastic-4] o.a.k.c.u.AppInfoParser: Kafka startTimeMs: 1728491030605
2024-10-09 16:23:50,607 DEBUG [boundedElastic-4] i.k.u.e.EnhancedConsumer: [Consumer clientId=kafbat-ui-consumer-1728491030577, groupId=null] Kafka consumer initialized
2024-10-09 16:23:50,621 INFO [boundedElastic-4] o.a.k.c.Metadata: [Consumer clientId=kafbat-ui-consumer-1728491030577, groupId=null] Cluster ID: OJkAGpnXRaaAooIoOfidmg
2024-10-09 16:23:50,664 DEBUG [boundedElastic-4] i.k.u.e.RangePollingEmitter: Starting from offsets {}
2024-10-09 16:23:50,666 DEBUG [boundedElastic-4] i.k.u.e.RangePollingEmitter: Polling finished
2024-10-09 16:23:50,671 INFO [boundedElastic-4] o.a.k.c.m.Metrics: Metrics scheduler closed
2024-10-09 16:23:50,671 INFO [boundedElastic-4] o.a.k.c.m.Metrics: Closing reporter org.apache.kafka.common.metrics.JmxReporter
2024-10-09 16:23:50,671 INFO [boundedElastic-4] o.a.k.c.m.Metrics: Metrics reporters closed
2024-10-09 16:23:50,678 INFO [boundedElastic-4] o.a.k.c.u.AppInfoParser: App info kafka.consumer for kafbat-ui-consumer-1728491030577 unregistered
2024-10-09 16:23:50,678 DEBUG [boundedElastic-4] i.k.u.e.EnhancedConsumer: [Consumer clientId=kafbat-ui-consumer-1728491030577, groupId=null] Kafka consumer has been closed
2024-10-09 16:23:55,390 DEBUG [parallel-8] i.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: local
2024-10-09 16:23:55,418 DEBUG [parallel-9] i.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster: local
2024-10-09 16:23:57,234 INFO [kafka-admin-client-thread | kafbat-ui-admin-1728490735-1] o.a.k.c.NetworkClient: [AdminClient clientId=kafbat-ui-admin-1728490735-1] Node -2 disconnected.
2024-10-09 16:24:25,392 DEBUG [parallel-10] i.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: local
2024-10-09 16:24:25,442 DEBUG [parallel-11] i.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster: local
2024-10-09 16:24:55,386 DEBUG [parallel-12] i.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: local
2024-10-09 16:24:55,428 DEBUG [parallel-4] i.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster: local
2024-10-09 16:25:25,389 DEBUG [parallel-2] i.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: local
2024-10-09 16:25:25,456 DEBUG [parallel-3] i.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster: local
2024-10-09 16:25:55,387 DEBUG [parallel-4] i.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: local
2024-10-09 16:25:55,429 DEBUG [parallel-5] i.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster: local
2024-10-09 16:26:25,389 DEBUG [parallel-6] i.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: local
2024-10-09 16:26:25,446 DEBUG [parallel-9] i.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster: local

Additional context

No response

github-actions[bot] commented 4 weeks ago

Hi akshatraika-moment! 👋

Welcome, and thank you for opening your first issue in the repo!

Please wait for triaging by our maintainers.

As development is carried out in our spare time, you can support us by sponsoring our activities or even funding the development of specific issues. Sponsorship link

If you plan to raise a PR for this issue, please take a look at our contributing guide.