strimzi / strimzi-kafka-operator

Apache Kafka® running on Kubernetes
https://strimzi.io/
Apache License 2.0

Strimzi ADFS - Status: "Invalid Token" #2738

Closed daxtergithub closed 4 years ago

daxtergithub commented 4 years ago

Hi, we are trying to configure OAuth 2.0/OpenID Connect authentication with ADFS. We configured the issuer URI and the JWKS endpoint URI in the deployment script as provided by the ADFS team, along with the (self-signed) ADFS CA certificate. The producer application is able to obtain a token from the ADFS server (client-credentials grant), but then receives the exception {Status: "Invalid Token"}. The same exception is logged in the Kafka broker logs. Not sure if we are missing anything. Deployment script:

listeners:
  plain: {}
  tls: {}
  external:
    type: loadbalancer
    authentication:
      type: oauth
      clientId: ${KAFKA_CLIENT_ID}
      clientSecret:
        key: secret
        secretName: ${KAFKA_SECRET}
      validIssuerUri: https://<ServerIP:Port>/adfs/services/trust
      jwksEndpointUri: https://<ServerIP:Port>/adfs/discovery/keys
      userNameClaim: preferred_username
      disableTlsHostnameVerification: true
      tlsTrustedCertificates:
        - secretName: ${CA_TRUST}      
          certificate: ${CA_CERT}

We followed the Keycloak OAuth configuration example provided by Strimzi.

Thanks
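One thing that may be worth checking (an assumption on my part, not something confirmed in this thread): ADFS can use different issuer values for OAuth access tokens than the WS-Trust endpoint URL configured as validIssuerUri, and an iss mismatch would fail validation with exactly this kind of error. A minimal diagnostic sketch for decoding the token's claims - the example token and issuer value are made up:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the payload of a JWT without verifying the signature
    (diagnostic use only - never skip verification in production)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a throwaway token just to exercise the helper; with a real token,
# pass the access token string the producer received from ADFS instead.
fake_payload = base64.urlsafe_b64encode(
    json.dumps({"iss": "https://adfs.example.com/adfs"}).encode()
).decode().rstrip("=")
token = "eyJhbGciOiJub25lIn0." + fake_payload + ".sig"

print(jwt_claims(token)["iss"])  # compare this value against validIssuerUri
```

If the decoded iss claim differs from the validIssuerUri in the listener configuration, that alone explains the rejection.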

scholzj commented 4 years ago

@mstruk Any ideas? I have no clue what ADFS is or whether it is supported.

mstruk commented 4 years ago

@daxtergithub You're probably the first to try integrating Strimzi OAuth with ADFS - just so you know how much experience we have with it :)

Could you add the following logging definition after listeners (under spec.kafka):

    logging:
      type: inline
      loggers:
        log4j.logger.io.strimzi: "DEBUG"

And share the output? It might give us some useful information about what the problem is.
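Besides trusting the CA, validation also requires that the broker can fetch the JWKS document and find a key whose kid matches the one in the token header; a TLS failure here would likewise surface as a failed validation. A small diagnostic sketch, where the endpoint URL and CA filename are placeholders taken from the listener configuration, not verified values:

```python
import json
import ssl
import urllib.request

def jwks_key_ids(jwks: dict) -> list:
    """Return the key IDs (kid) advertised by a JWKS document."""
    return [k["kid"] for k in jwks.get("keys", []) if "kid" in k]

# Fetching the real document requires the ADFS CA in a trust store; the URL
# and CA path below are placeholders mirroring the listener configuration:
#
#   ctx = ssl.create_default_context(cafile="adfs-ca.crt")
#   with urllib.request.urlopen(
#       "https://<ServerIP:Port>/adfs/discovery/keys", context=ctx
#   ) as resp:
#       jwks = json.load(resp)
#   print(jwks_key_ids(jwks))
#
# A quick offline check of the helper itself:
sample = {"keys": [{"kty": "RSA", "kid": "abc123"}, {"kty": "RSA"}]}
print(jwks_key_ids(sample))  # -> ['abc123']
```

If the returned key IDs do not include the kid from the token header, the broker cannot resolve the signing key and the token will be rejected.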

daxtergithub commented 4 years ago

@mstruk Thanks for your help. Logs attached below:

Waiting for the TLS sidecar to get ready
TLS sidecar is not ready yet, waiting for another 1 second
STRIMZI_BROKER_ID=0
Preparing truststore for replication listener
Adding /opt/kafka/cluster-ca-certs/ca.crt to truststore /tmp/kafka/cluster.truststore.p12 with alias ca
Certificate was added to keystore
Preparing truststore for replication listener is complete
Looking for the right CA
Found the right CA: /opt/kafka/cluster-ca-certs/ca.crt
Preparing keystore for replication and clienttls listener
Preparing keystore for replication and clienttls listener is complete
Preparing truststore for clienttls listener
Adding /opt/kafka/client-ca-certs/ca.crt to truststore /tmp/kafka/clients.truststore.p12 with alias ca
Certificate was added to keystore
Preparing truststore for clienttls listener is complete
Preparing truststore for OAuth on external listener
Adding /opt/kafka/certificates/oauth-external-9094-certs/ca-truststore-0/tls.crt to truststore /tmp/kafka/oauth-external-9094.truststore.p12 with alias oauth-0
Certificate was added to keystore
Preparing truststore for OAuth on external listener is complete
Starting Kafka with configuration:
##############################
##############################
# This file is automatically generated by the Strimzi Cluster Operator
# Any changes to this file will be ignored and overwritten!
##############################
##############################

##########
# Broker ID
##########
broker.id=0

##########
# Zookeeper
##########
zookeeper.connect=localhost:2181

##########
# Kafka message logs configuration
##########
log.dirs=/var/lib/kafka/data/kafka-log0

##########
# Replication listener
##########
listener.name.replication-9091.ssl.keystore.location=/tmp/kafka/cluster.keystore.p12
listener.name.replication-9091.ssl.keystore.password=[hidden]
listener.name.replication-9091.ssl.keystore.type=PKCS12
listener.name.replication-9091.ssl.truststore.location=/tmp/kafka/cluster.truststore.p12
listener.name.replication-9091.ssl.truststore.password=[hidden]
listener.name.replication-9091.ssl.truststore.type=PKCS12
listener.name.replication-9091.ssl.client.auth=required

##########
# Plain listener
##########

##########
# TLS listener
##########
listener.name.tls-9093.ssl.keystore.location=/tmp/kafka/cluster.keystore.p12
listener.name.tls-9093.ssl.keystore.password=[hidden]
listener.name.tls-9093.ssl.keystore.type=PKCS12

##########
# External listener
##########
listener.name.external-9094.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler
listener.name.external-9094.oauthbearer.sasl.jaas.config=[hidden]
listener.name.external-9094.sasl.enabled.mechanisms=OAUTHBEARER

listener.name.external-9094.ssl.keystore.location=/tmp/kafka/cluster.keystore.p12
listener.name.external-9094.ssl.keystore.password=[hidden]
listener.name.external-9094.ssl.keystore.type=PKCS12

##########
# Common listener configuration
##########
listeners=REPLICATION-9091://0.0.0.0:9091,PLAIN-9092://0.0.0.0:9092,TLS-9093://0.0.0.0:9093,EXTERNAL-9094://0.0.0.0:9094
advertised.listeners=REPLICATION-9091://my-cluster-kafka-0.my-cluster-kafka-brokers.kafka-adfs.svc:9091,PLAIN-9092://my-cluster-kafka-0.my-cluster-kafka-brokers.kafka-adfs.svc:9092,TLS-9093://my-cluster-kafka-0.my-cluster-kafka-brokers.kafka-adfs.svc:9093,EXTERNAL-9094://13.75.172.192:9094
listener.security.protocol.map=REPLICATION-9091:SSL,PLAIN-9092:PLAINTEXT,TLS-9093:SSL,EXTERNAL-9094:SASL_SSL
inter.broker.listener.name=REPLICATION-9091
sasl.enabled.mechanisms=
ssl.secure.random.implementation=SHA1PRNG
ssl.endpoint.identification.algorithm=HTTPS

##########
# User provided configuration
##########
default.replication.factor=1
log.message.format.version=2.4.0
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
2020-03-23 21:37:35,807 INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) [main]
2020-03-23 21:37:36,930 DEBUG Metric kafka.server:type=KafkaServer,name=BrokerState added  (io.strimzi.kafka.agent.KafkaAgent) [main]
2020-03-23 21:37:36,934 INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) [main]
2020-03-23 21:37:36,935 INFO starting (kafka.server.KafkaServer) [main]
2020-03-23 21:37:36,936 INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer) [main]
2020-03-23 21:37:36,980 INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient) [main]
2020-03-23 21:37:36,993 INFO Client environment:zookeeper.version=3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 10/08/2019 20:18 GMT (org.apache.zookeeper.ZooKeeper) [main]
2020-03-23 21:37:36,993 INFO Client environment:host.name=my-cluster-kafka-0.my-cluster-kafka-brokers.kafka-adfs.svc.cluster.local (org.apache.zookeeper.ZooKeeper) [main]
2020-03-23 21:37:36,993 INFO Client environment:java.version=1.8.0_242 (org.apache.zookeeper.ZooKeeper) [main]
2020-03-23 21:37:36,994 INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper) [main]
2020-03-23 21:37:36,994 INFO Client environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.242.b08-0.el7_7.x86_64/jre (org.apache.zookeeper.ZooKeeper) [main]
2020-03-23 21:37:36,994 INFO Client environment:java.class.path=/opt/kafka/bin/../libs/activation-1.1.1.jar:/opt/kafka/bin/../libs/annotations-13.0.jar:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/kafka/bin/../libs/bcpkix-jdk15on-1.60.jar:/opt/kafka/bin/../libs/bcprov-jdk15on-1.60.jar:/opt/kafka/bin/../libs/commons-cli-1.4.jar:/opt/kafka/bin/../libs/commons-lang-2.6.jar:/opt/kafka/bin/../libs/commons-lang3-3.8.1.jar:/opt/kafka/bin/../libs/connect-api-2.4.0.jar:/opt/kafka/bin/../libs/connect-basic-auth-extension-2.4.0.jar:/opt/kafka/bin/../libs/connect-file-2.4.0.jar:/opt/kafka/bin/../libs/connect-json-2.4.0.jar:/opt/kafka/bin/../libs/connect-mirror-2.4.0.jar:/opt/kafka/bin/../libs/connect-mirror-client-2.4.0.jar:/opt/kafka/bin/../libs/connect-runtime-2.4.0.jar:/opt/kafka/bin/../libs/connect-transforms-2.4.0.jar:/opt/kafka/bin/../libs/gson-2.8.6.jar:/opt/kafka/bin/../libs/guava-20.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0.jar:/opt/kafka/bin/../libs/jackson-annotations-2.10.0.jar:/opt/kafka/bin/../libs/jackson-core-2.10.0.jar:/opt/kafka/bin/../libs/jackson-databind-2.10.0.jar:/opt/kafka/bin/../libs/jackson-dataformat-csv-2.10.0.jar:/opt/kafka/bin/../libs/jackson-datatype-jdk8-2.10.0.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.10.0.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.0.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.10.0.jar:/opt/kafka/bin/../libs/jackson-module-paranamer-2.10.0.jar:/opt/kafka/bin/../libs/jackson-module-scala_2.12-2.10.0.jar:/opt/kafka/bin/../libs/jaeger-client-1.1.0.jar:/opt/kafka/bin/../libs/jaeger-core-1.1.0.jar:/opt/kafka/bin/../libs/jaeger-thrift-1.1.0.jar:/opt/kafka/bin/../libs/jaeger-tracerresolver-1.1.0.jar:/opt/kafka/bin/../libs/jakarta.activation-api-1.2.1.jar:/opt/kafka/bin/../libs/jakarta.annotation-a
pi-1.3.4.jar:/opt/kafka/bin/../libs/jakarta.inject-2.5.0.jar:/opt/kafka/bin/../libs/jakarta.ws.rs-api-2.1.5.jar:/opt/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/opt/kafka/bin/../libs/javassist-3.22.0-CR2.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/opt/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/kafka/bin/../libs/jersey-client-2.28.jar:/opt/kafka/bin/../libs/jersey-common-2.28.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.28.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.28.jar:/opt/kafka/bin/../libs/jersey-hk2-2.28.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.28.jar:/opt/kafka/bin/../libs/jersey-server-2.28.jar:/opt/kafka/bin/../libs/jetty-client-9.4.20.v20190813.jar:/opt/kafka/bin/../libs/jetty-continuation-9.4.20.v20190813.jar:/opt/kafka/bin/../libs/jetty-http-9.4.20.v20190813.jar:/opt/kafka/bin/../libs/jetty-io-9.4.20.v20190813.jar:/opt/kafka/bin/../libs/jetty-security-9.4.20.v20190813.jar:/opt/kafka/bin/../libs/jetty-server-9.4.20.v20190813.jar:/opt/kafka/bin/../libs/jetty-servlet-9.4.20.v20190813.jar:/opt/kafka/bin/../libs/jetty-servlets-9.4.20.v20190813.jar:/opt/kafka/bin/../libs/jetty-util-9.4.20.v20190813.jar:/opt/kafka/bin/../libs/jmx_prometheus_javaagent-0.12.0.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/kafka/bin/../libs/json-smart-1.1.1.jar:/opt/kafka/bin/../libs/jsonevent-layout-1.7.jar:/opt/kafka/bin/../libs/kafka-agent.jar:/opt/kafka/bin/../libs/kafka-clients-2.4.0.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-2.4.0.jar:/opt/kafka/bin/../libs/kafka-oauth-client-0.2.0.jar:/opt/kafka/bin/../libs/kafka-oauth-common-0.2.0.jar:/opt/kafka/bin/../libs/kafka-oauth-server-0.2.0.jar:/opt/kafka/bin/../libs/kafka-streams-2.4.0.jar:/opt/kafka/bin/../libs/kafka-streams-examples-2.4.0.jar:/opt/kafka/bin/../libs/kafka-streams-scala_2.12-2.4.0.jar:/opt/kafka/bin/../libs/kafka-streams-test-utils-2.4.0.jar:/opt/kafka/bin/../libs/kafka-tools-2.4.0.jar:/opt/kafka/bin/../libs
/kafka_2.12-2.4.0-sources.jar:/opt/kafka/bin/../libs/kafka_2.12-2.4.0.jar:/opt/kafka/bin/../libs/keycloak-common-7.0.0.jar:/opt/kafka/bin/../libs/keycloak-core-7.0.0.jar:/opt/kafka/bin/../libs/kotlin-stdlib-1.3.50.jar:/opt/kafka/bin/../libs/kotlin-stdlib-common-1.3.50.jar:/opt/kafka/bin/../libs/libthrift-0.13.0.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-java-1.6.0.jar:/opt/kafka/bin/../libs/maven-artifact-3.6.1.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/mirror-maker-agent.jar:/opt/kafka/bin/../libs/netty-buffer-4.1.42.Final.jar:/opt/kafka/bin/../libs/netty-codec-4.1.42.Final.jar:/opt/kafka/bin/../libs/netty-common-4.1.42.Final.jar:/opt/kafka/bin/../libs/netty-handler-4.1.42.Final.jar:/opt/kafka/bin/../libs/netty-resolver-4.1.42.Final.jar:/opt/kafka/bin/../libs/netty-transport-4.1.42.Final.jar:/opt/kafka/bin/../libs/netty-transport-native-epoll-4.1.42.Final.jar:/opt/kafka/bin/../libs/netty-transport-native-unix-common-4.1.42.Final.jar:/opt/kafka/bin/../libs/okhttp-4.2.2.jar:/opt/kafka/bin/../libs/okio-2.2.2.jar:/opt/kafka/bin/../libs/opentracing-api-0.33.0.jar:/opt/kafka/bin/../libs/opentracing-kafka-client-0.1.9.jar:/opt/kafka/bin/../libs/opentracing-noop-0.33.0.jar:/opt/kafka/bin/../libs/opentracing-tracerresolver-0.1.8.jar:/opt/kafka/bin/../libs/opentracing-util-0.33.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/paranamer-2.8.jar:/opt/kafka/bin/../libs/plexus-utils-3.2.0.jar:/opt/kafka/bin/../libs/reflections-0.9.11.jar:/opt/kafka/bin/../libs/rocksdbjni-5.18.3.jar:/opt/kafka/bin/../libs/scala-collection-compat_2.12-2.1.2.jar:/opt/kafka/bin/../libs/scala-java8-compat_2.12-0.9.0.jar:/opt/kafka/bin/../libs/scala-library-2.12.10.jar:/opt/kafka/bin/../libs/scala-logging_2.12-3.9.2.jar:/opt/kafka/bin/../libs/scala-reflect-2.12.10.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.28.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.28.jar:/opt/kafka/bin/../libs/snappy-java-1.1.7.3.jar:/opt/
kafka/bin/../libs/tracing-agent.jar:/opt/kafka/bin/../libs/validation-api-2.0.1.Final.jar:/opt/kafka/bin/../libs/zookeeper-3.5.6.jar:/opt/kafka/bin/../libs/zookeeper-jute-3.5.6.jar:/opt/kafka/bin/../libs/zstd-jni-1.4.3-1.jar:/opt/kafka/libs/kafka-agent.jar (org.apache.zookeeper.ZooKeeper) [main]
2020-03-23 21:37:36,994 INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) [main]
2020-03-23 21:37:36,994 INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) [main]
2020-03-23 21:37:36,994 INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper) [main]
2020-03-23 21:37:36,994 INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) [main]
2020-03-23 21:37:36,994 INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) [main]
2020-03-23 21:37:36,994 INFO Client environment:os.version=3.10.0-1062.12.1.el7.x86_64 (org.apache.zookeeper.ZooKeeper) [main]
2020-03-23 21:37:36,994 INFO Client environment:user.name=? (org.apache.zookeeper.ZooKeeper) [main]
2020-03-23 21:37:36,995 INFO Client environment:user.home=? (org.apache.zookeeper.ZooKeeper) [main]
2020-03-23 21:37:36,995 INFO Client environment:user.dir=/opt/kafka (org.apache.zookeeper.ZooKeeper) [main]
2020-03-23 21:37:36,995 INFO Client environment:os.memory.free=120MB (org.apache.zookeeper.ZooKeeper) [main]
2020-03-23 21:37:36,995 INFO Client environment:os.memory.max=4008MB (org.apache.zookeeper.ZooKeeper) [main]
2020-03-23 21:37:36,995 INFO Client environment:os.memory.total=128MB (org.apache.zookeeper.ZooKeeper) [main]
2020-03-23 21:37:36,998 INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@6c80d78a (org.apache.zookeeper.ZooKeeper) [main]
2020-03-23 21:37:37,004 INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) [main]
2020-03-23 21:37:37,016 INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) [main]
2020-03-23 21:37:37,026 INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn) [main]
2020-03-23 21:37:37,029 INFO Starting poller (io.strimzi.kafka.agent.KafkaAgent) [main]
2020-03-23 21:37:37,035 DEBUG Metric kafka.server:type=SessionExpireListener,name=SessionState = CONNECTING (io.strimzi.kafka.agent.KafkaAgent) [KafkaAgentPoller]
2020-03-23 21:37:37,036 INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) [main]
2020-03-23 21:37:37,039 INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) [main-SendThread(localhost:2181)]
2020-03-23 21:37:37,045 INFO Socket connection established, initiating session, client: /127.0.0.1:57622, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn) [main-SendThread(localhost:2181)]
2020-03-23 21:37:37,593 INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x2005e8c94d80000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn) [main-SendThread(localhost:2181)]
2020-03-23 21:37:37,601 INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) [main]
2020-03-23 21:37:38,359 INFO Cluster ID = bQnP8Hy9Q7OblUij0WWeLQ (kafka.server.KafkaServer) [main]
2020-03-23 21:37:38,364 WARN No meta.properties file under dir /var/lib/kafka/data/kafka-log0/meta.properties (kafka.server.BrokerMetadataCheckpoint) [main]
2020-03-23 21:37:38,518 INFO KafkaConfig values: 
    advertised.host.name = null
    advertised.listeners = REPLICATION-9091://my-cluster-kafka-0.my-cluster-kafka-brokers.kafka-adfs.svc:9091,PLAIN-9092://my-cluster-kafka-0.my-cluster-kafka-brokers.kafka-adfs.svc:9092,TLS-9093://my-cluster-kafka-0.my-cluster-kafka-brokers.kafka-adfs.svc:9093,EXTERNAL-9094://13.75.172.192:9094
    advertised.port = null
    alter.config.policy.class.name = null
    alter.log.dirs.replication.quota.window.num = 11
    alter.log.dirs.replication.quota.window.size.seconds = 1
    authorizer.class.name = 
    auto.create.topics.enable = true
    auto.leader.rebalance.enable = true
    background.threads = 10
    broker.id = 0
    broker.id.generation.enable = true
    broker.rack = null
    client.quota.callback.class = null
    compression.type = producer
    connection.failed.authentication.delay.ms = 100
    connections.max.idle.ms = 600000
    connections.max.reauth.ms = 0
    control.plane.listener.name = null
    controlled.shutdown.enable = true
    controlled.shutdown.max.retries = 3
    controlled.shutdown.retry.backoff.ms = 5000
    controller.socket.timeout.ms = 30000
    create.topic.policy.class.name = null
    default.replication.factor = 1
    delegation.token.expiry.check.interval.ms = 3600000
    delegation.token.expiry.time.ms = 86400000
    delegation.token.master.key = null
    delegation.token.max.lifetime.ms = 604800000
    delete.records.purgatory.purge.interval.requests = 1
    delete.topic.enable = true
    fetch.purgatory.purge.interval.requests = 1000
    group.initial.rebalance.delay.ms = 3000
    group.max.session.timeout.ms = 1800000
    group.max.size = 2147483647
    group.min.session.timeout.ms = 6000
    host.name = 
    inter.broker.listener.name = REPLICATION-9091
    inter.broker.protocol.version = 2.4-IV1
    kafka.metrics.polling.interval.secs = 10
    kafka.metrics.reporters = []
    leader.imbalance.check.interval.seconds = 300
    leader.imbalance.per.broker.percentage = 10
    listener.security.protocol.map = REPLICATION-9091:SSL,PLAIN-9092:PLAINTEXT,TLS-9093:SSL,EXTERNAL-9094:SASL_SSL
    listeners = REPLICATION-9091://0.0.0.0:9091,PLAIN-9092://0.0.0.0:9092,TLS-9093://0.0.0.0:9093,EXTERNAL-9094://0.0.0.0:9094
    log.cleaner.backoff.ms = 15000
    log.cleaner.dedupe.buffer.size = 134217728
    log.cleaner.delete.retention.ms = 86400000
    log.cleaner.enable = true
    log.cleaner.io.buffer.load.factor = 0.9
    log.cleaner.io.buffer.size = 524288
    log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
    log.cleaner.max.compaction.lag.ms = 9223372036854775807
    log.cleaner.min.cleanable.ratio = 0.5
    log.cleaner.min.compaction.lag.ms = 0
    log.cleaner.threads = 1
    log.cleanup.policy = [delete]
    log.dir = /tmp/kafka-logs
    log.dirs = /var/lib/kafka/data/kafka-log0
    log.flush.interval.messages = 9223372036854775807
    log.flush.interval.ms = null
    log.flush.offset.checkpoint.interval.ms = 60000
    log.flush.scheduler.interval.ms = 9223372036854775807
    log.flush.start.offset.checkpoint.interval.ms = 60000
    log.index.interval.bytes = 4096
    log.index.size.max.bytes = 10485760
    log.message.downconversion.enable = true
    log.message.format.version = 2.4.0
    log.message.timestamp.difference.max.ms = 9223372036854775807
    log.message.timestamp.type = CreateTime
    log.preallocate = false
    log.retention.bytes = -1
    log.retention.check.interval.ms = 300000
    log.retention.hours = 168
    log.retention.minutes = null
    log.retention.ms = null
    log.roll.hours = 168
    log.roll.jitter.hours = 0
    log.roll.jitter.ms = null
    log.roll.ms = null
    log.segment.bytes = 1073741824
    log.segment.delete.delay.ms = 60000
    max.connections = 2147483647
    max.connections.per.ip = 2147483647
    max.connections.per.ip.overrides = 
    max.incremental.fetch.session.cache.slots = 1000
    message.max.bytes = 1000012
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    min.insync.replicas = 1
    num.io.threads = 8
    num.network.threads = 3
    num.partitions = 1
    num.recovery.threads.per.data.dir = 1
    num.replica.alter.log.dirs.threads = null
    num.replica.fetchers = 1
    offset.metadata.max.bytes = 4096
    offsets.commit.required.acks = -1
    offsets.commit.timeout.ms = 5000
    offsets.load.buffer.size = 5242880
    offsets.retention.check.interval.ms = 600000
    offsets.retention.minutes = 10080
    offsets.topic.compression.codec = 0
    offsets.topic.num.partitions = 50
    offsets.topic.replication.factor = 1
    offsets.topic.segment.bytes = 104857600
    password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
    password.encoder.iterations = 4096
    password.encoder.key.length = 128
    password.encoder.keyfactory.algorithm = null
    password.encoder.old.secret = null
    password.encoder.secret = null
    port = 9092
    principal.builder.class = null
    producer.purgatory.purge.interval.requests = 1000
    queued.max.request.bytes = -1
    queued.max.requests = 500
    quota.consumer.default = 9223372036854775807
    quota.producer.default = 9223372036854775807
    quota.window.num = 11
    quota.window.size.seconds = 1
    replica.fetch.backoff.ms = 1000
    replica.fetch.max.bytes = 1048576
    replica.fetch.min.bytes = 1
    replica.fetch.response.max.bytes = 10485760
    replica.fetch.wait.max.ms = 500
    replica.high.watermark.checkpoint.interval.ms = 5000
    replica.lag.time.max.ms = 10000
    replica.selector.class = null
    replica.socket.receive.buffer.bytes = 65536
    replica.socket.timeout.ms = 30000
    replication.quota.window.num = 11
    replication.quota.window.size.seconds = 1
    request.timeout.ms = 30000
    reserved.broker.max.id = 1000
    sasl.client.callback.handler.class = null
    sasl.enabled.mechanisms = []
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.principal.to.local.rules = [DEFAULT]
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism.inter.broker.protocol = GSSAPI
    sasl.server.callback.handler.class = null
    security.inter.broker.protocol = PLAINTEXT
    security.providers = null
    socket.receive.buffer.bytes = 102400
    socket.request.max.bytes = 104857600
    socket.send.buffer.bytes = 102400
    ssl.cipher.suites = []
    ssl.client.auth = none
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = HTTPS
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.principal.mapping.rules = DEFAULT
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = SHA1PRNG
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
    transaction.max.timeout.ms = 900000
    transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
    transaction.state.log.load.buffer.size = 5242880
    transaction.state.log.min.isr = 2
    transaction.state.log.num.partitions = 50
    transaction.state.log.replication.factor = 1
    transaction.state.log.segment.bytes = 104857600
    transactional.id.expiration.ms = 604800000
    unclean.leader.election.enable = false
    zookeeper.connect = localhost:2181
    zookeeper.connection.timeout.ms = null
    zookeeper.max.in.flight.requests = 10
    zookeeper.session.timeout.ms = 6000
    zookeeper.set.acl = false
    zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig) [main]
2020-03-23 21:37:38,555 INFO KafkaConfig values: 
    advertised.host.name = null
    advertised.listeners = REPLICATION-9091://my-cluster-kafka-0.my-cluster-kafka-brokers.kafka-adfs.svc:9091,PLAIN-9092://my-cluster-kafka-0.my-cluster-kafka-brokers.kafka-adfs.svc:9092,TLS-9093://my-cluster-kafka-0.my-cluster-kafka-brokers.kafka-adfs.svc:9093,EXTERNAL-9094://13.75.172.192:9094
    advertised.port = null
    alter.config.policy.class.name = null
    alter.log.dirs.replication.quota.window.num = 11
    alter.log.dirs.replication.quota.window.size.seconds = 1
    authorizer.class.name = 
    auto.create.topics.enable = true
    auto.leader.rebalance.enable = true
    background.threads = 10
    broker.id = 0
    broker.id.generation.enable = true
    broker.rack = null
    client.quota.callback.class = null
    compression.type = producer
    connection.failed.authentication.delay.ms = 100
    connections.max.idle.ms = 600000
    connections.max.reauth.ms = 0
    control.plane.listener.name = null
    controlled.shutdown.enable = true
    controlled.shutdown.max.retries = 3
    controlled.shutdown.retry.backoff.ms = 5000
    controller.socket.timeout.ms = 30000
    create.topic.policy.class.name = null
    default.replication.factor = 1
    delegation.token.expiry.check.interval.ms = 3600000
    delegation.token.expiry.time.ms = 86400000
    delegation.token.master.key = null
    delegation.token.max.lifetime.ms = 604800000
    delete.records.purgatory.purge.interval.requests = 1
    delete.topic.enable = true
    fetch.purgatory.purge.interval.requests = 1000
    group.initial.rebalance.delay.ms = 3000
    group.max.session.timeout.ms = 1800000
    group.max.size = 2147483647
    group.min.session.timeout.ms = 6000
    host.name = 
    inter.broker.listener.name = REPLICATION-9091
    inter.broker.protocol.version = 2.4-IV1
    kafka.metrics.polling.interval.secs = 10
    kafka.metrics.reporters = []
    leader.imbalance.check.interval.seconds = 300
    leader.imbalance.per.broker.percentage = 10
    listener.security.protocol.map = REPLICATION-9091:SSL,PLAIN-9092:PLAINTEXT,TLS-9093:SSL,EXTERNAL-9094:SASL_SSL
    listeners = REPLICATION-9091://0.0.0.0:9091,PLAIN-9092://0.0.0.0:9092,TLS-9093://0.0.0.0:9093,EXTERNAL-9094://0.0.0.0:9094
    log.cleaner.backoff.ms = 15000
    log.cleaner.dedupe.buffer.size = 134217728
    log.cleaner.delete.retention.ms = 86400000
    log.cleaner.enable = true
    log.cleaner.io.buffer.load.factor = 0.9
    log.cleaner.io.buffer.size = 524288
    log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
    log.cleaner.max.compaction.lag.ms = 9223372036854775807
    log.cleaner.min.cleanable.ratio = 0.5
    log.cleaner.min.compaction.lag.ms = 0
    log.cleaner.threads = 1
    log.cleanup.policy = [delete]
    log.dir = /tmp/kafka-logs
    log.dirs = /var/lib/kafka/data/kafka-log0
    log.flush.interval.messages = 9223372036854775807
    log.flush.interval.ms = null
    log.flush.offset.checkpoint.interval.ms = 60000
    log.flush.scheduler.interval.ms = 9223372036854775807
    log.flush.start.offset.checkpoint.interval.ms = 60000
    log.index.interval.bytes = 4096
    log.index.size.max.bytes = 10485760
    log.message.downconversion.enable = true
    log.message.format.version = 2.4.0
    log.message.timestamp.difference.max.ms = 9223372036854775807
    log.message.timestamp.type = CreateTime
    log.preallocate = false
    log.retention.bytes = -1
    log.retention.check.interval.ms = 300000
    log.retention.hours = 168
    log.retention.minutes = null
    log.retention.ms = null
    log.roll.hours = 168
    log.roll.jitter.hours = 0
    log.roll.jitter.ms = null
    log.roll.ms = null
    log.segment.bytes = 1073741824
    log.segment.delete.delay.ms = 60000
    max.connections = 2147483647
    max.connections.per.ip = 2147483647
    max.connections.per.ip.overrides = 
    max.incremental.fetch.session.cache.slots = 1000
    message.max.bytes = 1000012
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    min.insync.replicas = 1
    num.io.threads = 8
    num.network.threads = 3
    num.partitions = 1
    num.recovery.threads.per.data.dir = 1
    num.replica.alter.log.dirs.threads = null
    num.replica.fetchers = 1
    offset.metadata.max.bytes = 4096
    offsets.commit.required.acks = -1
    offsets.commit.timeout.ms = 5000
    offsets.load.buffer.size = 5242880
    offsets.retention.check.interval.ms = 600000
    offsets.retention.minutes = 10080
    offsets.topic.compression.codec = 0
    offsets.topic.num.partitions = 50
    offsets.topic.replication.factor = 1
    offsets.topic.segment.bytes = 104857600
    password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
    password.encoder.iterations = 4096
    password.encoder.key.length = 128
    password.encoder.keyfactory.algorithm = null
    password.encoder.old.secret = null
    password.encoder.secret = null
    port = 9092
    principal.builder.class = null
    producer.purgatory.purge.interval.requests = 1000
    queued.max.request.bytes = -1
    queued.max.requests = 500
    quota.consumer.default = 9223372036854775807
    quota.producer.default = 9223372036854775807
    quota.window.num = 11
    quota.window.size.seconds = 1
    replica.fetch.backoff.ms = 1000
    replica.fetch.max.bytes = 1048576
    replica.fetch.min.bytes = 1
    replica.fetch.response.max.bytes = 10485760
    replica.fetch.wait.max.ms = 500
    replica.high.watermark.checkpoint.interval.ms = 5000
    replica.lag.time.max.ms = 10000
    replica.selector.class = null
    replica.socket.receive.buffer.bytes = 65536
    replica.socket.timeout.ms = 30000
    replication.quota.window.num = 11
    replication.quota.window.size.seconds = 1
    request.timeout.ms = 30000
    reserved.broker.max.id = 1000
    sasl.client.callback.handler.class = null
    sasl.enabled.mechanisms = []
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.principal.to.local.rules = [DEFAULT]
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism.inter.broker.protocol = GSSAPI
    sasl.server.callback.handler.class = null
    security.inter.broker.protocol = PLAINTEXT
    security.providers = null
    socket.receive.buffer.bytes = 102400
    socket.request.max.bytes = 104857600
    socket.send.buffer.bytes = 102400
    ssl.cipher.suites = []
    ssl.client.auth = none
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = HTTPS
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.principal.mapping.rules = DEFAULT
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = SHA1PRNG
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
    transaction.max.timeout.ms = 900000
    transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
    transaction.state.log.load.buffer.size = 5242880
    transaction.state.log.min.isr = 2
    transaction.state.log.num.partitions = 50
    transaction.state.log.replication.factor = 1
    transaction.state.log.segment.bytes = 104857600
    transactional.id.expiration.ms = 604800000
    unclean.leader.election.enable = false
    zookeeper.connect = localhost:2181
    zookeeper.connection.timeout.ms = null
    zookeeper.max.in.flight.requests = 10
    zookeeper.session.timeout.ms = 6000
    zookeeper.set.acl = false
    zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig) [main]
2020-03-23 21:37:38,611 INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [ThrottledChannelReaper-Produce]
2020-03-23 21:37:38,612 INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [ThrottledChannelReaper-Fetch]
2020-03-23 21:37:38,622 INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [ThrottledChannelReaper-Request]
2020-03-23 21:37:38,662 INFO Log directory /var/lib/kafka/data/kafka-log0 not found, creating it. (kafka.log.LogManager) [main]
2020-03-23 21:37:38,678 INFO Loading logs. (kafka.log.LogManager) [main]
2020-03-23 21:37:38,703 INFO Logs loading complete in 25 ms. (kafka.log.LogManager) [main]
2020-03-23 21:37:38,737 INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) [main]
2020-03-23 21:37:38,749 INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) [main]
2020-03-23 21:37:38,770 INFO Starting the log cleaner (kafka.log.LogCleaner) [main]
2020-03-23 21:37:38,927 INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner) [kafka-log-cleaner-thread-0]
2020-03-23 21:37:39,627 INFO Awaiting socket connections on 0.0.0.0:9091. (kafka.network.Acceptor) [main]
2020-03-23 21:37:41,878 INFO [SocketServer brokerId=0] Created data-plane acceptor and processors for endpoint : EndPoint(0.0.0.0,9091,ListenerName(REPLICATION-9091),SSL) (kafka.network.SocketServer) [main]
2020-03-23 21:37:41,879 INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor) [main]
2020-03-23 21:37:41,942 INFO [SocketServer brokerId=0] Created data-plane acceptor and processors for endpoint : EndPoint(0.0.0.0,9092,ListenerName(PLAIN-9092),PLAINTEXT) (kafka.network.SocketServer) [main]
2020-03-23 21:37:41,942 INFO Awaiting socket connections on 0.0.0.0:9093. (kafka.network.Acceptor) [main]
2020-03-23 21:37:42,146 INFO [SocketServer brokerId=0] Created data-plane acceptor and processors for endpoint : EndPoint(0.0.0.0,9093,ListenerName(TLS-9093),SSL) (kafka.network.SocketServer) [main]
2020-03-23 21:37:42,147 INFO Awaiting socket connections on 0.0.0.0:9094. (kafka.network.Acceptor) [main]
2020-03-23 21:37:43,213 DEBUG Configured JWTSignatureValidator:
    keysEndpointUri: https://<ServerIP:Port>/adfs/discovery/keys 
    sslSocketFactory: sun.security.ssl.SSLSocketFactoryImpl@129b4fe2
    hostnameVerifier: io.strimzi.kafka.oauth.common.SSLUtil$$Lambda$342/538667887@55e8ec2f
    validIssuerUri: https://<ServerIP:Port>/adfs/services/trust 
    certsRefreshSeconds: 300
    certsExpirySeconds: 360
    skipTypeCheck: false (io.strimzi.kafka.oauth.validator.JWTSignatureValidator) [main]
2020-03-23 21:37:43,266 INFO Retrieved token with principal thePrincipalName (org.apache.kafka.common.security.oauthbearer.internals.unsecured.OAuthBearerUnsecuredLoginCallbackHandler) [main]
2020-03-23 21:37:43,270 INFO Successfully logged in. (org.apache.kafka.common.security.authenticator.AbstractLogin) [main]
2020-03-23 21:37:43,495 DEBUG Configured JWTSignatureValidator:
    keysEndpointUri: https://<ServerIP:Port>/adfs/discovery/keys 
    sslSocketFactory: sun.security.ssl.SSLSocketFactoryImpl@65aa6596
    hostnameVerifier: io.strimzi.kafka.oauth.common.SSLUtil$$Lambda$342/538667887@55e8ec2f
    validIssuerUri: https://<ServerIP:Port>/adfs/services/trust 
    certsRefreshSeconds: 300
    certsExpirySeconds: 360
    skipTypeCheck: false (io.strimzi.kafka.oauth.validator.JWTSignatureValidator) [main]
2020-03-23 21:37:43,745 DEBUG Configured JWTSignatureValidator:
    keysEndpointUri: https://<ServerIP:Port>/adfs/discovery/keys 
    sslSocketFactory: sun.security.ssl.SSLSocketFactoryImpl@46f699d5
    hostnameVerifier: io.strimzi.kafka.oauth.common.SSLUtil$$Lambda$342/538667887@55e8ec2f
    validIssuerUri: https://<ServerIP:Port>/adfs/services/trust 
    certsRefreshSeconds: 300
    certsExpirySeconds: 360
    skipTypeCheck: false (io.strimzi.kafka.oauth.validator.JWTSignatureValidator) [main]
2020-03-23 21:37:43,825 INFO [SocketServer brokerId=0] Created data-plane acceptor and processors for endpoint : EndPoint(0.0.0.0,9094,ListenerName(EXTERNAL-9094),SASL_SSL) (kafka.network.SocketServer) [main]
2020-03-23 21:37:43,827 INFO [SocketServer brokerId=0] Started 4 acceptor threads for data-plane (kafka.network.SocketServer) [main]
2020-03-23 21:37:43,909 INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-Produce]
2020-03-23 21:37:43,911 INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-Fetch]
2020-03-23 21:37:43,912 INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-DeleteRecords]
2020-03-23 21:37:43,928 INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-ElectLeader]
2020-03-23 21:37:43,968 INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) [LogDirFailureHandler]
2020-03-23 21:37:44,001 INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient) [main]
2020-03-23 21:37:44,060 INFO Stat of the created znode at /brokers/ids/0 is: 4294967351,4294967351,1584999464022,1584999464022,1,0,0,144219145961472000,556,0,4294967351
 (kafka.zk.KafkaZkClient) [main]
2020-03-23 21:37:44,061 INFO Registered broker 0 at path /brokers/ids/0 with addresses: ArrayBuffer(EndPoint(my-cluster-kafka-0.my-cluster-kafka-brokers.kafka-adfs.svc,9091,ListenerName(REPLICATION-9091),SSL), EndPoint(my-cluster-kafka-0.my-cluster-kafka-brokers.kafka-adfs.svc,9092,ListenerName(PLAIN-9092),PLAINTEXT), EndPoint(my-cluster-kafka-0.my-cluster-kafka-brokers.kafka-adfs.svc,9093,ListenerName(TLS-9093),SSL), EndPoint(13.75.172.192,9094,ListenerName(EXTERNAL-9094),SASL_SSL)), czxid (broker epoch): 4294967351 (kafka.zk.KafkaZkClient) [main]
2020-03-23 21:37:44,161 INFO [ControllerEventThread controllerId=0] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) [controller-event-thread]
2020-03-23 21:37:44,199 INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-topic]
2020-03-23 21:37:44,199 INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-Heartbeat]
2020-03-23 21:37:44,205 INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-Rebalance]
2020-03-23 21:37:44,229 INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) [controller-event-thread]
2020-03-23 21:37:44,286 INFO [Controller id=0] 0 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:44,287 INFO [Controller id=0] Registering handlers (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:44,301 INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator) [main]
2020-03-23 21:37:44,306 INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator) [main]
2020-03-23 21:37:44,364 INFO [Controller id=0] Deleting log dir event notifications (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:44,372 INFO [Controller id=0] Deleting isr change notifications (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:44,382 INFO [Controller id=0] Initializing controller context (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:44,409 INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 100 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2020-03-23 21:37:44,443 INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager) [main]
2020-03-23 21:37:44,492 INFO [Controller id=0] Initialized broker epochs cache: Map(0 -> 4294967351, 1 -> 4294967350, 2 -> 4294967354) (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:44,504 DEBUG [Controller id=0] Register BrokerModifications handler for Set(0, 1, 2) (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:44,513 DEBUG [Channel manager on controller 0]: Controller 0 trying to connect to broker 0 (kafka.controller.ControllerChannelManager) [controller-event-thread]
2020-03-23 21:37:44,910 INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) [main]
2020-03-23 21:37:44,923 INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) [main]
2020-03-23 21:37:44,941 DEBUG [Channel manager on controller 0]: Controller 0 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) [controller-event-thread]
2020-03-23 21:37:44,949 INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) [TxnMarkerSenderThread-0]
2020-03-23 21:37:44,982 INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-AlterAcls]
2020-03-23 21:37:45,020 INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) [/config/changes-event-process-thread]
2020-03-23 21:37:45,051 INFO [SocketServer brokerId=0] Started data-plane processors for 4 acceptors (kafka.network.SocketServer) [main]
2020-03-23 21:37:45,057 INFO Kafka version: 2.4.0 (org.apache.kafka.common.utils.AppInfoParser) [main]
2020-03-23 21:37:45,057 INFO Kafka commitId: 77a89fcf8d7fa018 (org.apache.kafka.common.utils.AppInfoParser) [main]
2020-03-23 21:37:45,057 INFO Kafka startTimeMs: 1584999465051 (org.apache.kafka.common.utils.AppInfoParser) [main]
2020-03-23 21:37:45,060 INFO [KafkaServer id=0] started (kafka.server.KafkaServer) [main]
2020-03-23 21:37:45,108 DEBUG [Channel manager on controller 0]: Controller 0 trying to connect to broker 2 (kafka.controller.ControllerChannelManager) [controller-event-thread]
2020-03-23 21:37:45,235 INFO [RequestSendThread controllerId=0] Starting (kafka.controller.RequestSendThread) [Controller-0-to-broker-1-send-thread]
2020-03-23 21:37:45,236 INFO [Controller id=0] Currently active brokers in the cluster: Set(0, 1, 2) (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:45,236 INFO [Controller id=0] Currently shutting brokers in the cluster: Set() (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:45,237 INFO [Controller id=0] Current list of topics in the cluster: Set() (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:45,237 INFO [Controller id=0] Fetching topic deletions in progress (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:45,237 INFO [RequestSendThread controllerId=0] Starting (kafka.controller.RequestSendThread) [Controller-0-to-broker-2-send-thread]
2020-03-23 21:37:45,238 INFO [RequestSendThread controllerId=0] Starting (kafka.controller.RequestSendThread) [Controller-0-to-broker-0-send-thread]
2020-03-23 21:37:45,241 INFO [Controller id=0] List of topics to be deleted:  (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:45,242 INFO [Controller id=0] List of topics ineligible for deletion:  (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:45,242 INFO [Controller id=0] Initializing topic deletion manager (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:45,243 INFO [Topic Deletion Manager 0] Initializing manager with initial deletions: Set(), initial ineligible deletions: Set() (kafka.controller.TopicDeletionManager) [controller-event-thread]
2020-03-23 21:37:45,244 INFO [Controller id=0] Sending update metadata request (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:45,258 INFO [ReplicaStateMachine controllerId=0] Initializing replica state (kafka.controller.ZkReplicaStateMachine) [controller-event-thread]
2020-03-23 21:37:45,259 INFO [ReplicaStateMachine controllerId=0] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) [controller-event-thread]
2020-03-23 21:37:45,267 INFO [ReplicaStateMachine controllerId=0] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) [controller-event-thread]
2020-03-23 21:37:45,268 DEBUG [ReplicaStateMachine controllerId=0] Started replica state machine with initial state -> Map() (kafka.controller.ZkReplicaStateMachine) [controller-event-thread]
2020-03-23 21:37:45,269 INFO [PartitionStateMachine controllerId=0] Initializing partition state (kafka.controller.ZkPartitionStateMachine) [controller-event-thread]
2020-03-23 21:37:45,275 INFO [PartitionStateMachine controllerId=0] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) [controller-event-thread]
2020-03-23 21:37:45,312 DEBUG [PartitionStateMachine controllerId=0] Started partition state machine with initial state -> Map() (kafka.controller.ZkPartitionStateMachine) [controller-event-thread]
2020-03-23 21:37:45,316 INFO [Controller id=0] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:45,337 INFO [Controller id=0] Partitions undergoing preferred replica election:  (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:45,337 INFO [Controller id=0] Partitions that completed preferred replica election:  (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:45,337 INFO [Controller id=0] Skipping preferred replica election for partitions due to topic deletion:  (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:45,338 INFO [Controller id=0] Resuming preferred replica election for partitions:  (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:45,343 INFO [Controller id=0] Starting replica leader election (PREFERRED) for partitions  triggered by ZkTriggered (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:45,416 INFO [RequestSendThread controllerId=0] Controller 0 connected to my-cluster-kafka-0.my-cluster-kafka-brokers.kafka-adfs.svc:9091 (id: 0 rack: null) for sending state change requests (kafka.controller.RequestSendThread) [Controller-0-to-broker-0-send-thread]
2020-03-23 21:37:45,422 INFO [Controller id=0] Starting the controller scheduler (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:45,457 INFO [RequestSendThread controllerId=0] Controller 0 connected to my-cluster-kafka-1.my-cluster-kafka-brokers.kafka-adfs.svc:9091 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) [Controller-0-to-broker-1-send-thread]
2020-03-23 21:37:45,486 INFO [RequestSendThread controllerId=0] Controller 0 connected to my-cluster-kafka-2.my-cluster-kafka-brokers.kafka-adfs.svc:9091 (id: 2 rack: null) for sending state change requests (kafka.controller.RequestSendThread) [Controller-0-to-broker-2-send-thread]
2020-03-23 21:37:45,617 TRACE [Controller id=0 epoch=1] Received response {error_code=0,_tagged_fields={}} for request UPDATE_METADATA with correlation id 0 sent to broker my-cluster-kafka-0.my-cluster-kafka-brokers.kafka-adfs.svc:9091 (id: 0 rack: null) (state.change.logger) [Controller-0-to-broker-0-send-thread]
2020-03-23 21:37:45,728 TRACE [Controller id=0 epoch=1] Received response {error_code=0,_tagged_fields={}} for request UPDATE_METADATA with correlation id 0 sent to broker my-cluster-kafka-2.my-cluster-kafka-brokers.kafka-adfs.svc:9091 (id: 2 rack: null) (state.change.logger) [Controller-0-to-broker-2-send-thread]
2020-03-23 21:37:45,771 TRACE [Controller id=0 epoch=1] Received response {error_code=0,_tagged_fields={}} for request UPDATE_METADATA with correlation id 0 sent to broker my-cluster-kafka-1.my-cluster-kafka-brokers.kafka-adfs.svc:9091 (id: 1 rack: null) (state.change.logger) [Controller-0-to-broker-1-send-thread]
2020-03-23 21:37:46,045 INFO Running as server according to kafka.server:type=KafkaServer,name=BrokerState => ready (io.strimzi.kafka.agent.KafkaAgent) [KafkaAgentPoller]
2020-03-23 21:37:46,045 DEBUG Exiting thread (io.strimzi.kafka.agent.KafkaAgent) [KafkaAgentPoller]
2020-03-23 21:37:50,425 INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:50,425 TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:37:50,430 DEBUG [Controller id=0] Preferred replicas by broker Map() (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:38:24,247 INFO Creating topic test-topic with configuration {} and initial partition assignment Map(2 -> ArrayBuffer(1, 0, 2), 1 -> ArrayBuffer(2, 1, 0), 0 -> ArrayBuffer(0, 2, 1)) (kafka.zk.AdminZkClient) [data-plane-kafka-request-handler-4]
2020-03-23 21:38:24,340 INFO [Controller id=0] New topics: [Set(test-topic)], deleted topics: [Set()], new partition replica assignment [Map(test-topic-2 -> ReplicaAssignment(replicas=1,0,2, addingReplicas=, removingReplicas=), test-topic-1 -> ReplicaAssignment(replicas=2,1,0, addingReplicas=, removingReplicas=), test-topic-0 -> ReplicaAssignment(replicas=0,2,1, addingReplicas=, removingReplicas=))] (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:38:24,340 INFO [Controller id=0] New partition creation callback for test-topic-2,test-topic-1,test-topic-0 (kafka.controller.KafkaController) [controller-event-thread]
2020-03-23 21:38:24,344 TRACE [Controller id=0 epoch=1] Changed partition test-topic-2 state from NonExistentPartition to NewPartition with assigned replicas 1,0,2 (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,345 TRACE [Controller id=0 epoch=1] Changed partition test-topic-1 state from NonExistentPartition to NewPartition with assigned replicas 2,1,0 (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,345 TRACE [Controller id=0 epoch=1] Changed partition test-topic-0 state from NonExistentPartition to NewPartition with assigned replicas 0,2,1 (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,362 TRACE [Controller id=0 epoch=1] Changed state of replica 2 for partition test-topic-1 from NonExistentReplica to NewReplica (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,362 TRACE [Controller id=0 epoch=1] Changed state of replica 2 for partition test-topic-0 from NonExistentReplica to NewReplica (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,362 TRACE [Controller id=0 epoch=1] Changed state of replica 2 for partition test-topic-2 from NonExistentReplica to NewReplica (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,362 TRACE [Controller id=0 epoch=1] Changed state of replica 1 for partition test-topic-2 from NonExistentReplica to NewReplica (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,362 TRACE [Controller id=0 epoch=1] Changed state of replica 1 for partition test-topic-1 from NonExistentReplica to NewReplica (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,362 TRACE [Controller id=0 epoch=1] Changed state of replica 1 for partition test-topic-0 from NonExistentReplica to NewReplica (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,363 TRACE [Controller id=0 epoch=1] Changed state of replica 0 for partition test-topic-2 from NonExistentReplica to NewReplica (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,363 TRACE [Controller id=0 epoch=1] Changed state of replica 0 for partition test-topic-1 from NonExistentReplica to NewReplica (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,363 TRACE [Controller id=0 epoch=1] Changed state of replica 0 for partition test-topic-0 from NonExistentReplica to NewReplica (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,456 TRACE [Controller id=0 epoch=1] Changed partition test-topic-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1, 0, 2), zkVersion=0) (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,457 TRACE [Controller id=0 epoch=1] Changed partition test-topic-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=2, leaderEpoch=0, isr=List(2, 1, 0), zkVersion=0) (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,457 TRACE [Controller id=0 epoch=1] Changed partition test-topic-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0, 2, 1), zkVersion=0) (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,458 TRACE [Controller id=0 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='test-topic', partitionIndex=1, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 1, 0], zkVersion=0, replicas=[2, 1, 0], addingReplicas=[], removingReplicas=[], isNew=true) to broker 2 for partition test-topic-1 (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,459 TRACE [Controller id=0 epoch=1] Sending become-follower LeaderAndIsr request LeaderAndIsrPartitionState(topicName='test-topic', partitionIndex=0, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 2, 1], zkVersion=0, replicas=[0, 2, 1], addingReplicas=[], removingReplicas=[], isNew=true) to broker 2 for partition test-topic-0 (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,459 TRACE [Controller id=0 epoch=1] Sending become-follower LeaderAndIsr request LeaderAndIsrPartitionState(topicName='test-topic', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 0, 2], zkVersion=0, replicas=[1, 0, 2], addingReplicas=[], removingReplicas=[], isNew=true) to broker 2 for partition test-topic-2 (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,462 TRACE [Controller id=0 epoch=1] Sending become-follower LeaderAndIsr request LeaderAndIsrPartitionState(topicName='test-topic', partitionIndex=1, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 1, 0], zkVersion=0, replicas=[2, 1, 0], addingReplicas=[], removingReplicas=[], isNew=true) to broker 1 for partition test-topic-1 (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,462 TRACE [Controller id=0 epoch=1] Sending become-follower LeaderAndIsr request LeaderAndIsrPartitionState(topicName='test-topic', partitionIndex=0, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 2, 1], zkVersion=0, replicas=[0, 2, 1], addingReplicas=[], removingReplicas=[], isNew=true) to broker 1 for partition test-topic-0 (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,463 TRACE [Controller id=0 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='test-topic', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 0, 2], zkVersion=0, replicas=[1, 0, 2], addingReplicas=[], removingReplicas=[], isNew=true) to broker 1 for partition test-topic-2 (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,463 TRACE [Controller id=0 epoch=1] Sending become-follower LeaderAndIsr request LeaderAndIsrPartitionState(topicName='test-topic', partitionIndex=1, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 1, 0], zkVersion=0, replicas=[2, 1, 0], addingReplicas=[], removingReplicas=[], isNew=true) to broker 0 for partition test-topic-1 (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,463 TRACE [Controller id=0 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='test-topic', partitionIndex=0, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 2, 1], zkVersion=0, replicas=[0, 2, 1], addingReplicas=[], removingReplicas=[], isNew=true) to broker 0 for partition test-topic-0 (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,463 TRACE [Controller id=0 epoch=1] Sending become-follower LeaderAndIsr request LeaderAndIsrPartitionState(topicName='test-topic', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 0, 2], zkVersion=0, replicas=[1, 0, 2], addingReplicas=[], removingReplicas=[], isNew=true) to broker 0 for partition test-topic-2 (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,475 TRACE [Controller id=0 epoch=1] Sending UpdateMetadata request UpdateMetadataPartitionState(topicName='test-topic', partitionIndex=1, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 1, 0], zkVersion=0, replicas=[2, 1, 0], offlineReplicas=[]) to brokers Set(0, 1, 2) for partition test-topic-1 (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,475 TRACE [Controller id=0 epoch=1] Sending UpdateMetadata request UpdateMetadataPartitionState(topicName='test-topic', partitionIndex=0, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 2, 1], zkVersion=0, replicas=[0, 2, 1], offlineReplicas=[]) to brokers Set(0, 1, 2) for partition test-topic-0 (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,476 TRACE [Controller id=0 epoch=1] Sending UpdateMetadata request UpdateMetadataPartitionState(topicName='test-topic', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 0, 2], zkVersion=0, replicas=[1, 0, 2], offlineReplicas=[]) to brokers Set(0, 1, 2) for partition test-topic-2 (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,484 TRACE [Controller id=0 epoch=1] Changed state of replica 2 for partition test-topic-1 from NewReplica to OnlineReplica (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,484 TRACE [Controller id=0 epoch=1] Changed state of replica 2 for partition test-topic-0 from NewReplica to OnlineReplica (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,485 TRACE [Controller id=0 epoch=1] Changed state of replica 2 for partition test-topic-2 from NewReplica to OnlineReplica (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,485 TRACE [Controller id=0 epoch=1] Changed state of replica 1 for partition test-topic-2 from NewReplica to OnlineReplica (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,485 TRACE [Controller id=0 epoch=1] Changed state of replica 1 for partition test-topic-1 from NewReplica to OnlineReplica (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,485 TRACE [Controller id=0 epoch=1] Changed state of replica 1 for partition test-topic-0 from NewReplica to OnlineReplica (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,485 TRACE [Controller id=0 epoch=1] Changed state of replica 0 for partition test-topic-2 from NewReplica to OnlineReplica (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,485 TRACE [Controller id=0 epoch=1] Changed state of replica 0 for partition test-topic-1 from NewReplica to OnlineReplica (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,485 TRACE [Controller id=0 epoch=1] Changed state of replica 0 for partition test-topic-0 from NewReplica to OnlineReplica (state.change.logger) [controller-event-thread]
2020-03-23 21:38:24,493 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='test-topic', partitionIndex=1, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 1, 0], zkVersion=0, replicas=[2, 1, 0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 1 from controller 0 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:24,493 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='test-topic', partitionIndex=0, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 2, 1], zkVersion=0, replicas=[0, 2, 1], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 1 from controller 0 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:24,493 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='test-topic', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 0, 2], zkVersion=0, replicas=[1, 0, 2], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 1 from controller 0 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:24,594 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 1 from controller 0 epoch 1 starting the become-leader transition for partition test-topic-0 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:24,608 INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(test-topic-0) (kafka.server.ReplicaFetcherManager) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:24,974 INFO [Log partition=test-topic-0, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,002 INFO [Log partition=test-topic-0, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 269 ms (kafka.log.Log) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,007 INFO Created log for partition test-topic-0 in /var/lib/kafka/data/kafka-log0/test-topic-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,009 INFO [Partition test-topic-0 broker=0] No checkpointed highwatermark is found for partition test-topic-0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,010 INFO [Partition test-topic-0 broker=0] Log loaded for partition test-topic-0 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,011 INFO [Partition test-topic-0 broker=0] test-topic-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,076 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 0 epoch 1 with correlation id 1 for partition test-topic-0 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,082 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 1 from controller 0 epoch 1 for the become-leader transition for partition test-topic-0 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,085 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 1 from controller 0 epoch 1 starting the become-follower transition for partition test-topic-2 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,085 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 1 from controller 0 epoch 1 starting the become-follower transition for partition test-topic-1 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,140 INFO [Log partition=test-topic-2, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,141 INFO [Log partition=test-topic-2, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 22 ms (kafka.log.Log) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,152 INFO Created log for partition test-topic-2 in /var/lib/kafka/data/kafka-log0/test-topic-2 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,152 INFO [Partition test-topic-2 broker=0] No checkpointed highwatermark is found for partition test-topic-2 (kafka.cluster.Partition) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,152 INFO [Partition test-topic-2 broker=0] Log loaded for partition test-topic-2 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,182 INFO [Log partition=test-topic-1, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,183 INFO [Log partition=test-topic-1, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,186 INFO Created log for partition test-topic-1 in /var/lib/kafka/data/kafka-log0/test-topic-1 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,186 INFO [Partition test-topic-1 broker=0] No checkpointed highwatermark is found for partition test-topic-1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,187 INFO [Partition test-topic-1 broker=0] Log loaded for partition test-topic-1 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,188 INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(test-topic-2, test-topic-1) (kafka.server.ReplicaFetcherManager) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,193 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 0 epoch 1 with correlation id 1 for partition test-topic-2 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,193 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 0 epoch 1 with correlation id 1 for partition test-topic-1 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,202 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition test-topic-2 as part of become-follower request with correlation id 1 from controller 0 epoch 1 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,202 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition test-topic-1 as part of become-follower request with correlation id 1 from controller 0 epoch 1 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,546 INFO [ReplicaFetcherManager on broker 0] Added fetcher to broker BrokerEndPoint(id=1, host=my-cluster-kafka-1.my-cluster-kafka-brokers.kafka-adfs.svc:9091) for partitions Map(test-topic-2 -> (offset=0, leaderEpoch=0)) (kafka.server.ReplicaFetcherManager) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,549 INFO [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Starting (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-1]
2020-03-23 21:38:25,586 INFO [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Truncating partition test-topic-2 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-1]
2020-03-23 21:38:25,647 INFO [Log partition=test-topic-2, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-1]
2020-03-23 21:38:25,902 INFO [ReplicaFetcherManager on broker 0] Added fetcher to broker BrokerEndPoint(id=2, host=my-cluster-kafka-2.my-cluster-kafka-brokers.kafka-adfs.svc:9091) for partitions Map(test-topic-1 -> (offset=0, leaderEpoch=0)) (kafka.server.ReplicaFetcherManager) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,906 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 0 epoch 1 with correlation id 1 for partition test-topic-1 with leader BrokerEndPoint(id=2, host=my-cluster-kafka-2.my-cluster-kafka-brokers.kafka-adfs.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,906 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 0 epoch 1 with correlation id 1 for partition test-topic-2 with leader BrokerEndPoint(id=1, host=my-cluster-kafka-1.my-cluster-kafka-brokers.kafka-adfs.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,907 INFO [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Starting (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
2020-03-23 21:38:25,907 INFO [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Truncating partition test-topic-1 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
2020-03-23 21:38:25,907 INFO [Log partition=test-topic-1, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-2]
2020-03-23 21:38:25,910 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 1 from controller 0 epoch 1 for the become-follower transition for partition test-topic-2 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,910 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 1 from controller 0 epoch 1 for the become-follower transition for partition test-topic-1 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-03-23 21:38:25,962 TRACE [Controller id=0 epoch=1] Received response {error_code=0,partition_errors=[{topic_name=test-topic,partition_index=1,error_code=0,_tagged_fields={}},{topic_name=test-topic,partition_index=0,error_code=0,_tagged_fields={}},{topic_name=test-topic,partition_index=2,error_code=0,_tagged_fields={}}],_tagged_fields={}} for request LEADER_AND_ISR with correlation id 1 sent to broker my-cluster-kafka-0.my-cluster-kafka-brokers.kafka-adfs.svc:9091 (id: 0 rack: null) (state.change.logger) [Controller-0-to-broker-0-send-thread]
2020-03-23 21:38:25,968 TRACE [Controller id=0 epoch=1] Received response {error_code=0,partition_errors=[{topic_name=test-topic,partition_index=1,error_code=0,_tagged_fields={}},{topic_name=test-topic,partition_index=0,error_code=0,_tagged_fields={}},{topic_name=test-topic,partition_index=2,error_code=0,_tagged_fields={}}],_tagged_fields={}} for request LEADER_AND_ISR with correlation id 1 sent to broker my-cluster-kafka-2.my-cluster-kafka-brokers.kafka-adfs.svc:9091 (id: 2 rack: null) (state.change.logger) [Controller-0-to-broker-2-send-thread]
2020-03-23 21:38:26,032 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='test-topic', partitionIndex=1, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 1, 0], zkVersion=0, replicas=[2, 1, 0], offlineReplicas=[]) for partition test-topic-1 in response to UpdateMetadata request sent by controller 0 epoch 1 with correlation id 2 (state.change.logger) [data-plane-kafka-request-handler-4]
2020-03-23 21:38:26,032 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='test-topic', partitionIndex=0, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 2, 1], zkVersion=0, replicas=[0, 2, 1], offlineReplicas=[]) for partition test-topic-0 in response to UpdateMetadata request sent by controller 0 epoch 1 with correlation id 2 (state.change.logger) [data-plane-kafka-request-handler-4]
2020-03-23 21:38:26,032 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='test-topic', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 0, 2], zkVersion=0, replicas=[1, 0, 2], offlineReplicas=[]) for partition test-topic-2 in response to UpdateMetadata request sent by controller 0 epoch 1 with correlation id 2 (state.change.logger) [data-plane-kafka-request-handler-4]
2020-03-23 21:38:26,055 TRACE [Controller id=0 epoch=1] Received response {error_code=0,_tagged_fields={}} for request UPDATE_METADATA with correlation id 2 sent to broker my-cluster-kafka-2.my-cluster-kafka-brokers.kafka-adfs.svc:9091 (id: 2 rack: null) (state.change.logger) [Controller-0-to-broker-2-send-thread]
2020-03-23 21:38:26,095 TRACE [Controller id=0 epoch=1] Received response {error_code=0,_tagged_fields={}} for request UPDATE_METADATA with correlation id 2 sent to broker my-cluster-kafka-0.my-cluster-kafka-brokers.kafka-adfs.svc:9091 (id: 0 rack: null) (state.change.logger) [Controller-0-to-broker-0-send-thread]
2020-03-23 21:38:26,190 ERROR [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition test-topic-2 at offset 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-1]
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
2020-03-23 21:38:26,338 TRACE [Controller id=0 epoch=1] Received response {error_code=0,partition_errors=[{topic_name=test-topic,partition_index=1,error_code=0,_tagged_fields={}},{topic_name=test-topic,partition_index=0,error_code=0,_tagged_fields={}},{topic_name=test-topic,partition_index=2,error_code=0,_tagged_fields={}}],_tagged_fields={}} for request LEADER_AND_ISR with correlation id 1 sent to broker my-cluster-kafka-1.my-cluster-kafka-brokers.kafka-adfs.svc:9091 (id: 1 rack: null) (state.change.logger) [Controller-0-to-broker-1-send-thread]
2020-03-23 21:38:26,368 TRACE [Controller id=0 epoch=1] Received response {error_code=0,_tagged_fields={}} for request UPDATE_METADATA with correlation id 2 sent to broker my-cluster-kafka-1.my-cluster-kafka-brokers.kafka-adfs.svc:9091 (id: 1 rack: null) (state.change.logger) [Controller-0-to-broker-1-send-thread]

Error when a message was received from the producer:

2020-03-23 22:30:00,112 DEBUG Token: {"aud":"urn:microsoft:userinfo","iss":"http://<ServerIP:Port>/adfs/services/trust","iat":1585002593,"exp":1585006193,"apptype":"Confidential","appid":"kafka-producer","authmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/password","auth_time":"2020-03-23T22:29:53.883Z","ver":"1.0 "} (io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler) [data-plane-kafka-network-thread-2-ListenerName(EXTERNAL-9094)-SASL_SSL-10]
2020-03-23 22:30:00,155 DEBUG [IGNORED] Failed to parse JWT token's payload (io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler) [data-plane-kafka-network-thread-2-ListenerName(EXTERNAL-9094)-SASL_SSL-10]
org.keycloak.jose.jws.JWSInputException: com.fasterxml.jackson.databind.exc.InvalidFormatException: Cannot deserialize value of type `int` from String "2020-03-23T22:29:53.883Z": not a valid Integer value
 at [Source: (byte[])"{"aud":"urn:microsoft:userinfo","iss":"http://<ServerIP:Port>/adfs/services/trust","iat":1585002593,"exp":1585006193,"apptype":"Confidential","appid":"kafka-producer","authmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/password","auth_time":"2020-03-23T22:29:53.883Z","ver":"1.0"}";  line: 1, column: 270] (through reference chain: org.keycloak.representations.AccessToken["auth_time"])
    at org.keycloak.jose.jws.JWSInput.readJsonContent(JWSInput.java:104)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.debugLogToken(JaasServerOauthValidatorCallbackHandler.java:242)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.handleCallback(JaasServerOauthValidatorCallbackHandler.java:151)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.handle(JaasServerOauthValidatorCallbackHandler.java:137)
    at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslServer.process(OAuthBearerSaslServer.java:156)
    at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslServer.evaluateResponse(OAuthBearerSaslServer.java:101)
    at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.handleSaslToken(SaslServerAuthenticator.java:451)
    at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:291)
    at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:173)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:547)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
    at kafka.network.Processor.poll(SocketServer.scala:890)
    at kafka.network.Processor.run(SocketServer.scala:789)
    at java.lang.Thread.run(Thread.java:748)
Caused by: com.fasterxml.jackson.databind.exc.InvalidFormatException: Cannot deserialize value of type `int` from String "2020-03-23T22:29:53.883Z": not a valid Integer value
 at [Source: (byte[])"{"aud":"urn:microsoft:userinfo","iss":"http://<ServerIP:Port>/adfs/services/trust","iat":1585002593,"exp":1585006193,"apptype":"Confidential","appid":"kafka-producer","authmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/password","auth_time":"2020-03-23T22:29:53.883Z","ver":"1.0"}";  line: 1, column: 270] (through reference chain: org.keycloak.representations.AccessToken["auth_time"])
    at com.fasterxml.jackson.databind.exc.InvalidFormatException.from(InvalidFormatException.java:67)
    at com.fasterxml.jackson.databind.DeserializationContext.weirdStringException(DeserializationContext.java:1676)
    at com.fasterxml.jackson.databind.DeserializationContext.handleWeirdStringValue(DeserializationContext.java:932)
    at com.fasterxml.jackson.databind.deser.std.NumberDeserializers$IntegerDeserializer._parseInteger(NumberDeserializers.java:522)
    at com.fasterxml.jackson.databind.deser.std.NumberDeserializers$IntegerDeserializer.deserialize(NumberDeserializers.java:474)
    at com.fasterxml.jackson.databind.deser.std.NumberDeserializers$IntegerDeserializer.deserialize(NumberDeserializers.java:452)
    at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129)
    at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288)
    at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151)
    at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4202)
    at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3266)
    at org.keycloak.util.JsonSerialization.readValue(JsonSerialization.java:71)
    at org.keycloak.jose.jws.JWSInput.readJsonContent(JWSInput.java:102)
    ... 13 more
2020-03-23 22:30:00,163 DEBUG Validation failed for token: eyJ0**UpVw (io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler) [data-plane-kafka-network-thread-2-ListenerName(EXTERNAL-9094)-SASL_SSL-10]
io.strimzi.kafka.oauth.validator.TokenValidationException: Token signature validation failed: eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6InJyNWJrQXQyRDlDenB6RlktU2pqc2h0cWctdyJ9.eyJhdWQiOiJ1cm46bWljcm9zb2Z0OnVzZXJpbmZvIiwiaXNzIjoiaHR0cDovLzE3Mi4yOC41MC40L2FkZnMvc2VydmljZXMvdHJ1c3QiLCJpYXQiOjE1ODUwMDI1OTMsImV4cCI6MTU4NTAwNjE5MywiYXBwdHlwZSI6IkNvbmZpZGVudGlhbCIsImFwcGlkIjoia2Fma2EtcHJvZHVjZXIiLCJhdXRobWV0aG9kIjoiaHR0cDovL3NjaGVtYXMubWljcm9zb2Z0LmNvbS93cy8yMDA4LzA2L2lkZW50aXR5L2F1dGhlbnRpY2F0aW9ubWV0aG9kL3Bhc3N3b3JkIiwiYXV0aF90aW1lIjoiMjAyMC0wMy0yM1QyMjoyOTo1My44ODNaIiwidmVyIjoiMS4wIn0.VTT7hHB3zWieogH5dZxaoT_eOcB0-cXwVoVI71xpvRlB5wQwBHd6-0xqdQdI5O-h-nzdKouroJY-YZ7t0P45Pio663j1GuVlkot8aPU37XomFd_k38iDxN1F2lMckAUIsFFwHqqf5CBjWawK04epqTibxbG7jHE36s-ewA7JaRLIgN0iXr0JKO-jUqW-s0QAJwU4h_UO0tUv6ttSS2wwvaUTyuM-H0jILwV7_rwVx9n8SCi74-4_alxjpskLKB7L3QZKu2ISdinpqFWppoMvJaRio2B78ezw0y9Ng3sv76MIivbvOnaOieqWAgMcog6cDzAqM4ApGz-4jYWeJWUpVw
    at io.strimzi.kafka.oauth.validator.JWTSignatureValidator.validate(JWTSignatureValidator.java:164)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.validateToken(JaasServerOauthValidatorCallbackHandler.java:224)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.handleCallback(JaasServerOauthValidatorCallbackHandler.java:154)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.handle(JaasServerOauthValidatorCallbackHandler.java:137)
    at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslServer.process(OAuthBearerSaslServer.java:156)
    at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslServer.evaluateResponse(OAuthBearerSaslServer.java:101)
    at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.handleSaslToken(SaslServerAuthenticator.java:451)
    at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:291)
    at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:173)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:547)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
    at kafka.network.Processor.poll(SocketServer.scala:890)
    at kafka.network.Processor.run(SocketServer.scala:789)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.keycloak.common.VerificationException: Failed to read access token from JWT
    at org.keycloak.TokenVerifier.parse(TokenVerifier.java:402)
    at org.keycloak.TokenVerifier.getHeader(TokenVerifier.java:416)
    at io.strimzi.kafka.oauth.validator.JWTSignatureValidator.validate(JWTSignatureValidator.java:162)
    ... 13 more
Caused by: org.keycloak.jose.jws.JWSInputException: com.fasterxml.jackson.databind.exc.InvalidFormatException: Cannot deserialize value of type `int` from String "2020-03-23T22:29:53.883Z": not a valid Integer value
 at [Source: (byte[])"{"aud":"urn:microsoft:userinfo","iss":"http://<ServerIP:Port>/adfs/services/trust","iat":1585002593,"exp":1585006193,"apptype":"Confidential","appid":"kafka-producer","authmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/password","auth_time":"2020-03-23T22:29:53.883Z","ver":"1.0"}";  line: 1, column: 270] (through reference chain: org.keycloak.representations.AccessToken["auth_time"])
    at org.keycloak.jose.jws.JWSInput.readJsonContent(JWSInput.java:104)
    at org.keycloak.TokenVerifier.parse(TokenVerifier.java:400)
    ... 15 more
Caused by: com.fasterxml.jackson.databind.exc.InvalidFormatException: Cannot deserialize value of type `int` from String "2020-03-23T22:29:53.883Z": not a valid Integer value
 at [Source: (byte[])"{"aud":"urn:microsoft:userinfo","iss":"http://<ServerIP:Port>/adfs/services/trust","iat":1585002593,"exp":1585006193,"apptype":"Confidential","appid":"kafka-producer","authmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/password","auth_time":"2020-03-23T22:29:53.883Z","ver":"1.0"}";  line: 1, column: 270] (through reference chain: org.keycloak.representations.AccessToken["auth_time"])
    at com.fasterxml.jackson.databind.exc.InvalidFormatException.from(InvalidFormatException.java:67)
    at com.fasterxml.jackson.databind.DeserializationContext.weirdStringException(DeserializationContext.java:1676)
    at com.fasterxml.jackson.databind.DeserializationContext.handleWeirdStringValue(DeserializationContext.java:932)
    at com.fasterxml.jackson.databind.deser.std.NumberDeserializers$IntegerDeserializer._parseInteger(NumberDeserializers.java:522)
    at com.fasterxml.jackson.databind.deser.std.NumberDeserializers$IntegerDeserializer.deserialize(NumberDeserializers.java:474)
    at com.fasterxml.jackson.databind.deser.std.NumberDeserializers$IntegerDeserializer.deserialize(NumberDeserializers.java:452)
    at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129)
    at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288)
    at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151)
    at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4202)
    at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3266)
    at org.keycloak.util.JsonSerialization.readValue(JsonSerialization.java:71)
    at org.keycloak.jose.jws.JWSInput.readJsonContent(JWSInput.java:102)
    ... 16 more
2020-03-23 22:30:00,168 INFO [SocketServer brokerId=2] Failed authentication with gateway/10.130.0.1 ({"status":"invalid_token"}) (org.apache.kafka.common.network.Selector) [data-plane-kafka-network-thread-2-ListenerName(EXTERNAL-9094)-SASL_SSL-10]
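As a quick sanity check (not part of the broker output above), the payload segment of the token printed in the `Validation failed for token` line can be base64url-decoded to confirm what the parser is choking on: ADFS emits `auth_time` as an ISO-8601 string, while `iat`/`exp` are the integer NumericDate values the Keycloak `AccessToken` deserializer expects:

```python
import base64
import json

# Payload (second dot-separated segment) of the JWT copied from the broker log.
payload_b64 = "eyJhdWQiOiJ1cm46bWljcm9zb2Z0OnVzZXJpbmZvIiwiaXNzIjoiaHR0cDovLzE3Mi4yOC41MC40L2FkZnMvc2VydmljZXMvdHJ1c3QiLCJpYXQiOjE1ODUwMDI1OTMsImV4cCI6MTU4NTAwNjE5MywiYXBwdHlwZSI6IkNvbmZpZGVudGlhbCIsImFwcGlkIjoia2Fma2EtcHJvZHVjZXIiLCJhdXRobWV0aG9kIjoiaHR0cDovL3NjaGVtYXMubWljcm9zb2Z0LmNvbS93cy8yMDA4LzA2L2lkZW50aXR5L2F1dGhlbnRpY2F0aW9ubWV0aG9kL3Bhc3N3b3JkIiwiYXV0aF90aW1lIjoiMjAyMC0wMy0yM1QyMjoyOTo1My44ODNaIiwidmVyIjoiMS4wIn0"

# base64url-decode, padding to a multiple of 4, then parse the JSON claims.
claims = json.loads(base64.urlsafe_b64decode(payload_b64 + "=" * (-len(payload_b64) % 4)))

print(type(claims["iat"]).__name__)        # int  -> standard NumericDate, parses fine
print(type(claims["auth_time"]).__name__)  # str  -> "2020-03-23T22:29:53.883Z", trips the int deserializer
```

This matches the `InvalidFormatException` in the trace: Jackson fails on `AccessToken["auth_time"]` because it cannot deserialize the string `"2020-03-23T22:29:53.883Z"` into an `int`.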
    at com.fasterxml.jackson.databind.deser.std.NumberDeserializers$IntegerDeserializer.deserialize(NumberDeserializers.java:474)
    at com.fasterxml.jackson.databind.deser.std.NumberDeserializers$IntegerDeserializer.deserialize(NumberDeserializers.java:452)
    at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129)
    at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288)
    at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151)
    at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4202)
    at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3266)
    at org.keycloak.util.JsonSerialization.readValue(JsonSerialization.java:71)
    at org.keycloak.jose.jws.JWSInput.readJsonContent(JWSInput.java:102)
    ... 16 more
2020-03-23 22:30:04,547 INFO [SocketServer brokerId=2] Failed authentication with gateway/10.130.0.1 ({"status":"invalid_token"}) (org.apache.kafka.common.network.Selector) [data-plane-kafka-network-thread-2-ListenerName(EXTERNAL-9094)-SASL_SSL-9]
2020-03-23 22:30:06,000 DEBUG Token: {"aud":"urn:microsoft:userinfo","iss":"http://<ServerIP:Port>/adfs/services/trust","iat":1585002593,"exp":1585006193,"apptype":"Confidential","appid":"kafka-producer","authmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/password","auth_time":"2020-03-23T22:29:53.883Z","ver":"1.0 "} (io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler) [data-plane-kafka-network-thread-2-ListenerName(EXTERNAL-9094)-SASL_SSL-11]
2020-03-23 22:30:06,000 DEBUG [IGNORED] Failed to parse JWT token's payload (io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler) [data-plane-kafka-network-thread-2-ListenerName(EXTERNAL-9094)-SASL_SSL-11]
org.keycloak.jose.jws.JWSInputException: com.fasterxml.jackson.databind.exc.InvalidFormatException: Cannot deserialize value of type `int` from String "2020-03-23T22:29:53.883Z": not a valid Integer value
 at [Source: (byte[])"{"aud":"urn:microsoft:userinfo","iss":"http://<ServerIP:Port>/adfs/services/trust","iat":1585002593,"exp":1585006193,"apptype":"Confidential","appid":"kafka-producer","authmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/password","auth_time":"2020-03-23T22:29:53.883Z","ver":"1.0"}";  line: 1, column: 270] (through reference chain: org.keycloak.representations.AccessToken["auth_time"])
    at org.keycloak.jose.jws.JWSInput.readJsonContent(JWSInput.java:104)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.debugLogToken(JaasServerOauthValidatorCallbackHandler.java:242)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.handleCallback(JaasServerOauthValidatorCallbackHandler.java:151)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.handle(JaasServerOauthValidatorCallbackHandler.java:137)
    at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslServer.process(OAuthBearerSaslServer.java:156)
    at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslServer.evaluateResponse(OAuthBearerSaslServer.java:101)
    at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.handleSaslToken(SaslServerAuthenticator.java:451)
    at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:291)
    at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:173)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:547)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
    at kafka.network.Processor.poll(SocketServer.scala:890)
    at kafka.network.Processor.run(SocketServer.scala:789)
    at java.lang.Thread.run(Thread.java:748)
Caused by: com.fasterxml.jackson.databind.exc.InvalidFormatException: Cannot deserialize value of type `int` from String "2020-03-23T22:29:53.883Z": not a valid Integer value
 at [Source: (byte[])"{"aud":"urn:microsoft:userinfo","iss":"http://<ServerIP:Port>/adfs/services/trust","iat":1585002593,"exp":1585006193,"apptype":"Confidential","appid":"kafka-producer","authmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/password","auth_time":"2020-03-23T22:29:53.883Z","ver":"1.0"}";  line: 1, column: 270] (through reference chain: org.keycloak.representations.AccessToken["auth_time"])
    at com.fasterxml.jackson.databind.exc.InvalidFormatException.from(InvalidFormatException.java:67)
    at com.fasterxml.jackson.databind.DeserializationContext.weirdStringException(DeserializationContext.java:1676)
    at com.fasterxml.jackson.databind.DeserializationContext.handleWeirdStringValue(DeserializationContext.java:932)
    at com.fasterxml.jackson.databind.deser.std.NumberDeserializers$IntegerDeserializer._parseInteger(NumberDeserializers.java:522)
    at com.fasterxml.jackson.databind.deser.std.NumberDeserializers$IntegerDeserializer.deserialize(NumberDeserializers.java:474)
    at com.fasterxml.jackson.databind.deser.std.NumberDeserializers$IntegerDeserializer.deserialize(NumberDeserializers.java:452)
    at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129)
    at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288)
    at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151)
    at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4202)
    at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3266)
    at org.keycloak.util.JsonSerialization.readValue(JsonSerialization.java:71)
    at org.keycloak.jose.jws.JWSInput.readJsonContent(JWSInput.java:102)
    ... 13 more
2020-03-23 22:30:06,002 DEBUG Validation failed for token: eyJ0**UpVw (io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler) [data-plane-kafka-network-thread-2-ListenerName(EXTERNAL-9094)-SASL_SSL-11]
io.strimzi.kafka.oauth.validator.TokenValidationException: Token signature validation failed: eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6InJyNWJrQXQyRDlDenB6RlktU2pqc2h0cWctdyJ9.eyJhdWQiOiJ1cm46bWljcm9zb2Z0OnVzZXJpbmZvIiwiaXNzIjoiaHR0cDovLzE3Mi4yOC41MC40L2FkZnMvc2VydmljZXMvdHJ1c3QiLCJpYXQiOjE1ODUwMDI1OTMsImV4cCI6MTU4NTAwNjE5MywiYXBwdHlwZSI6IkNvbmZpZGVudGlhbCIsImFwcGlkIjoia2Fma2EtcHJvZHVjZXIiLCJhdXRobWV0aG9kIjoiaHR0cDovL3NjaGVtYXMubWljcm9zb2Z0LmNvbS93cy8yMDA4LzA2L2lkZW50aXR5L2F1dGhlbnRpY2F0aW9ubWV0aG9kL3Bhc3N3b3JkIiwiYXV0aF90aW1lIjoiMjAyMC0wMy0yM1QyMjoyOTo1My44ODNaIiwidmVyIjoiMS4wIn0.VTT7hHB3zWieogH5dZxaoT_eOcB0-cXwVoVI71xpvRlB5wQwBHd6-0xqdQdI5O-h-nzdKouroJY-YZ7t0P45Pio663j1GuVlkot8aPU37XomFd_k38iDxN1F2lMckAUIsFFwHqqf5CBjWawK04epqTibxbG7jHE36s-ewA7JaRLIgN0iXr0JKO-jUqW-s0QAJwU4h_UO0tUv6ttSS2wwvaUTyuM-H0jILwV7_rwVx9n8SCi74-4_alxjpskLKB7L3QZKu2ISdinpqFWppoMvJaRio2B78ezw0y9Ng3sv76MIivbvOnaOieqWAgMcog6cDzAqM4ApGz-4jYWeJWUpVw
    at io.strimzi.kafka.oauth.validator.JWTSignatureValidator.validate(JWTSignatureValidator.java:164)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.validateToken(JaasServerOauthValidatorCallbackHandler.java:224)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.handleCallback(JaasServerOauthValidatorCallbackHandler.java:154)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.handle(JaasServerOauthValidatorCallbackHandler.java:137)
    at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslServer.process(OAuthBearerSaslServer.java:156)
    at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslServer.evaluateResponse(OAuthBearerSaslServer.java:101)
    at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.handleSaslToken(SaslServerAuthenticator.java:451)
    at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:291)
    at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:173)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:547)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
    at kafka.network.Processor.poll(SocketServer.scala:890)
    at kafka.network.Processor.run(SocketServer.scala:789)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.keycloak.common.VerificationException: Failed to read access token from JWT
    at org.keycloak.TokenVerifier.parse(TokenVerifier.java:402)
    at org.keycloak.TokenVerifier.getHeader(TokenVerifier.java:416)
    at io.strimzi.kafka.oauth.validator.JWTSignatureValidator.validate(JWTSignatureValidator.java:162)
    ... 13 more
Caused by: org.keycloak.jose.jws.JWSInputException: com.fasterxml.jackson.databind.exc.InvalidFormatException: Cannot deserialize value of type `int` from String "2020-03-23T22:29:53.883Z": not a valid Integer value
 at [Source: (byte[])"{"aud":"urn:microsoft:userinfo","iss":"http://<ServerIP:Port>/adfs/services/trust","iat":1585002593,"exp":1585006193,"apptype":"Confidential","appid":"kafka-producer","authmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/password","auth_time":"2020-03-23T22:29:53.883Z","ver":"1.0"}";  line: 1, column: 270] (through reference chain: org.keycloak.representations.AccessToken["auth_time"])
    at org.keycloak.jose.jws.JWSInput.readJsonContent(JWSInput.java:104)
    at org.keycloak.TokenVerifier.parse(TokenVerifier.java:400)
    ... 15 more
Caused by: com.fasterxml.jackson.databind.exc.InvalidFormatException: Cannot deserialize value of type `int` from String "2020-03-23T22:29:53.883Z": not a valid Integer value
 at [Source: (byte[])"{"aud":"urn:microsoft:userinfo","iss":"http://<ServerIP:Port>/adfs/services/trust","iat":1585002593,"exp":1585006193,"apptype":"Confidential","appid":"kafka-producer","authmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/password","auth_time":"2020-03-23T22:29:53.883Z","ver":"1.0"}";  line: 1, column: 270] (through reference chain: org.keycloak.representations.AccessToken["auth_time"])
    at com.fasterxml.jackson.databind.exc.InvalidFormatException.from(InvalidFormatException.java:67)
    at com.fasterxml.jackson.databind.DeserializationContext.weirdStringException(DeserializationContext.java:1676)
    at com.fasterxml.jackson.databind.DeserializationContext.handleWeirdStringValue(DeserializationContext.java:932)
    at com.fasterxml.jackson.databind.deser.std.NumberDeserializers$IntegerDeserializer._parseInteger(NumberDeserializers.java:522)
    at com.fasterxml.jackson.databind.deser.std.NumberDeserializers$IntegerDeserializer.deserialize(NumberDeserializers.java:474)
    at com.fasterxml.jackson.databind.deser.std.NumberDeserializers$IntegerDeserializer.deserialize(NumberDeserializers.java:452)
    at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129)
    at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288)
    at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151)
    at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4202)
    at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3266)
    at org.keycloak.util.JsonSerialization.readValue(JsonSerialization.java:71)
    at org.keycloak.jose.jws.JWSInput.readJsonContent(JWSInput.java:102)
    ... 16 more
2020-03-23 22:30:06,006 INFO [SocketServer brokerId=2] Failed authentication with gateway/10.130.0.1 ({"status":"invalid_token"}) (org.apache.kafka.common.network.Selector) [data-plane-kafka-network-thread-2-ListenerName(EXTERNAL-9094)-SASL_SSL-11]
2020-03-23 22:30:07,403 DEBUG Token: {"aud":"urn:microsoft:userinfo","iss":"http://<ServerIP:Port>/adfs/services/trust","iat":1585002593,"exp":1585006193,"apptype":"Confidential","appid":"kafka-producer","authmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/password","auth_time":"2020-03-23T22:29:53.883Z","ver":"1.0 "} (io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler) [data-plane-kafka-network-thread-2-ListenerName(EXTERNAL-9094)-SASL_SSL-10]
2020-03-23 22:30:07,403 DEBUG [IGNORED] Failed to parse JWT token's payload (io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler) [data-plane-kafka-network-thread-2-ListenerName(EXTERNAL-9094)-SASL_SSL-10]
org.keycloak.jose.jws.JWSInputException: com.fasterxml.jackson.databind.exc.InvalidFormatException: Cannot deserialize value of type `int` from String "2020-03-23T22:29:53.883Z": not a valid Integer value
 at [Source: (byte[])"{"aud":"urn:microsoft:userinfo","iss":"http://<ServerIP:Port>/adfs/services/trust","iat":1585002593,"exp":1585006193,"apptype":"Confidential","appid":"kafka-producer","authmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/password","auth_time":"2020-03-23T22:29:53.883Z","ver":"1.0"}";  line: 1, column: 270] (through reference chain: org.keycloak.representations.AccessToken["auth_time"])
    at org.keycloak.jose.jws.JWSInput.readJsonContent(JWSInput.java:104)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.debugLogToken(JaasServerOauthValidatorCallbackHandler.java:242)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.handleCallback(JaasServerOauthValidatorCallbackHandler.java:151)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.handle(JaasServerOauthValidatorCallbackHandler.java:137)
    at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslServer.process(OAuthBearerSaslServer.java:156)
    at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslServer.evaluateResponse(OAuthBearerSaslServer.java:101)
    at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.handleSaslToken(SaslServerAuthenticator.java:451)
    at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:291)
    at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:173)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:547)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
    at kafka.network.Processor.poll(SocketServer.scala:890)
    at kafka.network.Processor.run(SocketServer.scala:789)
    at java.lang.Thread.run(Thread.java:748)
Caused by: com.fasterxml.jackson.databind.exc.InvalidFormatException: Cannot deserialize value of type `int` from String "2020-03-23T22:29:53.883Z": not a valid Integer value
 at [Source: (byte[])"{"aud":"urn:microsoft:userinfo","iss":"http://<ServerIP:Port>/adfs/services/trust","iat":1585002593,"exp":1585006193,"apptype":"Confidential","appid":"kafka-producer","authmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/password","auth_time":"2020-03-23T22:29:53.883Z","ver":"1.0"}";  line: 1, column: 270] (through reference chain: org.keycloak.representations.AccessToken["auth_time"])
    at com.fasterxml.jackson.databind.exc.InvalidFormatException.from(InvalidFormatException.java:67)
    at com.fasterxml.jackson.databind.DeserializationContext.weirdStringException(DeserializationContext.java:1676)
    at com.fasterxml.jackson.databind.DeserializationContext.handleWeirdStringValue(DeserializationContext.java:932)
    at com.fasterxml.jackson.databind.deser.std.NumberDeserializers$IntegerDeserializer._parseInteger(NumberDeserializers.java:522)
    at com.fasterxml.jackson.databind.deser.std.NumberDeserializers$IntegerDeserializer.deserialize(NumberDeserializers.java:474)
    at com.fasterxml.jackson.databind.deser.std.NumberDeserializers$IntegerDeserializer.deserialize(NumberDeserializers.java:452)
    at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129)
    at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288)
    at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151)
    at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4202)
    at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3266)
    at org.keycloak.util.JsonSerialization.readValue(JsonSerialization.java:71)
    at org.keycloak.jose.jws.JWSInput.readJsonContent(JWSInput.java:102)
    ... 13 more
2020-03-23 22:30:07,404 DEBUG Validation failed for token: eyJ0**UpVw (io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler) [data-plane-kafka-network-thread-2-ListenerName(EXTERNAL-9094)-SASL_SSL-10]
io.strimzi.kafka.oauth.validator.TokenValidationException: Token signature validation failed: eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6InJyNWJrQXQyRDlDenB6RlktU2pqc2h0cWctdyJ9.eyJhdWQiOiJ1cm46bWljcm9zb2Z0OnVzZXJpbmZvIiwiaXNzIjoiaHR0cDovLzE3Mi4yOC41MC40L2FkZnMvc2VydmljZXMvdHJ1c3QiLCJpYXQiOjE1ODUwMDI1OTMsImV4cCI6MTU4NTAwNjE5MywiYXBwdHlwZSI6IkNvbmZpZGVudGlhbCIsImFwcGlkIjoia2Fma2EtcHJvZHVjZXIiLCJhdXRobWV0aG9kIjoiaHR0cDovL3NjaGVtYXMubWljcm9zb2Z0LmNvbS93cy8yMDA4LzA2L2lkZW50aXR5L2F1dGhlbnRpY2F0aW9ubWV0aG9kL3Bhc3N3b3JkIiwiYXV0aF90aW1lIjoiMjAyMC0wMy0yM1QyMjoyOTo1My44ODNaIiwidmVyIjoiMS4wIn0.VTT7hHB3zWieogH5dZxaoT_eOcB0-cXwVoVI71xpvRlB5wQwBHd6-0xqdQdI5O-h-nzdKouroJY-YZ7t0P45Pio663j1GuVlkot8aPU37XomFd_k38iDxN1F2lMckAUIsFFwHqqf5CBjWawK04epqTibxbG7jHE36s-ewA7JaRLIgN0iXr0JKO-jUqW-s0QAJwU4h_UO0tUv6ttSS2wwvaUTyuM-H0jILwV7_rwVx9n8SCi74-4_alxjpskLKB7L3QZKu2ISdinpqFWppoMvJaRio2B78ezw0y9Ng3sv76MIivbvOnaOieqWAgMcog6cDzAqM4ApGz-4jYWeJWUpVw
    at io.strimzi.kafka.oauth.validator.JWTSignatureValidator.validate(JWTSignatureValidator.java:164)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.validateToken(JaasServerOauthValidatorCallbackHandler.java:224)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.handleCallback(JaasServerOauthValidatorCallbackHandler.java:154)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.handle(JaasServerOauthValidatorCallbackHandler.java:137)
    at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslServer.process(OAuthBearerSaslServer.java:156)
    at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslServer.evaluateResponse(OAuthBearerSaslServer.java:101)
    at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.handleSaslToken(SaslServerAuthenticator.java:451)
    at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:291)
    at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:173)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:547)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
    at kafka.network.Processor.poll(SocketServer.scala:890)
    at kafka.network.Processor.run(SocketServer.scala:789)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.keycloak.common.VerificationException: Failed to read access token from JWT
    at org.keycloak.TokenVerifier.parse(TokenVerifier.java:402)
    at org.keycloak.TokenVerifier.getHeader(TokenVerifier.java:416)
    at io.strimzi.kafka.oauth.validator.JWTSignatureValidator.validate(JWTSignatureValidator.java:162)
    ... 13 more
Caused by: org.keycloak.jose.jws.JWSInputException: com.fasterxml.jackson.databind.exc.InvalidFormatException: Cannot deserialize value of type `int` from String "2020-03-23T22:29:53.883Z": not a valid Integer value
 at [Source: (byte[])"{"aud":"urn:microsoft:userinfo","iss":"http://<ServerIP:Port>/adfs/services/trust","iat":1585002593,"exp":1585006193,"apptype":"Confidential","appid":"kafka-producer","authmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/password","auth_time":"2020-03-23T22:29:53.883Z","ver":"1.0"}";  line: 1, column: 270] (through reference chain: org.keycloak.representations.AccessToken["auth_time"])
    at org.keycloak.jose.jws.JWSInput.readJsonContent(JWSInput.java:104)
    at org.keycloak.TokenVerifier.parse(TokenVerifier.java:400)
    ... 15 more
Caused by: com.fasterxml.jackson.databind.exc.InvalidFormatException: Cannot deserialize value of type `int` from String "2020-03-23T22:29:53.883Z": not a valid Integer value
 at [Source: (byte[])"{"aud":"urn:microsoft:userinfo","iss":"http://<ServerIP:Port>/adfs/services/trust","iat":1585002593,"exp":1585006193,"apptype":"Confidential","appid":"kafka-producer","authmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/password","auth_time":"2020-03-23T22:29:53.883Z","ver":"1.0"}";  line: 1, column: 270] (through reference chain: org.keycloak.representations.AccessToken["auth_time"])
    at com.fasterxml.jackson.databind.exc.InvalidFormatException.from(InvalidFormatException.java:67)
    at com.fasterxml.jackson.databind.DeserializationContext.weirdStringException(DeserializationContext.java:1676)
    at com.fasterxml.jackson.databind.DeserializationContext.handleWeirdStringValue(DeserializationContext.java:932)
    at com.fasterxml.jackson.databind.deser.std.NumberDeserializers$IntegerDeserializer._parseInteger(NumberDeserializers.java:522)
    at com.fasterxml.jackson.databind.deser.std.NumberDeserializers$IntegerDeserializer.deserialize(NumberDeserializers.java:474)
    at com.fasterxml.jackson.databind.deser.std.NumberDeserializers$IntegerDeserializer.deserialize(NumberDeserializers.java:452)
    at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129)
    at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288)
    at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151)
    at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4202)
    at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3266)
    at org.keycloak.util.JsonSerialization.readValue(JsonSerialization.java:71)
    at org.keycloak.jose.jws.JWSInput.readJsonContent(JWSInput.java:102)
    ... 16 more
2020-03-23 22:30:07,407 INFO [SocketServer brokerId=2] Failed authentication with gateway/10.130.0.1 ({"status":"invalid_token"}) (org.apache.kafka.common.network.Selector) [data-plane-kafka-network-thread-2-ListenerName(EXTERNAL-9094)-SASL_SSL-10]
mstruk commented 4 years ago

@daxtergithub It appears that the auth_time attribute in the token does not conform to the OAuth 2.0 User Authentication and Consent for Clients draft, which the Keycloak core library we use for parsing tokens apparently follows. The value is supposed to be the number of seconds since 1970-01-01 UTC - an integer, not a string.

While we make no use of this attribute and could safely skip it, the library's parsing logic tries to parse it as an integer when it encounters it.

Is there any way for you to influence how this attribute is encoded in the token (some ADFS configuration)?
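
To illustrate the mismatch, here is a minimal Python sketch (a toy unsigned token for demonstration only - not Strimzi or Keycloak code) showing why a parser that expects an integer NumericDate rejects the ADFS-style claim:

```python
import base64
import json

def b64url(obj: dict) -> str:
    """Base64url-encode a dict as a JWT segment (padding stripped)."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

def decode_jwt_payload(token: str) -> dict:
    """Base64url-decode the payload segment of a JWT (no signature check)."""
    seg = token.split(".")[1]
    seg += "=" * (-len(seg) % 4)  # restore the stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(seg))

# Toy token with the problematic claim shape (empty signature segment):
token = ".".join([
    b64url({"alg": "RS256", "typ": "JWT"}),
    b64url({"iat": 1585002593,                          # NumericDate: int, OK
            "auth_time": "2020-03-23T22:29:53.883Z"}),  # ISO string: rejected
    "",
])

claims = decode_jwt_payload(token)
assert isinstance(claims["iat"], int)            # conforms to the spec
assert not isinstance(claims["auth_time"], int)  # this is what the parser chokes on
```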

daxtergithub commented 4 years ago

Hi @mstruk Thank you for the response and updates. We checked with our ADFS admin team, and they advised that they cannot change or influence how these attributes are encoded. So, as an alternative, we tried Azure Active Directory (cloud-based) and used the v1 token API endpoint to obtain a JWT. The token we received does not seem to include the "auth_time" attribute at all. However, the broker now fails with the exception "Public key not set". Below are the log details.

2020-03-25 04:57:34,144 DEBUG Token: {"aud":"00000002-0000-0000-c000-000000000000","iss":"https://<url>/","iat":1585111948,"nbf":1585111948,"exp":1585115848,"aio":"42dgYIj7/fbCHMnvx76VR/pd61eeBAA=","appid":"<app-id>","appidacr":"1","idp":"https://<url>","oid":"xxx","sub":"xxx","tenant_region_scope":"OC","tid":"XXX-XXX","uti":"TcEDeHfJB0OTjHK9m6l1AA","ver":"1.0 "} (io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SASL_SSL-9]
2020-03-25 04:57:34,209 DEBUG Access token expires at (UTC): 2020-03-25T05:57:28 (io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SASL_SSL-9]
2020-03-25 04:57:34,215 WARN The cached public key with id 'YMELHT0gvb0mxoSDoYfomjqfjYU' is expired! (io.strimzi.kafka.oauth.validator.JWTSignatureValidator) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SASL_SSL-9]
2020-03-25 04:57:34,221 DEBUG Validation failed for token: eyJ0**SLPw (io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SASL_SSL-9]
io.strimzi.kafka.oauth.validator.TokenValidationException: Token validation failed:
    at io.strimzi.kafka.oauth.validator.JWTSignatureValidator.validate(JWTSignatureValidator.java:183)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.validateToken(JaasServerOauthValidatorCallbackHandler.java:224)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.handleCallback(JaasServerOauthValidatorCallbackHandler.java:154)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.handle(JaasServerOauthValidatorCallbackHandler.java:137)
    at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslServer.process(OAuthBearerSaslServer.java:156)
    at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslServer.evaluateResponse(OAuthBearerSaslServer.java:101)
    at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.handleSaslToken(SaslServerAuthenticator.java:451)
    at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:291)
    at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:173)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:547)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
    at kafka.network.Processor.poll(SocketServer.scala:890)
    at kafka.network.Processor.run(SocketServer.scala:789)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.keycloak.common.VerificationException: Public key not set
    at org.keycloak.TokenVerifier.verifySignature(TokenVerifier.java:437)
    at org.keycloak.TokenVerifier.verify(TokenVerifier.java:462)
    at io.strimzi.kafka.oauth.validator.JWTSignatureValidator.validate(JWTSignatureValidator.java:176)
    ... 13 more
2020-03-25 04:57:34,226 INFO [SocketServer brokerId=0] Failed authentication with gateway/10.130.0.1 ({"status":"invalid_token"}) (org.apache.kafka.common.network.Selector) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SASL_SSL-9]
mstruk commented 4 years ago

The error means that the public key corresponding to the private key the authorization server used to sign the access token is not published at the configured jwksEndpointUri (in your previous log set to https://&lt;ServerIP:Port&gt;/adfs/discovery/keys).

It looks like when switching to the Azure Active Directory v1 token API endpoint, you also have to configure the corresponding JWKS endpoint.
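
For example, the Azure AD v1 endpoints typically look like the following (a sketch with a &lt;tenant-id&gt; placeholder - verify the exact URIs against your tenant's OpenID Connect metadata document rather than taking these as given):

```yaml
authentication:
  type: oauth
  # v1 tokens are issued by sts.windows.net, not login.microsoftonline.com
  validIssuerUri: https://sts.windows.net/<tenant-id>/
  # v1 signing-keys endpoint (the v2 endpoint is .../discovery/v2.0/keys)
  jwksEndpointUri: https://login.microsoftonline.com/<tenant-id>/discovery/keys
```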

daxtergithub commented 4 years ago

@mstruk Thanks for your response. I double-checked, and the public key is not available at the JWKS endpoint URI. Investigating further and comparing with the Keycloak configuration, I noticed similar behavior there, i.e. the public key is not available at the JWKS endpoint either. Instead, the public key being used to validate the token is available at the issuer URI endpoint. I am wondering whether it fetches the public key dynamically from the issuer URI endpoint rather than from the JWT signature.

mstruk commented 4 years ago

@daxtergithub When you configure jwksEndpointUri, that is the only source of public keys that JWTSignatureValidator uses. The signed JWT access token contains a kid in its header, which identifies the key to use for the signature check.
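
The lookup works roughly like this (a minimal Python sketch with hypothetical data - the real validator of course also verifies the signature with the selected key):

```python
import base64
import json

# A JWKS document as served by the keys endpoint (key material elided):
jwks = {"keys": [
    {"kty": "RSA", "use": "sig", "kid": "YMELHT0gvb0mxoSDoYfomjqfjYU",
     "n": "...", "e": "AQAB"},
]}

def select_key(header_seg: str, jwks: dict) -> dict:
    """Decode a JWT header segment and find the JWKS entry with a matching kid."""
    seg = header_seg + "=" * (-len(header_seg) % 4)  # restore base64 padding
    header = json.loads(base64.urlsafe_b64decode(seg))
    for key in jwks["keys"]:
        if key["kid"] == header["kid"]:
            return key
    raise KeyError(f"No JWKS entry for kid {header['kid']!r}")

# Header segment of a token signed with the key above:
header_seg = base64.urlsafe_b64encode(
    json.dumps({"alg": "RS256", "kid": "YMELHT0gvb0mxoSDoYfomjqfjYU"}).encode()
).decode().rstrip("=")

key = select_key(header_seg, jwks)  # succeeds: kid is present in the JWKS
```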

saneera commented 4 years ago

Hi @mstruk Thanks for the response. We have double-checked the token header and matched its "kid" value against the JWKS endpoint URI, and the "kid" value is present there. Token validation is still failing with the same exception as above.

JWKS endpoint URI details snapshot:

{"keys":[{"kty":"RSA","use":"sig","kid":"YMELHT0gvb0mxoSDoYfomjqfjYU","x5t":"YMELHT0gvb0mxoSDoYfomjqfjYU","n":"ni9SAyu9EsltQlV7Jo3wMUddvcpYb4mmfHzV4IsDZ6NQvJjtQJsduhsfqiG86VntMd76R44kCmkfMGvtQRAdd2_UmnVBSSLxQKvcGUqNodH7YaMYOTmHlbOSoVpi3Ox2wj6cWsaTTm_4xzJ3F0yF0Y_aRBMxSCIwLv3nTMRNe74k4zdBnsfdsfsfsY_vUGt_5-sPo6BXoV7oov4Ps6jeyUdRKtqVZSp5_kzz16kPh1Ng_2tn4vpQimNbHRralq8rNM_gOLPAar6v7mL_qsqpgx-48e5ENFxikbB-NzAmLll1QSkzciu2rCjFGH4j_-bCHr7FxUNDL_E0vMFVDFw8SUlYMgQ","e":"AQAB","x5c"

JWT Token Header:


{
  "typ": "JWT",
  "alg": "RS256",
  "x5t": "YMELHT0gvb0mxoSDoYfomjqfjYU",
  "kid": "YMELHT0gvb0mxoSDoYfomjqfjYU"
}

{
  "aud": "00000002-0000-0000-c000-000000000000",
  "iss": "https://sts.windows.net/xxx-xx-xxx",
  "iat": 1585650228,
  "nbf": 1585650228,
  "exp": 1585654128,
  "aio": "42dgYNhtx6CvcEDLVOhQXIMeo/oCAA==",
  "appid": "9e22feae-32d3-488c-be69-0e112149f19b",
  "appidacr": "1",
  "idp": "https://sts.windows.net/xxx-xx-xxx/",
  "oid": "0c5b0cdb-e305-4b59-ae6c-541c2a5b8592",
  "sub": "0c5b0cdb-e305-4b59-ae6c-541c2a5b8592",
  "tenant_region_scope": "<region>",
  "tid": "xxx-xx-xxx",
  "uti": "sDnMmH4mbESWWJ302P8YAA",
  "ver": "1.0"
}
mstruk commented 4 years ago

@saneera Could you share more of the Kafka broker log, especially anything logged by the io.strimzi.kafka.oauth.validator.JWTSignatureValidator logger?

The following line means that the keys should have been refreshed already but weren't:

2020-03-25 04:57:34,215 WARN The cached public key with id 'YMELHT0gvb0mxoSDoYfomjqfjYU' is expired! (io.strimzi.kafka.oauth.validator.JWTSignatureValidator) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SASL_SSL-9]

This can happen if there is a connectivity issue, but then you should see the failed attempt as an exception in the log. Something like:

java.lang.RuntimeException: Failed to fetch public keys needed to validate JWT signatures ...

When the Kafka broker starts, it loads the keys from the JWKS endpoint for the first time. It then reloads them once every oauth.jwks.refresh.seconds. If the initial loading of keys succeeds but you later see the warning above, then one of the subsequent reloads is apparently failing.

One thing you can do is configure a much longer oauth.jwks.expiry.seconds so that multiple reload attempts can happen. For example:

oauth.jwks.refresh.seconds="300"
oauth.jwks.expiry.seconds="960"

This will attempt the first reload after 5 minutes, the second after 10 minutes, and the third after 15 minutes; only after that, at 16 minutes, will the initially loaded keys be considered expired. This gives you some resiliency when multiple reload attempts fail.

But the point here is that no reload should ever fail. Your Kafka brokers and clients have to contact your authorization server all the time (whenever a new Kafka connection is established), so an unreliable connection to the authorization server will just cause constant failures.
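The interaction between the two settings can be sketched as a small calculation. Assuming keys loaded at startup count as time zero, the reload attempts that happen before the initially loaded keys expire are:

```python
def reload_schedule(refresh_seconds: int, expiry_seconds: int) -> list[int]:
    """Reload attempt times (seconds after startup) that occur before
    the initially loaded keys are considered expired."""
    attempts = []
    t = refresh_seconds
    while t < expiry_seconds:
        attempts.append(t)
        t += refresh_seconds
    return attempts

# The settings suggested above: refresh every 5 minutes, expire after 16 minutes
print(reload_schedule(300, 960))  # [300, 600, 900] -> three chances to refresh
```

With the defaults shown in the logs in this thread (refresh 300s, expiry 360s) there is only a single reload attempt before expiry, so one failed fetch is enough to hit the "cached public key ... is expired" warning.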

daxtergithub commented 4 years ago

@mstruk Thanks for your reply. We have re-run the application, but we still get the same error. I attached the log file generated on the cluster from the beginning, and I didn't notice any errors while the cluster was starting.

2020-04-01 08:49:35,614 DEBUG Configured JWTSignatureValidator:
    keysEndpointUri: https://login.microsoftonline.com/xxx-xxx-xxx/discovery/v2.0/keys 
    sslSocketFactory: sun.security.ssl.SSLSocketFactoryImpl@3dd1dc90
    hostnameVerifier: io.strimzi.kafka.oauth.common.SSLUtil$$Lambda$347/1563053805@a5b0b86
    validIssuerUri: <url>/ 
    certsRefreshSeconds: 300
    certsExpirySeconds: 360
    skipTypeCheck: false (io.strimzi.kafka.oauth.validator.JWTSignatureValidator) [main]
2020-04-01 08:49:35,630 INFO Retrieved token with principal thePrincipalName (org.apache.kafka.common.security.oauthbearer.internals.unsecured.OAuthBearerUnsecuredLoginCallbackHandler) [main]
2020-04-01 08:49:35,632 INFO Successfully logged in. (org.apache.kafka.common.security.authenticator.AbstractLogin) [main]
2020-04-01 08:49:35,889 DEBUG Configured JWTSignatureValidator:
    keysEndpointUri: https://login.microsoftonline.com/xxx-xxx-xxx/discovery/v2.0/keys 
    sslSocketFactory: sun.security.ssl.SSLSocketFactoryImpl@470a696f
    hostnameVerifier: io.strimzi.kafka.oauth.common.SSLUtil$$Lambda$347/1563053805@a5b0b86
    validIssuerUri: <url>/ 
    certsRefreshSeconds: 300
    certsExpirySeconds: 360
    skipTypeCheck: false (io.strimzi.kafka.oauth.validator.JWTSignatureValidator) [main]
2020-04-01 08:49:36,140 DEBUG Configured JWTSignatureValidator:
    keysEndpointUri: https://login.microsoftonline.com/xxx-xxx-xxx/discovery/v2.0/keys 
    sslSocketFactory: sun.security.ssl.SSLSocketFactoryImpl@e260766
    hostnameVerifier: io.strimzi.kafka.oauth.common.SSLUtil$$Lambda$347/1563053805@a5b0b86
    validIssuerUri: <url>/ 
    certsRefreshSeconds: 300
    certsExpirySeconds: 360
    skipTypeCheck: false (io.strimzi.kafka.oauth.validator.JWTSignatureValidator) [main]
2020-04-01 08:49:36,153 INFO [SocketServer brokerId=0] Created data-plane acceptor and processors for endpoint : EndPoint(0.0.0.0,9094,ListenerName(EXTERNAL-9094),SASL_SSL) (kafka.network.SocketServer) [main]
2020-04-01 08:49:36,154 INFO [SocketServer brokerId=0] Started 4 acceptor threads for data-plane (kafka.network.SocketServer) [main]
2020-04-01 08:49:36,191 INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-Produce]
2020-04-01 08:49:36,194 INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-Fetch]
2020-04-01 08:49:36,197 INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-DeleteRecords]
2020-04-01 08:49:36,199 INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-ElectLeader]
2020-04-01 08:49:36,226 INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) [LogDirFailureHandler]
2020-04-01 08:49:36,292 INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient) [main]
2020-04-01 08:49:36,330 INFO Stat of the created znode at /brokers/ids/0 is: 4294967384,4294967384,1585730976307,1585730976307,1,0,0,144149215329648640,548,0,4294967384
 (kafka.zk.KafkaZkClient) [main]
2020-04-01 08:49:36,331 INFO Registered broker 0 at path /brokers/ids/0 with addresses: ArrayBuffer(EndPoint(my-cluster-kafka-0.my-cluster-kafka-brokers.kafka-ad.svc,9091,ListenerName(REPLICATION-9091),SSL), EndPoint(my-cluster-kafka-0.my-cluster-kafka-brokers.kafka-ad.svc,9092,ListenerName(PLAIN-9092),PLAINTEXT), EndPoint(my-cluster-kafka-0.my-cluster-kafka-brokers.kafka-ad.svc,9093,ListenerName(TLS-9093),SSL), EndPoint(13.70.108.243,9094,ListenerName(EXTERNAL-9094),SASL_SSL)), czxid (broker epoch): 4294967384 (kafka.zk.KafkaZkClient) [main]
2020-04-01 08:49:36,432 INFO [ControllerEventThread controllerId=0] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) [controller-event-thread]
2020-04-01 08:49:36,489 INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-topic]
2020-04-01 08:49:36,504 INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-Heartbeat]
2020-04-01 08:49:36,519 INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-Rebalance]
2020-04-01 08:49:36,539 DEBUG [Controller id=0] Broker 2 has been elected as the controller, so stopping the election process. (kafka.controller.KafkaController) [controller-event-thread]
2020-04-01 08:49:36,607 INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator) [main]
2020-04-01 08:49:36,609 INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator) [main]
2020-04-01 08:49:36,636 INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 23 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2020-04-01 08:49:36,659 INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:1000,blockEndProducerId:1999) by writing to Zk with path version 2 (kafka.coordinator.transaction.ProducerIdManager) [main]
2020-04-01 08:49:36,792 INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) [main]
2020-04-01 08:49:36,794 INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) [TxnMarkerSenderThread-0]
2020-04-01 08:49:36,794 INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) [main]
2020-04-01 08:49:36,841 INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-AlterAcls]
2020-04-01 08:49:36,869 INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) [/config/changes-event-process-thread]
2020-04-01 08:49:36,895 INFO [SocketServer brokerId=0] Started data-plane processors for 4 acceptors (kafka.network.SocketServer) [main]
2020-04-01 08:49:36,897 INFO Kafka version: 2.4.0 (org.apache.kafka.common.utils.AppInfoParser) [main]
2020-04-01 08:49:36,897 INFO Kafka commitId: 77a89fcf8d7fa018 (org.apache.kafka.common.utils.AppInfoParser) [main]
2020-04-01 08:49:36,898 INFO Kafka startTimeMs: 1585730976895 (org.apache.kafka.common.utils.AppInfoParser) [main]
2020-04-01 08:49:36,899 INFO [KafkaServer id=0] started (kafka.server.KafkaServer) [main]
2020-04-01 08:49:37,465 INFO Running as server according to kafka.server:type=KafkaServer,name=BrokerState => ready (io.strimzi.kafka.agent.KafkaAgent) [KafkaAgentPoller]
2020-04-01 08:49:37,465 DEBUG Exiting thread (io.strimzi.kafka.agent.KafkaAgent) [KafkaAgentPoller]
2020-04-01 08:53:12,804 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=19, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 1, 0], zkVersion=0, replicas=[2, 1, 0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,804 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 0, 2], zkVersion=0, replicas=[1, 0, 2], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,804 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=0, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 2, 1], zkVersion=0, replicas=[0, 2, 1], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,804 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 2, 0], zkVersion=0, replicas=[1, 2, 0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,804 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 2, 0], zkVersion=0, replicas=[1, 2, 0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,804 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=16, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 0, 1], zkVersion=0, replicas=[2, 0, 1], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,804 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=13, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 1, 0], zkVersion=0, replicas=[2, 1, 0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,804 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 0, 2], zkVersion=0, replicas=[1, 0, 2], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,804 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=24, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 2, 1], zkVersion=0, replicas=[0, 2, 1], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,804 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=21, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 1, 2], zkVersion=0, replicas=[0, 1, 2], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,804 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=10, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 0, 1], zkVersion=0, replicas=[2, 0, 1], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,805 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=15, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 1, 2], zkVersion=0, replicas=[0, 1, 2], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,805 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=18, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 2, 1], zkVersion=0, replicas=[0, 2, 1], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,805 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=7, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 1, 0], zkVersion=0, replicas=[2, 1, 0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,805 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 2, 0], zkVersion=0, replicas=[1, 2, 0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,805 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=12, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 2, 1], zkVersion=0, replicas=[0, 2, 1], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,805 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=1, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 1, 0], zkVersion=0, replicas=[2, 1, 0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,805 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=4, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 0, 1], zkVersion=0, replicas=[2, 0, 1], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,805 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=9, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 1, 2], zkVersion=0, replicas=[0, 1, 2], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,805 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 0, 2], zkVersion=0, replicas=[1, 0, 2], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,805 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 2, 0], zkVersion=0, replicas=[1, 2, 0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,805 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=6, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 2, 1], zkVersion=0, replicas=[0, 2, 1], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,805 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=22, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 0, 1], zkVersion=0, replicas=[2, 0, 1], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,805 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 0, 2], zkVersion=0, replicas=[1, 0, 2], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,805 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-offsets', partitionIndex=3, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 1, 2], zkVersion=0, replicas=[0, 1, 2], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 2 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,906 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-leader transition for partition connect-cluster-offsets-24 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,909 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-leader transition for partition connect-cluster-offsets-21 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,910 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-leader transition for partition connect-cluster-offsets-12 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,910 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-leader transition for partition connect-cluster-offsets-9 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,910 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-leader transition for partition connect-cluster-offsets-18 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,910 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-leader transition for partition connect-cluster-offsets-6 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,910 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-leader transition for partition connect-cluster-offsets-15 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,910 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-leader transition for partition connect-cluster-offsets-3 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,910 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-leader transition for partition connect-cluster-offsets-0 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:12,916 INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(connect-cluster-offsets-3, connect-cluster-offsets-9, connect-cluster-offsets-15, connect-cluster-offsets-0, connect-cluster-offsets-12, connect-cluster-offsets-24, connect-cluster-offsets-6, connect-cluster-offsets-18, connect-cluster-offsets-21) (kafka.server.ReplicaFetcherManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:13,827 INFO [Log partition=connect-cluster-offsets-24, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:13,845 INFO [Log partition=connect-cluster-offsets-24, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 684 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:13,849 INFO Created log for partition connect-cluster-offsets-24 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-24 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:13,851 INFO [Partition connect-cluster-offsets-24 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-24 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:13,853 INFO [Partition connect-cluster-offsets-24 broker=0] Log loaded for partition connect-cluster-offsets-24 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:13,855 INFO [Partition connect-cluster-offsets-24 broker=0] connect-cluster-offsets-24 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:13,912 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-24 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:13,956 INFO [Log partition=connect-cluster-offsets-21, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:13,957 INFO [Log partition=connect-cluster-offsets-21, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 19 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:13,960 INFO Created log for partition connect-cluster-offsets-21 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-21 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:13,960 INFO [Partition connect-cluster-offsets-21 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-21 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:13,960 INFO [Partition connect-cluster-offsets-21 broker=0] Log loaded for partition connect-cluster-offsets-21 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:13,960 INFO [Partition connect-cluster-offsets-21 broker=0] connect-cluster-offsets-21 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:13,972 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-21 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:13,981 INFO [Log partition=connect-cluster-offsets-12, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:13,982 INFO [Log partition=connect-cluster-offsets-12, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:13,984 INFO Created log for partition connect-cluster-offsets-12 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-12 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:13,984 INFO [Partition connect-cluster-offsets-12 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-12 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:13,984 INFO [Partition connect-cluster-offsets-12 broker=0] Log loaded for partition connect-cluster-offsets-12 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:13,984 INFO [Partition connect-cluster-offsets-12 broker=0] connect-cluster-offsets-12 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:13,995 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-12 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,005 INFO [Log partition=connect-cluster-offsets-9, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,007 INFO [Log partition=connect-cluster-offsets-9, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,008 INFO Created log for partition connect-cluster-offsets-9 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-9 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,008 INFO [Partition connect-cluster-offsets-9 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-9 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,008 INFO [Partition connect-cluster-offsets-9 broker=0] Log loaded for partition connect-cluster-offsets-9 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,008 INFO [Partition connect-cluster-offsets-9 broker=0] connect-cluster-offsets-9 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,026 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-9 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,040 INFO [Log partition=connect-cluster-offsets-18, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,041 INFO [Log partition=connect-cluster-offsets-18, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,042 INFO Created log for partition connect-cluster-offsets-18 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-18 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,042 INFO [Partition connect-cluster-offsets-18 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-18 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,042 INFO [Partition connect-cluster-offsets-18 broker=0] Log loaded for partition connect-cluster-offsets-18 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,042 INFO [Partition connect-cluster-offsets-18 broker=0] connect-cluster-offsets-18 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,067 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-18 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,089 INFO [Log partition=connect-cluster-offsets-6, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,091 INFO [Log partition=connect-cluster-offsets-6, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 9 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,099 INFO Created log for partition connect-cluster-offsets-6 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-6 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,099 INFO [Partition connect-cluster-offsets-6 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-6 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,099 INFO [Partition connect-cluster-offsets-6 broker=0] Log loaded for partition connect-cluster-offsets-6 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,101 INFO [Partition connect-cluster-offsets-6 broker=0] connect-cluster-offsets-6 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,120 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-6 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,143 INFO [Log partition=connect-cluster-offsets-15, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,145 INFO [Log partition=connect-cluster-offsets-15, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 5 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,154 INFO Created log for partition connect-cluster-offsets-15 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-15 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,154 INFO [Partition connect-cluster-offsets-15 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-15 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,154 INFO [Partition connect-cluster-offsets-15 broker=0] Log loaded for partition connect-cluster-offsets-15 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,154 INFO [Partition connect-cluster-offsets-15 broker=0] connect-cluster-offsets-15 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,164 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-15 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,174 INFO [Log partition=connect-cluster-offsets-3, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,175 INFO [Log partition=connect-cluster-offsets-3, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,179 INFO Created log for partition connect-cluster-offsets-3 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-3 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,179 INFO [Partition connect-cluster-offsets-3 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-3 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,179 INFO [Partition connect-cluster-offsets-3 broker=0] Log loaded for partition connect-cluster-offsets-3 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,179 INFO [Partition connect-cluster-offsets-3 broker=0] connect-cluster-offsets-3 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,188 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-3 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,198 INFO [Log partition=connect-cluster-offsets-0, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,198 INFO [Log partition=connect-cluster-offsets-0, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,199 INFO Created log for partition connect-cluster-offsets-0 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,199 INFO [Partition connect-cluster-offsets-0 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,200 INFO [Partition connect-cluster-offsets-0 broker=0] Log loaded for partition connect-cluster-offsets-0 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,200 INFO [Partition connect-cluster-offsets-0 broker=0] connect-cluster-offsets-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,234 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-0 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,237 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-leader transition for partition connect-cluster-offsets-24 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,237 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-leader transition for partition connect-cluster-offsets-21 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,237 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-leader transition for partition connect-cluster-offsets-12 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,239 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-leader transition for partition connect-cluster-offsets-9 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,239 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-leader transition for partition connect-cluster-offsets-18 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,239 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-leader transition for partition connect-cluster-offsets-6 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,239 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-leader transition for partition connect-cluster-offsets-15 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,239 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-leader transition for partition connect-cluster-offsets-3 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,239 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-leader transition for partition connect-cluster-offsets-0 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,244 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-follower transition for partition connect-cluster-offsets-16 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,244 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-follower transition for partition connect-cluster-offsets-13 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,244 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-follower transition for partition connect-cluster-offsets-10 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,250 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-follower transition for partition connect-cluster-offsets-7 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,250 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-follower transition for partition connect-cluster-offsets-4 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,250 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-follower transition for partition connect-cluster-offsets-23 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,250 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-follower transition for partition connect-cluster-offsets-20 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,250 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-follower transition for partition connect-cluster-offsets-1 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,250 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-follower transition for partition connect-cluster-offsets-17 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,250 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-follower transition for partition connect-cluster-offsets-14 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,250 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-follower transition for partition connect-cluster-offsets-11 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,250 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-follower transition for partition connect-cluster-offsets-8 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,250 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-follower transition for partition connect-cluster-offsets-5 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,250 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-follower transition for partition connect-cluster-offsets-2 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,250 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-follower transition for partition connect-cluster-offsets-22 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,250 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 2 from controller 2 epoch 1 starting the become-follower transition for partition connect-cluster-offsets-19 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,277 INFO [Log partition=connect-cluster-offsets-16, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,279 INFO [Log partition=connect-cluster-offsets-16, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 9 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,281 INFO Created log for partition connect-cluster-offsets-16 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-16 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,281 INFO [Partition connect-cluster-offsets-16 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-16 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,281 INFO [Partition connect-cluster-offsets-16 broker=0] Log loaded for partition connect-cluster-offsets-16 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,290 INFO [Log partition=connect-cluster-offsets-13, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,291 INFO [Log partition=connect-cluster-offsets-13, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,305 INFO Created log for partition connect-cluster-offsets-13 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-13 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,306 INFO [Partition connect-cluster-offsets-13 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-13 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,306 INFO [Partition connect-cluster-offsets-13 broker=0] Log loaded for partition connect-cluster-offsets-13 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,323 INFO [Log partition=connect-cluster-offsets-10, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,324 INFO [Log partition=connect-cluster-offsets-10, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 8 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,326 INFO Created log for partition connect-cluster-offsets-10 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-10 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,326 INFO [Partition connect-cluster-offsets-10 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-10 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,326 INFO [Partition connect-cluster-offsets-10 broker=0] Log loaded for partition connect-cluster-offsets-10 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,354 INFO [Log partition=connect-cluster-offsets-7, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,355 INFO [Log partition=connect-cluster-offsets-7, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 21 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,360 INFO Created log for partition connect-cluster-offsets-7 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-7 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,360 INFO [Partition connect-cluster-offsets-7 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-7 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,360 INFO [Partition connect-cluster-offsets-7 broker=0] Log loaded for partition connect-cluster-offsets-7 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,370 INFO [Log partition=connect-cluster-offsets-4, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,382 INFO [Log partition=connect-cluster-offsets-4, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 14 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,385 INFO Created log for partition connect-cluster-offsets-4 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-4 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,385 INFO [Partition connect-cluster-offsets-4 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-4 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,385 INFO [Partition connect-cluster-offsets-4 broker=0] Log loaded for partition connect-cluster-offsets-4 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,399 INFO [Log partition=connect-cluster-offsets-23, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,400 INFO [Log partition=connect-cluster-offsets-23, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 5 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,404 INFO Created log for partition connect-cluster-offsets-23 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-23 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,405 INFO [Partition connect-cluster-offsets-23 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-23 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,405 INFO [Partition connect-cluster-offsets-23 broker=0] Log loaded for partition connect-cluster-offsets-23 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,432 INFO [Log partition=connect-cluster-offsets-20, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,433 INFO [Log partition=connect-cluster-offsets-20, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 7 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,436 INFO Created log for partition connect-cluster-offsets-20 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-20 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,436 INFO [Partition connect-cluster-offsets-20 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-20 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,436 INFO [Partition connect-cluster-offsets-20 broker=0] Log loaded for partition connect-cluster-offsets-20 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,467 INFO [Log partition=connect-cluster-offsets-1, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,474 INFO [Log partition=connect-cluster-offsets-1, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 9 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,477 INFO Created log for partition connect-cluster-offsets-1 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-1 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,478 INFO [Partition connect-cluster-offsets-1 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,478 INFO [Partition connect-cluster-offsets-1 broker=0] Log loaded for partition connect-cluster-offsets-1 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,504 INFO [Log partition=connect-cluster-offsets-17, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,506 INFO [Log partition=connect-cluster-offsets-17, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,512 INFO Created log for partition connect-cluster-offsets-17 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-17 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,512 INFO [Partition connect-cluster-offsets-17 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-17 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,512 INFO [Partition connect-cluster-offsets-17 broker=0] Log loaded for partition connect-cluster-offsets-17 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,523 INFO [Log partition=connect-cluster-offsets-14, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,523 INFO [Log partition=connect-cluster-offsets-14, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,525 INFO Created log for partition connect-cluster-offsets-14 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-14 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,525 INFO [Partition connect-cluster-offsets-14 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-14 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,525 INFO [Partition connect-cluster-offsets-14 broker=0] Log loaded for partition connect-cluster-offsets-14 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,533 INFO [Log partition=connect-cluster-offsets-11, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,534 INFO [Log partition=connect-cluster-offsets-11, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,542 INFO Created log for partition connect-cluster-offsets-11 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-11 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,542 INFO [Partition connect-cluster-offsets-11 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-11 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,543 INFO [Partition connect-cluster-offsets-11 broker=0] Log loaded for partition connect-cluster-offsets-11 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,565 INFO [Log partition=connect-cluster-offsets-8, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,569 INFO [Log partition=connect-cluster-offsets-8, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 12 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,574 INFO Created log for partition connect-cluster-offsets-8 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-8 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,578 INFO [Partition connect-cluster-offsets-8 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-8 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,580 INFO [Partition connect-cluster-offsets-8 broker=0] Log loaded for partition connect-cluster-offsets-8 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,603 INFO [Log partition=connect-cluster-offsets-5, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,604 INFO [Log partition=connect-cluster-offsets-5, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,605 INFO Created log for partition connect-cluster-offsets-5 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-5 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,606 INFO [Partition connect-cluster-offsets-5 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-5 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,606 INFO [Partition connect-cluster-offsets-5 broker=0] Log loaded for partition connect-cluster-offsets-5 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,658 INFO [Log partition=connect-cluster-offsets-2, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,659 INFO [Log partition=connect-cluster-offsets-2, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 8 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,662 INFO Created log for partition connect-cluster-offsets-2 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-2 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,662 INFO [Partition connect-cluster-offsets-2 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-2 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,663 INFO [Partition connect-cluster-offsets-2 broker=0] Log loaded for partition connect-cluster-offsets-2 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,676 INFO [Log partition=connect-cluster-offsets-22, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,680 INFO [Log partition=connect-cluster-offsets-22, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 7 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,685 INFO Created log for partition connect-cluster-offsets-22 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-22 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,685 INFO [Partition connect-cluster-offsets-22 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-22 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,686 INFO [Partition connect-cluster-offsets-22 broker=0] Log loaded for partition connect-cluster-offsets-22 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,697 INFO [Log partition=connect-cluster-offsets-19, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,705 INFO [Log partition=connect-cluster-offsets-19, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 11 ms (kafka.log.Log) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,706 INFO Created log for partition connect-cluster-offsets-19 in /var/lib/kafka/data/kafka-log0/connect-cluster-offsets-19 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,706 INFO [Partition connect-cluster-offsets-19 broker=0] No checkpointed highwatermark is found for partition connect-cluster-offsets-19 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,707 INFO [Partition connect-cluster-offsets-19 broker=0] Log loaded for partition connect-cluster-offsets-19 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,709 INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(connect-cluster-offsets-14, connect-cluster-offsets-22, connect-cluster-offsets-7, connect-cluster-offsets-11, connect-cluster-offsets-19, connect-cluster-offsets-8, connect-cluster-offsets-4, connect-cluster-offsets-1, connect-cluster-offsets-23, connect-cluster-offsets-16, connect-cluster-offsets-5, connect-cluster-offsets-20, connect-cluster-offsets-13, connect-cluster-offsets-2, connect-cluster-offsets-17, connect-cluster-offsets-10) (kafka.server.ReplicaFetcherManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,711 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-1 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,712 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-4 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,712 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-7 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,712 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-10 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,712 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-13 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,712 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-16 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,712 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-19 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,712 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-22 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,712 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-2 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,712 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-5 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,712 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-8 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,712 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-11 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,712 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-14 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,712 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-17 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,712 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-20 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,712 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-23 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,718 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition connect-cluster-offsets-1 as part of become-follower request with correlation id 2 from controller 2 epoch 1 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,718 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition connect-cluster-offsets-4 as part of become-follower request with correlation id 2 from controller 2 epoch 1 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,718 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition connect-cluster-offsets-7 as part of become-follower request with correlation id 2 from controller 2 epoch 1 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,719 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition connect-cluster-offsets-10 as part of become-follower request with correlation id 2 from controller 2 epoch 1 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,719 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition connect-cluster-offsets-13 as part of become-follower request with correlation id 2 from controller 2 epoch 1 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,719 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition connect-cluster-offsets-16 as part of become-follower request with correlation id 2 from controller 2 epoch 1 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,719 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition connect-cluster-offsets-19 as part of become-follower request with correlation id 2 from controller 2 epoch 1 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,719 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition connect-cluster-offsets-22 as part of become-follower request with correlation id 2 from controller 2 epoch 1 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,719 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition connect-cluster-offsets-2 as part of become-follower request with correlation id 2 from controller 2 epoch 1 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,719 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition connect-cluster-offsets-5 as part of become-follower request with correlation id 2 from controller 2 epoch 1 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,719 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition connect-cluster-offsets-8 as part of become-follower request with correlation id 2 from controller 2 epoch 1 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,719 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition connect-cluster-offsets-11 as part of become-follower request with correlation id 2 from controller 2 epoch 1 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,719 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition connect-cluster-offsets-14 as part of become-follower request with correlation id 2 from controller 2 epoch 1 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,719 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition connect-cluster-offsets-17 as part of become-follower request with correlation id 2 from controller 2 epoch 1 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,719 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition connect-cluster-offsets-20 as part of become-follower request with correlation id 2 from controller 2 epoch 1 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,719 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition connect-cluster-offsets-23 as part of become-follower request with correlation id 2 from controller 2 epoch 1 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:14,983 INFO [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Starting (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:14,986 INFO [ReplicaFetcherManager on broker 0] Added fetcher to broker BrokerEndPoint(id=2, host=my-cluster-kafka-2.my-cluster-kafka-brokers.kafka-ad.svc:9091) for partitions Map(connect-cluster-offsets-19 -> (offset=0, leaderEpoch=0), connect-cluster-offsets-13 -> (offset=0, leaderEpoch=0), connect-cluster-offsets-16 -> (offset=0, leaderEpoch=0), connect-cluster-offsets-22 -> (offset=0, leaderEpoch=0), connect-cluster-offsets-1 -> (offset=0, leaderEpoch=0), connect-cluster-offsets-4 -> (offset=0, leaderEpoch=0), connect-cluster-offsets-7 -> (offset=0, leaderEpoch=0), connect-cluster-offsets-10 -> (offset=0, leaderEpoch=0)) (kafka.server.ReplicaFetcherManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,013 INFO [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Truncating partition connect-cluster-offsets-22 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:15,066 INFO [Log partition=connect-cluster-offsets-22, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:15,085 INFO [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Truncating partition connect-cluster-offsets-7 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:15,085 INFO [Log partition=connect-cluster-offsets-7, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:15,085 INFO [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Truncating partition connect-cluster-offsets-13 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:15,085 INFO [Log partition=connect-cluster-offsets-13, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:15,085 INFO [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Truncating partition connect-cluster-offsets-19 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:15,085 INFO [Log partition=connect-cluster-offsets-19, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:15,086 INFO [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Truncating partition connect-cluster-offsets-4 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:15,086 INFO [Log partition=connect-cluster-offsets-4, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:15,086 INFO [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Truncating partition connect-cluster-offsets-1 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:15,086 INFO [Log partition=connect-cluster-offsets-1, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:15,086 INFO [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Truncating partition connect-cluster-offsets-10 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:15,086 INFO [Log partition=connect-cluster-offsets-10, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:15,086 INFO [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Truncating partition connect-cluster-offsets-16 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:15,086 INFO [Log partition=connect-cluster-offsets-16, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:15,272 INFO [ReplicaFetcherManager on broker 0] Added fetcher to broker BrokerEndPoint(id=1, host=my-cluster-kafka-1.my-cluster-kafka-brokers.kafka-ad.svc:9091) for partitions Map(connect-cluster-offsets-8 -> (offset=0, leaderEpoch=0), connect-cluster-offsets-2 -> (offset=0, leaderEpoch=0), connect-cluster-offsets-20 -> (offset=0, leaderEpoch=0), connect-cluster-offsets-23 -> (offset=0, leaderEpoch=0), connect-cluster-offsets-14 -> (offset=0, leaderEpoch=0), connect-cluster-offsets-5 -> (offset=0, leaderEpoch=0), connect-cluster-offsets-17 -> (offset=0, leaderEpoch=0), connect-cluster-offsets-11 -> (offset=0, leaderEpoch=0)) (kafka.server.ReplicaFetcherManager) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,274 INFO [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Starting (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:15,276 INFO [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Truncating partition connect-cluster-offsets-14 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:15,276 INFO [Log partition=connect-cluster-offsets-14, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:15,276 INFO [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Truncating partition connect-cluster-offsets-20 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:15,277 INFO [Log partition=connect-cluster-offsets-20, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:15,277 INFO [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Truncating partition connect-cluster-offsets-2 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:15,277 INFO [Log partition=connect-cluster-offsets-2, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:15,277 INFO [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Truncating partition connect-cluster-offsets-11 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:15,277 INFO [Log partition=connect-cluster-offsets-11, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:15,278 INFO [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Truncating partition connect-cluster-offsets-8 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:15,278 INFO [Log partition=connect-cluster-offsets-8, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:15,278 INFO [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Truncating partition connect-cluster-offsets-17 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:15,278 INFO [Log partition=connect-cluster-offsets-17, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:15,278 INFO [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Truncating partition connect-cluster-offsets-23 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:15,278 INFO [Log partition=connect-cluster-offsets-23, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:15,278 INFO [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Truncating partition connect-cluster-offsets-5 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:15,279 INFO [Log partition=connect-cluster-offsets-5, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:15,290 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-8 with leader BrokerEndPoint(id=1, host=my-cluster-kafka-1.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,294 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-2 with leader BrokerEndPoint(id=1, host=my-cluster-kafka-1.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,295 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-20 with leader BrokerEndPoint(id=1, host=my-cluster-kafka-1.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,295 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-19 with leader BrokerEndPoint(id=2, host=my-cluster-kafka-2.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,295 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-13 with leader BrokerEndPoint(id=2, host=my-cluster-kafka-2.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,295 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-23 with leader BrokerEndPoint(id=1, host=my-cluster-kafka-1.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,295 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-14 with leader BrokerEndPoint(id=1, host=my-cluster-kafka-1.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,295 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-5 with leader BrokerEndPoint(id=1, host=my-cluster-kafka-1.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,295 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-16 with leader BrokerEndPoint(id=2, host=my-cluster-kafka-2.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,295 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-22 with leader BrokerEndPoint(id=2, host=my-cluster-kafka-2.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,295 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-17 with leader BrokerEndPoint(id=1, host=my-cluster-kafka-1.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,295 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-1 with leader BrokerEndPoint(id=2, host=my-cluster-kafka-2.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,295 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-11 with leader BrokerEndPoint(id=1, host=my-cluster-kafka-1.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,295 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-4 with leader BrokerEndPoint(id=2, host=my-cluster-kafka-2.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,295 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-7 with leader BrokerEndPoint(id=2, host=my-cluster-kafka-2.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,295 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 2 for partition connect-cluster-offsets-10 with leader BrokerEndPoint(id=2, host=my-cluster-kafka-2.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,334 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-follower transition for partition connect-cluster-offsets-16 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,335 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-follower transition for partition connect-cluster-offsets-13 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,335 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-follower transition for partition connect-cluster-offsets-10 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,335 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-follower transition for partition connect-cluster-offsets-7 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,335 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-follower transition for partition connect-cluster-offsets-4 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,335 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-follower transition for partition connect-cluster-offsets-23 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,335 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-follower transition for partition connect-cluster-offsets-20 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,335 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-follower transition for partition connect-cluster-offsets-1 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,335 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-follower transition for partition connect-cluster-offsets-17 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,335 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-follower transition for partition connect-cluster-offsets-14 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,335 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-follower transition for partition connect-cluster-offsets-11 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,335 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-follower transition for partition connect-cluster-offsets-8 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,335 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-follower transition for partition connect-cluster-offsets-5 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,335 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-follower transition for partition connect-cluster-offsets-2 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,335 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-follower transition for partition connect-cluster-offsets-22 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,335 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 2 from controller 2 epoch 1 for the become-follower transition for partition connect-cluster-offsets-19 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:15,387 ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition connect-cluster-offsets-19 at offset 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
2020-04-01 08:53:15,429 ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition connect-cluster-offsets-13 at offset 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
2020-04-01 08:53:15,432 ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition connect-cluster-offsets-16 at offset 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
2020-04-01 08:53:15,432 ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition connect-cluster-offsets-22 at offset 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
2020-04-01 08:53:15,432 ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition connect-cluster-offsets-1 at offset 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
2020-04-01 08:53:15,432 ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition connect-cluster-offsets-4 at offset 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
2020-04-01 08:53:15,533 ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition connect-cluster-offsets-7 at offset 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
2020-04-01 08:53:15,533 ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition connect-cluster-offsets-10 at offset 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
2020-04-01 08:53:15,572 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=19, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 1, 0], zkVersion=0, replicas=[2, 1, 0], offlineReplicas=[]) for partition connect-cluster-offsets-19 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,572 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 0, 2], zkVersion=0, replicas=[1, 0, 2], offlineReplicas=[]) for partition connect-cluster-offsets-8 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,572 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=0, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 2, 1], zkVersion=0, replicas=[0, 2, 1], offlineReplicas=[]) for partition connect-cluster-offsets-0 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,575 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 2, 0], zkVersion=0, replicas=[1, 2, 0], offlineReplicas=[]) for partition connect-cluster-offsets-11 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,576 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 2, 0], zkVersion=0, replicas=[1, 2, 0], offlineReplicas=[]) for partition connect-cluster-offsets-5 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,576 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=16, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 0, 1], zkVersion=0, replicas=[2, 0, 1], offlineReplicas=[]) for partition connect-cluster-offsets-16 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,576 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=13, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 1, 0], zkVersion=0, replicas=[2, 1, 0], offlineReplicas=[]) for partition connect-cluster-offsets-13 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,576 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 0, 2], zkVersion=0, replicas=[1, 0, 2], offlineReplicas=[]) for partition connect-cluster-offsets-2 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,576 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=24, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 2, 1], zkVersion=0, replicas=[0, 2, 1], offlineReplicas=[]) for partition connect-cluster-offsets-24 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,576 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=21, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 1, 2], zkVersion=0, replicas=[0, 1, 2], offlineReplicas=[]) for partition connect-cluster-offsets-21 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,576 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=10, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 0, 1], zkVersion=0, replicas=[2, 0, 1], offlineReplicas=[]) for partition connect-cluster-offsets-10 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,576 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=15, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 1, 2], zkVersion=0, replicas=[0, 1, 2], offlineReplicas=[]) for partition connect-cluster-offsets-15 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,576 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=18, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 2, 1], zkVersion=0, replicas=[0, 2, 1], offlineReplicas=[]) for partition connect-cluster-offsets-18 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,576 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=7, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 1, 0], zkVersion=0, replicas=[2, 1, 0], offlineReplicas=[]) for partition connect-cluster-offsets-7 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,576 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 2, 0], zkVersion=0, replicas=[1, 2, 0], offlineReplicas=[]) for partition connect-cluster-offsets-23 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,576 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=12, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 2, 1], zkVersion=0, replicas=[0, 2, 1], offlineReplicas=[]) for partition connect-cluster-offsets-12 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,576 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=1, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 1, 0], zkVersion=0, replicas=[2, 1, 0], offlineReplicas=[]) for partition connect-cluster-offsets-1 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,576 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=4, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 0, 1], zkVersion=0, replicas=[2, 0, 1], offlineReplicas=[]) for partition connect-cluster-offsets-4 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,576 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=9, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 1, 2], zkVersion=0, replicas=[0, 1, 2], offlineReplicas=[]) for partition connect-cluster-offsets-9 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,576 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 0, 2], zkVersion=0, replicas=[1, 0, 2], offlineReplicas=[]) for partition connect-cluster-offsets-20 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,576 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 2, 0], zkVersion=0, replicas=[1, 2, 0], offlineReplicas=[]) for partition connect-cluster-offsets-17 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,576 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=6, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 2, 1], zkVersion=0, replicas=[0, 2, 1], offlineReplicas=[]) for partition connect-cluster-offsets-6 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,576 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=22, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 0, 1], zkVersion=0, replicas=[2, 0, 1], offlineReplicas=[]) for partition connect-cluster-offsets-22 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,578 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 0, 2], zkVersion=0, replicas=[1, 0, 2], offlineReplicas=[]) for partition connect-cluster-offsets-14 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:15,578 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-offsets', partitionIndex=3, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 1, 2], zkVersion=0, replicas=[0, 1, 2], offlineReplicas=[]) for partition connect-cluster-offsets-3 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 3 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:16,940 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-status', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 2, 0], zkVersion=0, replicas=[1, 2, 0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 4 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:16,940 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-status', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 0, 2], zkVersion=0, replicas=[1, 0, 2], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 4 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:16,940 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-status', partitionIndex=3, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 0, 1], zkVersion=0, replicas=[2, 0, 1], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 4 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:16,940 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-status', partitionIndex=0, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 1, 0], zkVersion=0, replicas=[2, 1, 0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 4 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:16,940 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-status', partitionIndex=2, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 2, 1], zkVersion=0, replicas=[0, 2, 1], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 4 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:16,949 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 4 from controller 2 epoch 1 starting the become-leader transition for partition connect-cluster-status-2 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:16,951 INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(connect-cluster-status-2) (kafka.server.ReplicaFetcherManager) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:16,983 INFO [Log partition=connect-cluster-status-2, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:16,984 INFO [Log partition=connect-cluster-status-2, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 25 ms (kafka.log.Log) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:16,985 INFO Created log for partition connect-cluster-status-2 in /var/lib/kafka/data/kafka-log0/connect-cluster-status-2 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:16,988 INFO [Partition connect-cluster-status-2 broker=0] No checkpointed highwatermark is found for partition connect-cluster-status-2 (kafka.cluster.Partition) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:16,988 INFO [Partition connect-cluster-status-2 broker=0] Log loaded for partition connect-cluster-status-2 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:16,988 INFO [Partition connect-cluster-status-2 broker=0] connect-cluster-status-2 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:16,997 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 4 for partition connect-cluster-status-2 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:16,997 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 4 from controller 2 epoch 1 for the become-leader transition for partition connect-cluster-status-2 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:16,998 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 4 from controller 2 epoch 1 starting the become-follower transition for partition connect-cluster-status-3 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:16,998 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 4 from controller 2 epoch 1 starting the become-follower transition for partition connect-cluster-status-0 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:16,998 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 4 from controller 2 epoch 1 starting the become-follower transition for partition connect-cluster-status-1 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:16,998 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 4 from controller 2 epoch 1 starting the become-follower transition for partition connect-cluster-status-4 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,008 INFO [Log partition=connect-cluster-status-3, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,008 INFO [Log partition=connect-cluster-status-3, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,009 INFO Created log for partition connect-cluster-status-3 in /var/lib/kafka/data/kafka-log0/connect-cluster-status-3 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,009 INFO [Partition connect-cluster-status-3 broker=0] No checkpointed highwatermark is found for partition connect-cluster-status-3 (kafka.cluster.Partition) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,010 INFO [Partition connect-cluster-status-3 broker=0] Log loaded for partition connect-cluster-status-3 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,016 INFO [Log partition=connect-cluster-status-0, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,017 INFO [Log partition=connect-cluster-status-0, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,018 INFO Created log for partition connect-cluster-status-0 in /var/lib/kafka/data/kafka-log0/connect-cluster-status-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,026 INFO [Partition connect-cluster-status-0 broker=0] No checkpointed highwatermark is found for partition connect-cluster-status-0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,026 INFO [Partition connect-cluster-status-0 broker=0] Log loaded for partition connect-cluster-status-0 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,037 INFO [Log partition=connect-cluster-status-1, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,038 INFO [Log partition=connect-cluster-status-1, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,039 INFO Created log for partition connect-cluster-status-1 in /var/lib/kafka/data/kafka-log0/connect-cluster-status-1 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,039 INFO [Partition connect-cluster-status-1 broker=0] No checkpointed highwatermark is found for partition connect-cluster-status-1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,039 INFO [Partition connect-cluster-status-1 broker=0] Log loaded for partition connect-cluster-status-1 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,058 INFO [Log partition=connect-cluster-status-4, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,059 INFO [Log partition=connect-cluster-status-4, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,067 INFO Created log for partition connect-cluster-status-4 in /var/lib/kafka/data/kafka-log0/connect-cluster-status-4 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,067 INFO [Partition connect-cluster-status-4 broker=0] No checkpointed highwatermark is found for partition connect-cluster-status-4 (kafka.cluster.Partition) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,068 INFO [Partition connect-cluster-status-4 broker=0] Log loaded for partition connect-cluster-status-4 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,068 INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(connect-cluster-status-0, connect-cluster-status-4, connect-cluster-status-3, connect-cluster-status-1) (kafka.server.ReplicaFetcherManager) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,068 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 4 for partition connect-cluster-status-4 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,068 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 4 for partition connect-cluster-status-0 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,068 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 4 for partition connect-cluster-status-3 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,068 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 4 for partition connect-cluster-status-1 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,077 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition connect-cluster-status-4 as part of become-follower request with correlation id 4 from controller 2 epoch 1 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,078 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition connect-cluster-status-0 as part of become-follower request with correlation id 4 from controller 2 epoch 1 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,078 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition connect-cluster-status-3 as part of become-follower request with correlation id 4 from controller 2 epoch 1 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,078 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition connect-cluster-status-1 as part of become-follower request with correlation id 4 from controller 2 epoch 1 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,084 INFO [ReplicaFetcherManager on broker 0] Added fetcher to broker BrokerEndPoint(id=2, host=my-cluster-kafka-2.my-cluster-kafka-brokers.kafka-ad.svc:9091) for partitions Map(connect-cluster-status-3 -> (offset=0, leaderEpoch=0), connect-cluster-status-0 -> (offset=0, leaderEpoch=0)) (kafka.server.ReplicaFetcherManager) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,084 INFO [ReplicaFetcherManager on broker 0] Added fetcher to broker BrokerEndPoint(id=1, host=my-cluster-kafka-1.my-cluster-kafka-brokers.kafka-ad.svc:9091) for partitions Map(connect-cluster-status-4 -> (offset=0, leaderEpoch=0), connect-cluster-status-1 -> (offset=0, leaderEpoch=0)) (kafka.server.ReplicaFetcherManager) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,084 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 4 for partition connect-cluster-status-4 with leader BrokerEndPoint(id=1, host=my-cluster-kafka-1.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,084 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 4 for partition connect-cluster-status-1 with leader BrokerEndPoint(id=1, host=my-cluster-kafka-1.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,084 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 4 for partition connect-cluster-status-3 with leader BrokerEndPoint(id=2, host=my-cluster-kafka-2.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,084 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 4 for partition connect-cluster-status-0 with leader BrokerEndPoint(id=2, host=my-cluster-kafka-2.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,089 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 4 from controller 2 epoch 1 for the become-follower transition for partition connect-cluster-status-3 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,089 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 4 from controller 2 epoch 1 for the become-follower transition for partition connect-cluster-status-0 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,089 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 4 from controller 2 epoch 1 for the become-follower transition for partition connect-cluster-status-1 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,089 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 4 from controller 2 epoch 1 for the become-follower transition for partition connect-cluster-status-4 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-3]
2020-04-01 08:53:17,094 INFO [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Truncating partition connect-cluster-status-4 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:17,095 INFO [Log partition=connect-cluster-status-4, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:17,096 INFO [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Truncating partition connect-cluster-status-1 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:17,097 INFO [Log partition=connect-cluster-status-1, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:17,127 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-status', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 2, 0], zkVersion=0, replicas=[1, 2, 0], offlineReplicas=[]) for partition connect-cluster-status-4 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 5 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:17,127 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-status', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 0, 2], zkVersion=0, replicas=[1, 0, 2], offlineReplicas=[]) for partition connect-cluster-status-1 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 5 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:17,127 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-status', partitionIndex=3, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 0, 1], zkVersion=0, replicas=[2, 0, 1], offlineReplicas=[]) for partition connect-cluster-status-3 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 5 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:17,127 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-status', partitionIndex=0, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 1, 0], zkVersion=0, replicas=[2, 1, 0], offlineReplicas=[]) for partition connect-cluster-status-0 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 5 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:17,127 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-status', partitionIndex=2, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 2, 1], zkVersion=0, replicas=[0, 2, 1], offlineReplicas=[]) for partition connect-cluster-status-2 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 5 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:17,165 INFO [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Truncating partition connect-cluster-status-0 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:17,165 INFO [Log partition=connect-cluster-status-0, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:17,165 INFO [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Truncating partition connect-cluster-status-3 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:17,165 INFO [Log partition=connect-cluster-status-3, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:18,061 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='connect-cluster-configs', partitionIndex=0, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 0, 1], zkVersion=0, replicas=[2, 0, 1], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 6 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-4]
2020-04-01 08:53:18,064 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 6 from controller 2 epoch 1 starting the become-follower transition for partition connect-cluster-configs-0 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-4]
2020-04-01 08:53:18,078 INFO [Log partition=connect-cluster-configs-0, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-4]
2020-04-01 08:53:18,079 INFO [Log partition=connect-cluster-configs-0, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log) [data-plane-kafka-request-handler-4]
2020-04-01 08:53:18,080 INFO Created log for partition connect-cluster-configs-0 in /var/lib/kafka/data/kafka-log0/connect-cluster-configs-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-4]
2020-04-01 08:53:18,115 INFO [Partition connect-cluster-configs-0 broker=0] No checkpointed highwatermark is found for partition connect-cluster-configs-0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-4]
2020-04-01 08:53:18,118 INFO [Partition connect-cluster-configs-0 broker=0] Log loaded for partition connect-cluster-configs-0 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-4]
2020-04-01 08:53:18,124 INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(connect-cluster-configs-0) (kafka.server.ReplicaFetcherManager) [data-plane-kafka-request-handler-4]
2020-04-01 08:53:18,124 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 6 for partition connect-cluster-configs-0 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-4]
2020-04-01 08:53:18,125 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition connect-cluster-configs-0 as part of become-follower request with correlation id 6 from controller 2 epoch 1 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-4]
2020-04-01 08:53:18,128 INFO [ReplicaFetcherManager on broker 0] Added fetcher to broker BrokerEndPoint(id=2, host=my-cluster-kafka-2.my-cluster-kafka-brokers.kafka-ad.svc:9091) for partitions Map(connect-cluster-configs-0 -> (offset=0, leaderEpoch=0)) (kafka.server.ReplicaFetcherManager) [data-plane-kafka-request-handler-4]
2020-04-01 08:53:18,129 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 6 for partition connect-cluster-configs-0 with leader BrokerEndPoint(id=2, host=my-cluster-kafka-2.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-4]
2020-04-01 08:53:18,129 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 6 from controller 2 epoch 1 for the become-follower transition for partition connect-cluster-configs-0 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-4]
2020-04-01 08:53:18,149 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='connect-cluster-configs', partitionIndex=0, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 0, 1], zkVersion=0, replicas=[2, 0, 1], offlineReplicas=[]) for partition connect-cluster-configs-0 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 7 (state.change.logger) [data-plane-kafka-request-handler-2]
2020-04-01 08:53:18,208 INFO [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Truncating partition connect-cluster-configs-0 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:18,209 INFO [Log partition=connect-cluster-configs-0, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:19,179 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 8 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,179 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 8 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,179 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 8 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,179 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 8 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,180 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 8 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,180 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 8 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,180 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 8 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,180 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 8 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,180 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 8 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,180 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 8 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,180 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 8 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,180 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 8 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,180 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 8 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,180 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 8 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,180 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 8 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,180 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 8 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,218 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 8 from controller 2 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,218 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 8 from controller 2 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,218 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 8 from controller 2 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,218 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 8 from controller 2 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,218 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 8 from controller 2 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,218 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 8 from controller 2 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,218 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 8 from controller 2 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,218 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 8 from controller 2 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,218 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 8 from controller 2 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,218 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 8 from controller 2 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,218 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 8 from controller 2 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,218 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 8 from controller 2 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,218 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 8 from controller 2 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,218 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 8 from controller 2 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,218 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 8 from controller 2 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,218 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 8 from controller 2 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,219 INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(__consumer_offsets-8, __consumer_offsets-35, __consumer_offsets-41, __consumer_offsets-23, __consumer_offsets-47, __consumer_offsets-38, __consumer_offsets-17, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-44, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-32) (kafka.server.ReplicaFetcherManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,229 INFO [Log partition=__consumer_offsets-29, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,232 INFO [Log partition=__consumer_offsets-29, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,236 INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/kafka-log0/__consumer_offsets-29 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,238 INFO [Partition __consumer_offsets-29 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,238 INFO [Partition __consumer_offsets-29 broker=0] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,238 INFO [Partition __consumer_offsets-29 broker=0] __consumer_offsets-29 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,248 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 8 for partition __consumer_offsets-29 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,276 INFO [Log partition=__consumer_offsets-26, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,277 INFO [Log partition=__consumer_offsets-26, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 13 ms (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,279 INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/kafka-log0/__consumer_offsets-26 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,280 INFO [Partition __consumer_offsets-26 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,281 INFO [Partition __consumer_offsets-26 broker=0] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,281 INFO [Partition __consumer_offsets-26 broker=0] __consumer_offsets-26 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,289 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 8 for partition __consumer_offsets-26 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,307 INFO [Log partition=__consumer_offsets-23, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,308 INFO [Log partition=__consumer_offsets-23, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 9 ms (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,309 INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/kafka-log0/__consumer_offsets-23 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,309 INFO [Partition __consumer_offsets-23 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,309 INFO [Partition __consumer_offsets-23 broker=0] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,310 INFO [Partition __consumer_offsets-23 broker=0] __consumer_offsets-23 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,329 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 8 for partition __consumer_offsets-23 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,339 INFO [Log partition=__consumer_offsets-20, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,341 INFO [Log partition=__consumer_offsets-20, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,343 INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/kafka-log0/__consumer_offsets-20 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,343 INFO [Partition __consumer_offsets-20 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,343 INFO [Partition __consumer_offsets-20 broker=0] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,343 INFO [Partition __consumer_offsets-20 broker=0] __consumer_offsets-20 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,355 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 8 for partition __consumer_offsets-20 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,379 INFO [Log partition=__consumer_offsets-17, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,380 INFO [Log partition=__consumer_offsets-17, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,381 INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/kafka-log0/__consumer_offsets-17 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,381 INFO [Partition __consumer_offsets-17 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,381 INFO [Partition __consumer_offsets-17 broker=0] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,381 INFO [Partition __consumer_offsets-17 broker=0] __consumer_offsets-17 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,398 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 8 for partition __consumer_offsets-17 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,413 INFO [Log partition=__consumer_offsets-14, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,413 INFO [Log partition=__consumer_offsets-14, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,415 INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/kafka-log0/__consumer_offsets-14 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,415 INFO [Partition __consumer_offsets-14 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,415 INFO [Partition __consumer_offsets-14 broker=0] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,415 INFO [Partition __consumer_offsets-14 broker=0] __consumer_offsets-14 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,426 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 8 for partition __consumer_offsets-14 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,437 INFO [Log partition=__consumer_offsets-11, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,438 INFO [Log partition=__consumer_offsets-11, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,440 INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/kafka-log0/__consumer_offsets-11 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,440 INFO [Partition __consumer_offsets-11 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,440 INFO [Partition __consumer_offsets-11 broker=0] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,440 INFO [Partition __consumer_offsets-11 broker=0] __consumer_offsets-11 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,452 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 8 for partition __consumer_offsets-11 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,500 INFO [Log partition=__consumer_offsets-8, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,511 INFO [Log partition=__consumer_offsets-8, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 15 ms (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,518 INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/kafka-log0/__consumer_offsets-8 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,518 INFO [Partition __consumer_offsets-8 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,518 INFO [Partition __consumer_offsets-8 broker=0] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,518 INFO [Partition __consumer_offsets-8 broker=0] __consumer_offsets-8 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,543 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 8 for partition __consumer_offsets-8 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,555 INFO [Log partition=__consumer_offsets-5, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,566 INFO [Log partition=__consumer_offsets-5, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 14 ms (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,567 INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/kafka-log0/__consumer_offsets-5 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,567 INFO [Partition __consumer_offsets-5 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,567 INFO [Partition __consumer_offsets-5 broker=0] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,567 INFO [Partition __consumer_offsets-5 broker=0] __consumer_offsets-5 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,584 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 8 for partition __consumer_offsets-5 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,599 INFO [Log partition=__consumer_offsets-2, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,601 INFO [Log partition=__consumer_offsets-2, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,605 INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/kafka-log0/__consumer_offsets-2 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,605 INFO [Partition __consumer_offsets-2 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,605 INFO [Partition __consumer_offsets-2 broker=0] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,605 INFO [Partition __consumer_offsets-2 broker=0] __consumer_offsets-2 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,614 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 8 for partition __consumer_offsets-2 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,701 INFO [Log partition=__consumer_offsets-47, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,702 INFO [Log partition=__consumer_offsets-47, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 13 ms (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,703 INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/kafka-log0/__consumer_offsets-47 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,703 INFO [Partition __consumer_offsets-47 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,705 INFO [Partition __consumer_offsets-47 broker=0] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,706 INFO [Partition __consumer_offsets-47 broker=0] __consumer_offsets-47 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,733 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 8 for partition __consumer_offsets-47 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,759 INFO [Log partition=__consumer_offsets-38, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,760 INFO [Log partition=__consumer_offsets-38, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,761 INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/kafka-log0/__consumer_offsets-38 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,761 INFO [Partition __consumer_offsets-38 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,761 INFO [Partition __consumer_offsets-38 broker=0] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,761 INFO [Partition __consumer_offsets-38 broker=0] __consumer_offsets-38 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,773 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 8 for partition __consumer_offsets-38 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,834 INFO [Log partition=__consumer_offsets-35, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,835 INFO [Log partition=__consumer_offsets-35, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 56 ms (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,847 INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/kafka-log0/__consumer_offsets-35 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,847 INFO [Partition __consumer_offsets-35 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,847 INFO [Partition __consumer_offsets-35 broker=0] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,848 INFO [Partition __consumer_offsets-35 broker=0] __consumer_offsets-35 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,876 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 8 for partition __consumer_offsets-35 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,890 INFO [Log partition=__consumer_offsets-44, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,891 INFO [Log partition=__consumer_offsets-44, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 10 ms (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,895 INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/kafka-log0/__consumer_offsets-44 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,895 INFO [Partition __consumer_offsets-44 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,895 INFO [Partition __consumer_offsets-44 broker=0] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,895 INFO [Partition __consumer_offsets-44 broker=0] __consumer_offsets-44 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,911 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 8 for partition __consumer_offsets-44 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,920 INFO [Log partition=__consumer_offsets-32, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,921 INFO [Log partition=__consumer_offsets-32, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,930 INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/kafka-log0/__consumer_offsets-32 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,930 INFO [Partition __consumer_offsets-32 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,930 INFO [Partition __consumer_offsets-32 broker=0] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,930 INFO [Partition __consumer_offsets-32 broker=0] __consumer_offsets-32 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,945 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 8 for partition __consumer_offsets-32 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,967 INFO [Log partition=__consumer_offsets-41, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,972 INFO [Log partition=__consumer_offsets-41, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 13 ms (kafka.log.Log) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,977 INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/kafka-log0/__consumer_offsets-41 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,977 INFO [Partition __consumer_offsets-41 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,977 INFO [Partition __consumer_offsets-41 broker=0] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,977 INFO [Partition __consumer_offsets-41 broker=0] __consumer_offsets-41 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,994 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 8 for partition __consumer_offsets-41 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,994 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 8 from controller 2 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,994 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 8 from controller 2 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,994 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 8 from controller 2 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,994 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 8 from controller 2 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,994 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 8 from controller 2 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,994 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 8 from controller 2 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,994 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 8 from controller 2 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,994 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 8 from controller 2 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,994 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 8 from controller 2 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,994 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 8 from controller 2 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,994 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 8 from controller 2 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,994 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 8 from controller 2 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,995 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 8 from controller 2 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,995 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 8 from controller 2 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,995 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 8 from controller 2 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,995 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 8 from controller 2 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:19,998 INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-2 (kafka.coordinator.group.GroupMetadataManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:20,002 INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-5 (kafka.coordinator.group.GroupMetadataManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:20,002 INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-8 (kafka.coordinator.group.GroupMetadataManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:20,002 INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-11 (kafka.coordinator.group.GroupMetadataManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:20,002 INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-14 (kafka.coordinator.group.GroupMetadataManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:20,002 INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-17 (kafka.coordinator.group.GroupMetadataManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:20,002 INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-20 (kafka.coordinator.group.GroupMetadataManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:20,002 INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-23 (kafka.coordinator.group.GroupMetadataManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:20,003 INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-26 (kafka.coordinator.group.GroupMetadataManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:20,003 INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-29 (kafka.coordinator.group.GroupMetadataManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:20,003 INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-32 (kafka.coordinator.group.GroupMetadataManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:20,003 INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-35 (kafka.coordinator.group.GroupMetadataManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:20,003 INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-38 (kafka.coordinator.group.GroupMetadataManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:20,003 INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-41 (kafka.coordinator.group.GroupMetadataManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:20,003 INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-44 (kafka.coordinator.group.GroupMetadataManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:20,003 INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-47 (kafka.coordinator.group.GroupMetadataManager) [data-plane-kafka-request-handler-7]
2020-04-01 08:53:20,016 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2], zkVersion=0, replicas=[2], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2], zkVersion=0, replicas=[2], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2], zkVersion=0, replicas=[2], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2], zkVersion=0, replicas=[2], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2], zkVersion=0, replicas=[2], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2], zkVersion=0, replicas=[2], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2], zkVersion=0, replicas=[2], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2], zkVersion=0, replicas=[2], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2], zkVersion=0, replicas=[2], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2], zkVersion=0, replicas=[2], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,017 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2], zkVersion=0, replicas=[2], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,018 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,018 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2], zkVersion=0, replicas=[2], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,018 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,018 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,018 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,018 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2], zkVersion=0, replicas=[2], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,018 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,018 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,018 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2], zkVersion=0, replicas=[2], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,018 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2], zkVersion=0, replicas=[2], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,018 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,018 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,018 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,018 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,018 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,018 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,018 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2], zkVersion=0, replicas=[2], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,018 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,018 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,018 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,018 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,018 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2], zkVersion=0, replicas=[2], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,019 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,019 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,019 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 9 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:20,023 INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-2 in 19 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2020-04-01 08:53:20,024 INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-5 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2020-04-01 08:53:20,025 INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-8 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2020-04-01 08:53:20,025 INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-11 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2020-04-01 08:53:20,025 INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-14 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2020-04-01 08:53:20,026 INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-17 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2020-04-01 08:53:20,026 INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-20 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2020-04-01 08:53:20,027 INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-23 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2020-04-01 08:53:20,027 INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-26 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2020-04-01 08:53:20,027 INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-29 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2020-04-01 08:53:20,028 INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-32 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2020-04-01 08:53:20,028 INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-35 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2020-04-01 08:53:20,029 INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-38 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2020-04-01 08:53:20,029 INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-41 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2020-04-01 08:53:20,029 INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-44 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2020-04-01 08:53:20,031 INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-47 in 2 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2020-04-01 08:53:34,849 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='test-topic', partitionIndex=1, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 1, 0], zkVersion=0, replicas=[2, 1, 0], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 10 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,849 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='test-topic', partitionIndex=0, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 2, 1], zkVersion=0, replicas=[0, 2, 1], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 10 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,849 TRACE [Broker id=0] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='test-topic', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 0, 2], zkVersion=0, replicas=[1, 0, 2], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 10 from controller 2 epoch 1 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,860 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 10 from controller 2 epoch 1 starting the become-leader transition for partition test-topic-0 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,861 INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(test-topic-0) (kafka.server.ReplicaFetcherManager) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,875 INFO [Log partition=test-topic-0, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,875 INFO [Log partition=test-topic-0, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 5 ms (kafka.log.Log) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,877 INFO Created log for partition test-topic-0 in /var/lib/kafka/data/kafka-log0/test-topic-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,879 INFO [Partition test-topic-0 broker=0] No checkpointed highwatermark is found for partition test-topic-0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,885 INFO [Partition test-topic-0 broker=0] Log loaded for partition test-topic-0 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,886 INFO [Partition test-topic-0 broker=0] test-topic-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,904 TRACE [Broker id=0] Stopped fetchers as part of become-leader request from controller 2 epoch 1 with correlation id 10 for partition test-topic-0 (last update controller epoch 1) (state.change.logger) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,904 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 10 from controller 2 epoch 1 for the become-leader transition for partition test-topic-0 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,904 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 10 from controller 2 epoch 1 starting the become-follower transition for partition test-topic-2 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,904 TRACE [Broker id=0] Handling LeaderAndIsr request correlationId 10 from controller 2 epoch 1 starting the become-follower transition for partition test-topic-1 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,915 INFO [Log partition=test-topic-2, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,916 INFO [Log partition=test-topic-2, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,917 INFO Created log for partition test-topic-2 in /var/lib/kafka/data/kafka-log0/test-topic-2 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,917 INFO [Partition test-topic-2 broker=0] No checkpointed highwatermark is found for partition test-topic-2 (kafka.cluster.Partition) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,917 INFO [Partition test-topic-2 broker=0] Log loaded for partition test-topic-2 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,934 INFO [Log partition=test-topic-1, dir=/var/lib/kafka/data/kafka-log0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,935 INFO [Log partition=test-topic-1, dir=/var/lib/kafka/data/kafka-log0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 15 ms (kafka.log.Log) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,936 INFO Created log for partition test-topic-1 in /var/lib/kafka/data/kafka-log0/test-topic-1 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,936 INFO [Partition test-topic-1 broker=0] No checkpointed highwatermark is found for partition test-topic-1 (kafka.cluster.Partition) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,936 INFO [Partition test-topic-1 broker=0] Log loaded for partition test-topic-1 with initial high watermark 0 (kafka.cluster.Partition) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,936 INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(test-topic-2, test-topic-1) (kafka.server.ReplicaFetcherManager) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,936 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 10 for partition test-topic-2 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,936 TRACE [Broker id=0] Stopped fetchers as part of become-follower request from controller 2 epoch 1 with correlation id 10 for partition test-topic-1 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,936 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition test-topic-2 as part of become-follower request with correlation id 10 from controller 2 epoch 1 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,936 TRACE [Broker id=0] Truncated logs and checkpointed recovery boundaries for partition test-topic-1 as part of become-follower request with correlation id 10 from controller 2 epoch 1 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,941 INFO [ReplicaFetcherManager on broker 0] Added fetcher to broker BrokerEndPoint(id=2, host=my-cluster-kafka-2.my-cluster-kafka-brokers.kafka-ad.svc:9091) for partitions Map(test-topic-1 -> (offset=0, leaderEpoch=0)) (kafka.server.ReplicaFetcherManager) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,941 INFO [ReplicaFetcherManager on broker 0] Added fetcher to broker BrokerEndPoint(id=1, host=my-cluster-kafka-1.my-cluster-kafka-brokers.kafka-ad.svc:9091) for partitions Map(test-topic-2 -> (offset=0, leaderEpoch=0)) (kafka.server.ReplicaFetcherManager) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,941 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 10 for partition test-topic-1 with leader BrokerEndPoint(id=2, host=my-cluster-kafka-2.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,941 TRACE [Broker id=0] Started fetcher to new leader as part of become-follower request from controller 2 epoch 1 with correlation id 10 for partition test-topic-2 with leader BrokerEndPoint(id=1, host=my-cluster-kafka-1.my-cluster-kafka-brokers.kafka-ad.svc:9091) (state.change.logger) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,941 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 10 from controller 2 epoch 1 for the become-follower transition for partition test-topic-2 with leader 1 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,941 TRACE [Broker id=0] Completed LeaderAndIsr request correlationId 10 from controller 2 epoch 1 for the become-follower transition for partition test-topic-1 with leader 2 (state.change.logger) [data-plane-kafka-request-handler-1]
2020-04-01 08:53:34,952 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='test-topic', partitionIndex=1, controllerEpoch=1, leader=2, leaderEpoch=0, isr=[2, 1, 0], zkVersion=0, replicas=[2, 1, 0], offlineReplicas=[]) for partition test-topic-1 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 11 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:34,952 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='test-topic', partitionIndex=0, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0, 2, 1], zkVersion=0, replicas=[0, 2, 1], offlineReplicas=[]) for partition test-topic-0 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 11 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:34,952 TRACE [Broker id=0] Cached leader info UpdateMetadataPartitionState(topicName='test-topic', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 0, 2], zkVersion=0, replicas=[1, 0, 2], offlineReplicas=[]) for partition test-topic-2 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 11 (state.change.logger) [data-plane-kafka-request-handler-5]
2020-04-01 08:53:35,362 INFO [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Truncating partition test-topic-2 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:35,362 INFO [Log partition=test-topic-2, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-1]
2020-04-01 08:53:35,391 INFO [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Truncating partition test-topic-1 to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
2020-04-01 08:53:35,391 INFO [Log partition=test-topic-1, dir=/var/lib/kafka/data/kafka-log0] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log) [ReplicaFetcherThread-0-2]
2020-04-01 08:59:36,608 INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2020-04-01 09:05:24,120 DEBUG Token: {"aud":"00000002-0000-0000-c000-000000000000","iss":"<url>/","iat":1585731572,"nbf":1585731572,"exp":1585735472,"aio":"42dgYFA99Epot+z9E16qEnsNbnQFAgA=","appid":"9e22feae-32d3-488c-be69-0e112149f19b","appidacr":"1","idp":"<url>/","oid":"0c5b0cdb-e305-4b59-ae6c-541c2a5b8592","sub":"0c5b0cdb-e305-4b59-ae6c-541c2a5b8592","tenant_region_scope":"OC","tid":"xxx-xxx-xxx","uti":"SqGE0h18XEWBvoeUvPwUAA","ver":"1.0 "} (io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SASL_SSL-11]
2020-04-01 09:05:24,207 DEBUG Access token expires at (UTC): 2020-04-01T10:04:32 (io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SASL_SSL-11]
2020-04-01 09:05:24,215 WARN The cached public key with id 'YMELHT0gvb0mxoSDoYfomjqfjYU' is expired! (io.strimzi.kafka.oauth.validator.JWTSignatureValidator) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SASL_SSL-11]
2020-04-01 09:05:24,217 DEBUG Validation failed for token: eyJ0**qy3g (io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SASL_SSL-11]
io.strimzi.kafka.oauth.validator.TokenValidationException: Token validation failed:
    at io.strimzi.kafka.oauth.validator.JWTSignatureValidator.validate(JWTSignatureValidator.java:183)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.validateToken(JaasServerOauthValidatorCallbackHandler.java:224)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.handleCallback(JaasServerOauthValidatorCallbackHandler.java:154)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.handle(JaasServerOauthValidatorCallbackHandler.java:137)
    at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslServer.process(OAuthBearerSaslServer.java:156)
    at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslServer.evaluateResponse(OAuthBearerSaslServer.java:101)
    at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.handleSaslToken(SaslServerAuthenticator.java:451)
    at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:291)
    at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:173)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:547)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
    at kafka.network.Processor.poll(SocketServer.scala:890)
    at kafka.network.Processor.run(SocketServer.scala:789)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.keycloak.common.VerificationException: Public key not set
    at org.keycloak.TokenVerifier.verifySignature(TokenVerifier.java:437)
    at org.keycloak.TokenVerifier.verify(TokenVerifier.java:462)
    at io.strimzi.kafka.oauth.validator.JWTSignatureValidator.validate(JWTSignatureValidator.java:176)
    ... 13 more
2020-04-01 09:05:24,251 INFO [SocketServer brokerId=0] Failed authentication with gateway/10.131.0.1 ({"status":"invalid_token"}) (org.apache.kafka.common.network.Selector) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SASL_SSL-11]
2020-04-01 09:05:48,136 DEBUG Token: {"aud":"00000002-0000-0000-c000-000000000000","iss":"<url>/","iat":1585731572,"nbf":1585731572,"exp":1585735472,"aio":"42dgYFA99Epot+z9E16qEnsNbnQFAgA=","appid":"9e22feae-32d3-488c-be69-0e112149f19b","appidacr":"1","idp":"<url>/","oid":"0c5b0cdb-e305-4b59-ae6c-541c2a5b8592","sub":"0c5b0cdb-e305-4b59-ae6c-541c2a5b8592","tenant_region_scope":"OC","tid":"xxx-xxx-xxx","uti":"SqGE0h18XEWBvoeUvPwUAA","ver":"1.0 "} (io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SASL_SSL-11]
2020-04-01 09:05:48,137 DEBUG Access token expires at (UTC): 2020-04-01T10:04:32 (io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SASL_SSL-11]
2020-04-01 09:05:48,137 WARN The cached public key with id 'YMELHT0gvb0mxoSDoYfomjqfjYU' is expired! (io.strimzi.kafka.oauth.validator.JWTSignatureValidator) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SASL_SSL-11]
2020-04-01 09:05:48,138 DEBUG Validation failed for token: eyJ0**qy3g (io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SASL_SSL-11]
io.strimzi.kafka.oauth.validator.TokenValidationException: Token validation failed:
    at io.strimzi.kafka.oauth.validator.JWTSignatureValidator.validate(JWTSignatureValidator.java:183)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.validateToken(JaasServerOauthValidatorCallbackHandler.java:224)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.handleCallback(JaasServerOauthValidatorCallbackHandler.java:154)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.handle(JaasServerOauthValidatorCallbackHandler.java:137)
    at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslServer.process(OAuthBearerSaslServer.java:156)
    at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslServer.evaluateResponse(OAuthBearerSaslServer.java:101)
    at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.handleSaslToken(SaslServerAuthenticator.java:451)
    at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:291)
    at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:173)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:547)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
    at kafka.network.Processor.poll(SocketServer.scala:890)
    at kafka.network.Processor.run(SocketServer.scala:789)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.keycloak.common.VerificationException: Public key not set
    at org.keycloak.TokenVerifier.verifySignature(TokenVerifier.java:437)
    at org.keycloak.TokenVerifier.verify(TokenVerifier.java:462)
    at io.strimzi.kafka.oauth.validator.JWTSignatureValidator.validate(JWTSignatureValidator.java:176)
    ... 13 more
2020-04-01 09:05:48,167 INFO [SocketServer brokerId=0] Failed authentication with gateway/10.131.0.1 ({"status":"invalid_token"}) (org.apache.kafka.common.network.Selector) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SASL_SSL-11]
2020-04-01 09:05:55,002 DEBUG Token: {"aud":"00000002-0000-0000-c000-000000000000","iss":"<url>/","iat":1585731572,"nbf":1585731572,"exp":1585735472,"aio":"42dgYFA99Epot+z9E16qEnsNbnQFAgA=","appid":"9e22feae-32d3-488c-be69-0e112149f19b","appidacr":"1","idp":"<url>/","oid":"0c5b0cdb-e305-4b59-ae6c-541c2a5b8592","sub":"0c5b0cdb-e305-4b59-ae6c-541c2a5b8592","tenant_region_scope":"OC","tid":"xxx-xxx-xxx","uti":"SqGE0h18XEWBvoeUvPwUAA","ver":"1.0 "} (io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SASL_SSL-9]
2020-04-01 09:05:55,004 DEBUG Access token expires at (UTC): 2020-04-01T10:04:32 (io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SASL_SSL-9]
2020-04-01 09:05:55,004 WARN The cached public key with id 'YMELHT0gvb0mxoSDoYfomjqfjYU' is expired! (io.strimzi.kafka.oauth.validator.JWTSignatureValidator) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SASL_SSL-9]
2020-04-01 09:05:55,004 DEBUG Validation failed for token: eyJ0**qy3g (io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SASL_SSL-9]
io.strimzi.kafka.oauth.validator.TokenValidationException: Token validation failed:
    at io.strimzi.kafka.oauth.validator.JWTSignatureValidator.validate(JWTSignatureValidator.java:183)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.validateToken(JaasServerOauthValidatorCallbackHandler.java:224)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.handleCallback(JaasServerOauthValidatorCallbackHandler.java:154)
    at io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.handle(JaasServerOauthValidatorCallbackHandler.java:137)
    at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslServer.process(OAuthBearerSaslServer.java:156)
    at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslServer.evaluateResponse(OAuthBearerSaslServer.java:101)
    at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.handleSaslToken(SaslServerAuthenticator.java:451)
    at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:291)
    at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:173)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:547)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
    at kafka.network.Processor.poll(SocketServer.scala:890)
    at kafka.network.Processor.run(SocketServer.scala:789)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.keycloak.common.VerificationException: Public key not set
    at org.keycloak.TokenVerifier.verifySignature(TokenVerifier.java:437)
    at org.keycloak.TokenVerifier.verify(TokenVerifier.java:462)
    at io.strimzi.kafka.oauth.validator.JWTSignatureValidator.validate(JWTSignatureValidator.java:176)
    ... 13 more
2020-04-01 09:05:55,034 INFO [SocketServer brokerId=0] Failed authentication with 10.130.0.1/10.130.0.1 ({"status":"invalid_token"}) (org.apache.kafka.common.network.Selector) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SASL_SSL-9]
2020-04-01 09:09:36,608 INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2020-04-01 09:19:36,608 INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]

Also, could you provide some direction on how to configure the following properties?

oauth.jwks.refresh.seconds="300"
oauth.jwks.expiry.seconds="960"

Thanks again

mstruk commented 4 years ago

I suggest that you try using the latest version of Strimzi, 0.17.0, which was just released.

To configure different values for the key refresh period and expiry period, you'd have to use the Kafka CR as documented here.

For example, assuming you've configured OAuth support on the external listener, it would look something like this:

external:
  type: loadbalancer
  authentication:
    type: oauth
    validIssuerUri: <url>/
    jwksEndpointUri: https://login.microsoftonline.com/xxx-xxx-xxx/discovery/v2.0/keys
    userNameClaim: preferred_username
    tlsTrustedCertificates:
    - secretName: oauth-server-cert
      certificate: ca.crt
    disableTlsHostnameVerification: true
    jwksExpirySeconds: 960
    jwksRefreshSeconds: 300

Note the last two lines.
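As a side note on the `Public key not set` error in your log: it typically means the broker's cached JWKS no longer contains a key matching the token's `kid` header (the `cached public key ... is expired` warning right before it points the same way). A quick local sanity check — just a sketch, not part of Strimzi; the helper names and the synthetic token/JWKS below are made up for illustration — is to decode the JWT header and compare its `kid` against the keys your `jwksEndpointUri` actually serves:

```python
import base64
import json

def jwt_kid(token: str) -> str:
    """Extract the 'kid' from a JWT header without verifying the signature."""
    header_b64 = token.split(".")[0]
    # JWTs use URL-safe base64 without padding; restore padding before decoding
    padded = header_b64 + "=" * (-len(header_b64) % 4)
    header = json.loads(base64.urlsafe_b64decode(padded))
    return header.get("kid", "")

def kid_in_jwks(kid: str, jwks: dict) -> bool:
    """Check whether the token's signing key id appears in a JWKS document."""
    return any(key.get("kid") == kid for key in jwks.get("keys", []))

# Synthetic token header and JWKS for demonstration (no network access):
# in practice you'd paste your real token and the JSON from the JWKS endpoint.
header = base64.urlsafe_b64encode(
    json.dumps({"alg": "RS256", "kid": "YMELHT0gvb0mxoSDoYfomjqfjYU"}).encode()
).rstrip(b"=").decode()
token = header + ".payload.signature"

jwks = {"keys": [{"kty": "RSA", "kid": "someOtherKeyId"}]}

print(jwt_kid(token))                     # the kid the broker must resolve
print(kid_in_jwks(jwt_kid(token), jwks))  # False -> broker reports "Public key not set"
```

If the `kid` from your token is missing from the JWKS response, the broker can never validate the signature regardless of the refresh settings, and the issuer configuration (or key rotation on the ADFS side) is the thing to look at.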

scholzj commented 4 years ago

@mstruk @daxtergithub Are there still any open points on this? Or can this be closed?

mstruk commented 4 years ago

The first issue can't be addressed without replacing the internals of strimzi-kafka-oauth to use a different JWT parsing library.

For the second one, there's nothing more to add at this time.

@daxtergithub Did the proposed config change help?