Apicurio / apicurio-registry

An API/Schema registry - stores APIs and Schemas.
https://www.apicur.io/registry/
Apache License 2.0
587 stars · 260 forks

io.apicurio.registry.storage.RuleNotFoundException: No rule named 'COMPATIBILITY' was found. #980

Closed · ebbnflow closed this issue 2 years ago

ebbnflow commented 3 years ago

I used the UI to add a proto schema, then enabled the compatibility rule (which arguably should not be clickable, because proto schemas don't support the compatibility rule). I then deleted the compatibility rule by clicking the delete button. Now every time I click the artifact link, the UI crashes and throws the error below. I've even deleted the schema and re-added it via curl. I've run docker compose down, started everything up again, and re-added the proto (without clicking on the compatibility rule), and this still happens.

I used this compose:

schemaregistry:
    container_name: schemaregistry
    image: apicurio/apicurio-registry-kafka:latest
    ports:
      - 8081:8080
    environment:
      KAFKA_BOOTSTRAP_SERVERS: broker:9092
      QUARKUS_PROFILE: prod
      APPLICATION_ID: registry_id
      APPLICATION_SERVER: localhost:9000
Stack trace:

io.apicurio.registry.storage.RuleNotFoundException: No rule named 'COMPATIBILITY' was found.
    at io.apicurio.registry.storage.impl.AbstractMapRegistryStorage.getArtifactRule(AbstractMapRegistryStorage.java:591)
    at io.apicurio.registry.kafka.KafkaRegistryStorage_Subclass.getArtifactRule$$superaccessor36(KafkaRegistryStorage_Subclass.zig:7678)
    at io.apicurio.registry.kafka.KafkaRegistryStorage_Subclass$$function$$36.apply(KafkaRegistryStorage_Subclass$$function$$36.zig:41)
    at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:54)
    at io.smallrye.metrics.interceptors.CountedInterceptor.countedCallable(CountedInterceptor.java:95)
    at io.smallrye.metrics.interceptors.CountedInterceptor.countedMethod(CountedInterceptor.java:70)
    at io.smallrye.metrics.interceptors.CountedInterceptor_Bean.intercept(CountedInterceptor_Bean.zig:366)
    at io.quarkus.arc.impl.InitializedInterceptor.intercept(InitializedInterceptor.java:79)
    at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41)
    at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:50)
    at io.smallrye.metrics.interceptors.ConcurrentGaugeInterceptor.concurrentCallable(ConcurrentGaugeInterceptor.java:96)
    at io.smallrye.metrics.interceptors.ConcurrentGaugeInterceptor.countedMethod(ConcurrentGaugeInterceptor.java:69)
    at io.smallrye.metrics.interceptors.ConcurrentGaugeInterceptor_Bean.intercept(ConcurrentGaugeInterceptor_Bean.zig:366)
    at io.quarkus.arc.impl.InitializedInterceptor.intercept(InitializedInterceptor.java:79)
    at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41)
    at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:50)
    at io.smallrye.metrics.interceptors.TimedInterceptor.timedCallable(TimedInterceptor.java:95)
    at io.smallrye.metrics.interceptors.TimedInterceptor.timedMethod(TimedInterceptor.java:70)
    at io.smallrye.metrics.interceptors.TimedInterceptor_Bean.intercept(TimedInterceptor_Bean.zig:366)
    at io.quarkus.arc.impl.InitializedInterceptor.intercept(InitializedInterceptor.java:79)
    at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41)
    at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:50)
    at io.apicurio.registry.logging.LoggingInterceptor.logMethodEntry(LoggingInterceptor.java:55)
    at io.apicurio.registry.logging.LoggingInterceptor_Bean.intercept(LoggingInterceptor_Bean.zig:275)
    at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41)
    at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:50)
    at io.apicurio.registry.metrics.PersistenceTimeoutReadinessInterceptor.intercept(PersistenceTimeoutReadinessInterceptor.java:27)
    at io.apicurio.registry.metrics.PersistenceTimeoutReadinessInterceptor_Bean.intercept(PersistenceTimeoutReadinessInterceptor_Bean.zig:327)
    at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41)
    at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:50)
    at io.apicurio.registry.metrics.PersistenceExceptionLivenessInterceptor.intercept(PersistenceExceptionLivenessInterceptor.java:25)
    at io.apicurio.registry.metrics.PersistenceExceptionLivenessInterceptor_Bean.intercept(PersistenceExceptionLivenessInterceptor_Bean.zig:378)
    at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41)
    at io.quarkus.arc.impl.AroundInvokeInvocationContext.perform(AroundInvokeInvocationContext.java:41)
    at io.quarkus.arc.impl.InvocationContexts.performAroundInvoke(InvocationContexts.java:32)
    at io.apicurio.registry.kafka.KafkaRegistryStorage_Subclass.getArtifactRule(KafkaRegistryStorage_Subclass.zig:7606)
    at io.apicurio.registry.kafka.KafkaRegistryStorage_ClientProxy.getArtifactRule(KafkaRegistryStorage_ClientProxy.zig:1182)
    at io.apicurio.registry.storage.RegistryStorageProducer_ProducerMethod_realImpl_cf1c876861dd1c25dca504d30a12bfedeafd47bd_ClientProxy.getArtifactRule(RegistryStorageProducer_ProducerMethod_realImpl_cf1c876861dd1c25dca504d30a12bfedeafd47bd_ClientProxy.zig:743)
    at io.apicurio.registry.rest.ArtifactsResourceImpl.getArtifactRuleConfig(ArtifactsResourceImpl.java:505)
    at io.apicurio.registry.rest.ArtifactsResourceImpl_Subclass.getArtifactRuleConfig$$superaccessor30(ArtifactsResourceImpl_Subclass.zig:5236)
    at io.apicurio.registry.rest.ArtifactsResourceImpl_Subclass$$function$$30.apply(ArtifactsResourceImpl_Subclass$$function$$30.zig:41)
    at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:54)
    at io.smallrye.metrics.interceptors.CountedInterceptor.countedCallable(CountedInterceptor.java:95)
    at io.smallrye.metrics.interceptors.CountedInterceptor.countedMethod(CountedInterceptor.java:70)
    at io.smallrye.metrics.interceptors.CountedInterceptor_Bean.intercept(CountedInterceptor_Bean.zig:366)
    at io.quarkus.arc.impl.InitializedInterceptor.intercept(InitializedInterceptor.java:79)
    at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41)
    at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:50)
    at io.smallrye.metrics.interceptors.ConcurrentGaugeInterceptor.concurrentCallable(ConcurrentGaugeInterceptor.java:96)
    at io.smallrye.metrics.interceptors.ConcurrentGaugeInterceptor.countedMethod(ConcurrentGaugeInterceptor.java:69)
    at io.smallrye.metrics.interceptors.ConcurrentGaugeInterceptor_Bean.intercept(ConcurrentGaugeInterceptor_Bean.zig:366)
    at io.quarkus.arc.impl.InitializedInterceptor.intercept(InitializedInterceptor.java:79)
    at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41)
    at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:50)
    at io.smallrye.metrics.interceptors.TimedInterceptor.timedCallable(TimedInterceptor.java:95)
    at io.smallrye.metrics.interceptors.TimedInterceptor.timedMethod(TimedInterceptor.java:70)
    at io.smallrye.metrics.interceptors.TimedInterceptor_Bean.intercept(TimedInterceptor_Bean.zig:366)
    at io.quarkus.arc.impl.InitializedInterceptor.intercept(InitializedInterceptor.java:79)
    at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41)
    at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:50)
    at io.apicurio.registry.logging.LoggingInterceptor.logMethodEntry(LoggingInterceptor.java:55)
    at io.apicurio.registry.logging.LoggingInterceptor_Bean.intercept(LoggingInterceptor_Bean.zig:275)
    at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41)
    at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:50)
    at io.apicurio.registry.metrics.RestMetricsInterceptor.intercept(RestMetricsInterceptor.java:82)
    at io.apicurio.registry.metrics.RestMetricsInterceptor_Bean.intercept(RestMetricsInterceptor_Bean.zig:327)
    at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41)
    at io.quarkus.arc.impl.AroundInvokeInvocationContext.perform(AroundInvokeInvocationContext.java:41)
    at io.quarkus.arc.impl.InvocationContexts.performAroundInvoke(InvocationContexts.java:32)
    at io.apicurio.registry.rest.ArtifactsResourceImpl_Subclass.getArtifactRuleConfig(ArtifactsResourceImpl_Subclass.zig:5191)
    at io.apicurio.registry.rest.ArtifactsResourceImpl_ClientProxy.getArtifactRuleConfig(ArtifactsResourceImpl_ClientProxy.zig:665)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:170)
    at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:130)
    at org.jboss.resteasy.core.ResourceMethodInvoker.internalInvokeOnTarget(ResourceMethodInvoker.java:643)
    at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTargetAfterFilter(ResourceMethodInvoker.java:507)
    at org.jboss.resteasy.core.ResourceMethodInvoker.lambda$invokeOnTarget$2(ResourceMethodInvoker.java:457)
    at org.jboss.resteasy.core.interception.jaxrs.PreMatchContainerRequestContext.filter(PreMatchContainerRequestContext.java:364)
    at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:459)
    at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:419)
    at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:393)
    at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:68)
    at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:492)
    at org.jboss.resteasy.core.SynchronousDispatcher.lambda$invoke$4(SynchronousDispatcher.java:261)
    at org.jboss.resteasy.core.SynchronousDispatcher.lambda$preprocess$0(SynchronousDispatcher.java:161)
    at org.jboss.resteasy.core.interception.jaxrs.PreMatchContainerRequestContext.filter(PreMatchContainerRequestContext.java:364)
    at org.jboss.resteasy.core.SynchronousDispatcher.preprocess(SynchronousDispatcher.java:164)
    at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:247)
    at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:249)
    at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:60)
    at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:55)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:590)
    at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:74)
    at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)
    at io.apicurio.registry.ui.servlets.ResourceCacheControlFilter.doFilter(ResourceCacheControlFilter.java:83)
    at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
    at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
    at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
    at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:63)
    at io.undertow.servlet.handlers.ServletChain$1.handleRequest(ServletChain.java:68)
    at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
    at io.undertow.servlet.handlers.RedirectDirHandler.handleRequest(RedirectDirHandler.java:67)
    at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:133)
    at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
    at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
    at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
    at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:65)
    at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
    at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
    at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
    at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
    at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
    at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
    at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:247)
    at io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:56)
    at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:111)
    at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:108)
    at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48)
    at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
    at io.quarkus.undertow.runtime.UndertowDeploymentRecorder$10$1.call(UndertowDeploymentRecorder.java:573)
    at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:227)
    at io.undertow.servlet.handlers.ServletInitialHandler.handleRequest(ServletInitialHandler.java:152)
    at io.undertow.server.handlers.HttpContinueReadHandler.handleRequest(HttpContinueReadHandler.java:43)
    at io.quarkus.undertow.runtime.UndertowDeploymentRecorder$1.handleRequest(UndertowDeploymentRecorder.java:114)
    at io.undertow.server.Connectors.executeRootHandler(Connectors.java:290)
    at io.undertow.server.DefaultExchangeHandler.handle(DefaultExchangeHandler.java:18)
    at io.quarkus.undertow.runtime.UndertowDeploymentRecorder$6$1.run(UndertowDeploymentRecorder.java:404)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
    at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:2046)
    at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1578)
    at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1452)
    at org.jboss.threads.DelegatingRunnable.run(DelegatingRunnable.java:29)
    at org.jboss.threads.ThreadLocalResettingRunnable.run(ThreadLocalResettingRunnable.java:29)
    at java.lang.Thread.run(Thread.java:748)
    at org.jboss.threads.JBossThread.run(JBossThread.java:479)
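
The top frame is AbstractMapRegistryStorage.getArtifactRule, which throws when the requested rule key is absent from the artifact's rule map. A minimal, hypothetical sketch of that lookup pattern (names modeled on the stack trace, not the actual Apicurio code) shows how a rule that was deleted from storage but is still referenced by the UI surfaces as this exception rather than a clean "no such rule" response:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a map-backed rule store; not the real
// Apicurio implementation, just the failure mode from the trace.
public class RuleLookupSketch {
    static class RuleNotFoundException extends RuntimeException {
        RuleNotFoundException(String rule) {
            super("No rule named '" + rule + "' was found.");
        }
    }

    private final Map<String, String> rules = new HashMap<>();

    void createArtifactRule(String rule, String config) {
        rules.put(rule, config);
    }

    void deleteArtifactRule(String rule) {
        rules.remove(rule);
    }

    // Throws instead of returning an empty result, so any stale
    // reference to a deleted rule becomes a 500 in the REST layer.
    String getArtifactRule(String rule) {
        String config = rules.get(rule);
        if (config == null) {
            throw new RuleNotFoundException(rule);
        }
        return config;
    }

    public static void main(String[] args) {
        RuleLookupSketch storage = new RuleLookupSketch();
        storage.createArtifactRule("COMPATIBILITY", "BACKWARD");
        storage.deleteArtifactRule("COMPATIBILITY");
        try {
            storage.getArtifactRule("COMPATIBILITY");
            throw new AssertionError("expected RuleNotFoundException");
        } catch (RuleNotFoundException e) {
            System.out.println(e.getMessage());
        }
    }
}
```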
carlesarnal commented 3 years ago

Hi @ebbnflow, the Kafka storage variant that you're using is going to be deprecated/removed. Could you please try one of the other storage variants?

ebbnflow commented 3 years ago

Hi @ebbnflow, the Kafka storage variant that you're using is going to be deprecated/removed. Could you please try one of the other storage variants?

Meaning... Kafka won't be supported for storing schemas anymore?

carlesarnal commented 3 years ago

Nope, we're just deprecating the plain Kafka storage in favour of this.

EricWittmann commented 3 years ago

The original Kafka implementation was deprecated a while back and was just removed from master today. The Streams implementation is what you want, as Carles linked above.

@carlesarnal can you test this using the other storage variants to ensure it was only a problem with the old Kafka impl?

ebbnflow commented 3 years ago

I've converted over to Kafka Streams, but alas, it's not working either. I've tried 1.3.2.Final and 1.1.1.Final:

  schemaregistry:
    container_name: schemaregistry
    image: apicurio/apicurio-registry-streams:1.1.1.Final
    ports:
      - 8081:8080
    environment:
      KAFKA_BOOTSTRAP_SERVERS: broker:9092
      QUARKUS_PROFILE: prod
      APPLICATION_ID: registry_id
      APPLICATION_SERVER: localhost:9000

2020-11-03 19:24:23,701 ERROR [org.apa.kaf.str.pro.int.StreamTask] (registry_id-1b1e6789-82c4-46bb-ad96-74e1ddf85c03-StreamThread-1) task [1_0] Timeout exception caught when initializing transactions for task 1_0. This might happen if the broker is slow to respond, if the network connection to the broker was interrupted, or if similar circumstances arise. You can increase producer parameter max.block.ms to increase this timeout.: org.apache.kafka.common.errors.TimeoutException: Timeout expired after 60000 milliseconds while awaiting InitProducerId
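
An InitProducerId timeout on a single-broker dev cluster is often not broker slowness at all: the Streams storage runs with processing.guarantee = exactly_once (visible in the StreamsConfig dump in the log below), and the broker cannot create the transaction state log if its replication settings still assume three brokers. A hedged compose sketch, assuming the broker service uses a Confluent cp-kafka image (these env var names are that image's convention and may not match your setup):

```yaml
  broker:
    image: confluentinc/cp-kafka:5.5.1
    environment:
      # Both default to 3; a single-broker dev cluster can never satisfy
      # that, so InitProducerId hangs until the client times out.
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
```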

EricWittmann commented 3 years ago

Ping: @alesj @famartinrh @jsenko

Any insights into this error?

@ebbnflow Can you provide the full server log?

ebbnflow commented 3 years ago

Ping: @alesj @famartinrh @jsenko

Any insights into this error?

@ebbnflow Can you provide the full server log?

Seems to be related to this, maybe? https://issues.apache.org/jira/browse/KAFKA-8803
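
If the broker genuinely is just slow (as in KAFKA-8803), the client-side timeout itself can be raised. A hedged sketch in Kafka Streams properties form: Streams forwards keys under the producer. prefix to its internal producers, but whether the registry image exposes a way to inject such overrides is an assumption.

```properties
# Producer override via the Streams "producer." prefix:
# wait up to 2 minutes for InitProducerId instead of the default 60s.
producer.max.block.ms=120000
```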


__  ____  __  _____   ___  __ ____  ______ 
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ 
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \   
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/   
2020-11-04 14:17:29,685 WARN  [io.qua.config] (main) Unrecognized configuration key "quarkus.datasource.username" was provided; it will be ignored; verify that the dependency extension for this configuration is set or you did not make a typo
2020-11-04 14:17:29,685 WARN  [io.qua.config] (main) Unrecognized configuration key "quarkus.datasource.jdbc.url" was provided; it will be ignored; verify that the dependency extension for this configuration is set or you did not make a typo
2020-11-04 14:17:29,685 WARN  [io.qua.config] (main) Unrecognized configuration key "quarkus.hibernate-orm.database.generation" was provided; it will be ignored; verify that the dependency extension for this configuration is set or you did not make a typo
2020-11-04 14:17:29,685 WARN  [io.qua.config] (main) Unrecognized configuration key "quarkus.datasource.db-kind" was provided; it will be ignored; verify that the dependency extension for this configuration is set or you did not make a typo
2020-11-04 14:17:29,685 WARN  [io.qua.config] (main) Unrecognized configuration key "quarkus.datasource.password" was provided; it will be ignored; verify that the dependency extension for this configuration is set or you did not make a typo
2020-11-04 14:17:30,807 INFO  [org.apa.kaf.con.jso.JsonConverterConfig] (main) JsonConverterConfig values: 
    converter.type = key
    decimal.format = BASE64
    schemas.cache.size = 0
    schemas.enable = true

2020-11-04 14:17:32,584 INFO  [org.apa.kaf.str.StreamsConfig] (main) StreamsConfig values: 
    application.id = registry_id
    application.server = localhost:9000
    bootstrap.servers = [broker:9092]
    buffered.records.per.partition = 1000
    built.in.metrics.version = latest
    cache.max.bytes.buffering = 10485760
    client.id = 
    commit.interval.ms = 100
    connections.max.idle.ms = 540000
    default.deserialization.exception.handler = class org.apache.kafka.streams.errors.LogAndFailExceptionHandler
    default.key.serde = class org.apache.kafka.common.serialization.Serdes$ByteArraySerde
    default.production.exception.handler = class org.apache.kafka.streams.errors.DefaultProductionExceptionHandler
    default.timestamp.extractor = class org.apache.kafka.streams.processor.FailOnInvalidTimestamp
    default.value.serde = class org.apache.kafka.common.serialization.Serdes$ByteArraySerde
    max.task.idle.ms = 0
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    num.standby.replicas = 1
    num.stream.threads = 2
    partition.grouper = class org.apache.kafka.streams.processor.DefaultPartitionGrouper
    poll.ms = 100
    processing.guarantee = exactly_once
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    replication.factor = 1
    request.timeout.ms = 40000
    retries = 0
    retry.backoff.ms = 100
    rocksdb.config.setter = null
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    state.cleanup.delay.ms = 600000
    state.dir = /tmp/kafka-streams
    topology.optimization = none
    upgrade.from = null
    windowstore.changelog.additional.retention.ms = 86400000

2020-11-04 14:17:32,716 INFO  [org.apa.kaf.str.KafkaStreams] (main) stream-client [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f] Kafka Streams version: 2.5.0
2020-11-04 14:17:32,717 INFO  [org.apa.kaf.str.KafkaStreams] (main) stream-client [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f] Kafka Streams commit ID: 66563e712b0b9f84
2020-11-04 14:17:32,787 INFO  [org.apa.kaf.cli.adm.AdminClientConfig] (main) AdminClientConfig values: 
    bootstrap.servers = [broker:9092]
    client.dns.lookup = default
    client.id = registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-admin
    connections.max.idle.ms = 300000
    default.api.timeout.ms = 60000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.2
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS

2020-11-04 14:17:32,928 WARN  [org.apa.kaf.cli.adm.AdminClientConfig] (main) The configuration 'storage.topic' was supplied but isn't a known config.
2020-11-04 14:17:32,928 WARN  [org.apa.kaf.cli.adm.AdminClientConfig] (main) The configuration 'global.id.topic' was supplied but isn't a known config.
2020-11-04 14:17:32,932 INFO  [org.apa.kaf.com.uti.AppInfoParser] (main) Kafka version: 2.5.0
2020-11-04 14:17:32,932 INFO  [org.apa.kaf.com.uti.AppInfoParser] (main) Kafka commitId: 66563e712b0b9f84
2020-11-04 14:17:32,933 INFO  [org.apa.kaf.com.uti.AppInfoParser] (main) Kafka startTimeMs: 1604499452928
2020-11-04 14:17:32,937 INFO  [org.apa.kaf.str.pro.int.StreamThread] (main) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1] Creating restore consumer client
2020-11-04 14:17:32,946 INFO  [org.apa.kaf.cli.con.ConsumerConfig] (main) ConsumerConfig values: 
    allow.auto.create.topics = true
    auto.commit.interval.ms = 5000
    auto.offset.reset = none
    bootstrap.servers = [broker:9092]
    check.crcs = true
    client.dns.lookup = default
    client.id = registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-restore-consumer
    client.rack = 
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = false
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = null
    group.instance.id = null
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = false
    isolation.level = read_committed
    key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 1000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.2
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer

2020-11-04 14:17:33,009 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (main) The configuration 'storage.topic' was supplied but isn't a known config.
2020-11-04 14:17:33,010 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (main) The configuration 'global.id.topic' was supplied but isn't a known config.
2020-11-04 14:17:33,010 INFO  [org.apa.kaf.com.uti.AppInfoParser] (main) Kafka version: 2.5.0
2020-11-04 14:17:33,010 INFO  [org.apa.kaf.com.uti.AppInfoParser] (main) Kafka commitId: 66563e712b0b9f84
2020-11-04 14:17:33,010 INFO  [org.apa.kaf.com.uti.AppInfoParser] (main) Kafka startTimeMs: 1604499453010
2020-11-04 14:17:33,027 INFO  [org.apa.kaf.str.pro.int.StreamThread] (main) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1] Creating consumer client
2020-11-04 14:17:33,030 INFO  [org.apa.kaf.cli.con.ConsumerConfig] (main) ConsumerConfig values: 
    allow.auto.create.topics = false
    auto.commit.interval.ms = 5000
    auto.offset.reset = earliest
    bootstrap.servers = [broker:9092]
    check.crcs = true
    client.dns.lookup = default
    client.id = registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-consumer
    client.rack = 
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = false
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = registry_id
    group.instance.id = null
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = false
    isolation.level = read_committed
    key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 1000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.2
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer

2020-11-04 14:17:33,048 INFO  [org.apa.kaf.str.pro.int.ass.AssignorConfiguration] (main) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-consumer] Cooperative rebalancing enabled now
2020-11-04 14:17:33,079 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (main) The configuration 'storage.topic' was supplied but isn't a known config.
2020-11-04 14:17:33,079 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (main) The configuration 'admin.retry.backoff.ms' was supplied but isn't a known config.
2020-11-04 14:17:33,079 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (main) The configuration 'admin.retries' was supplied but isn't a known config.
2020-11-04 14:17:33,079 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (main) The configuration 'global.id.topic' was supplied but isn't a known config.
2020-11-04 14:17:33,079 INFO  [org.apa.kaf.com.uti.AppInfoParser] (main) Kafka version: 2.5.0
2020-11-04 14:17:33,079 INFO  [org.apa.kaf.com.uti.AppInfoParser] (main) Kafka commitId: 66563e712b0b9f84
2020-11-04 14:17:33,080 INFO  [org.apa.kaf.com.uti.AppInfoParser] (main) Kafka startTimeMs: 1604499453079
2020-11-04 14:17:33,087 INFO  [org.apa.kaf.str.pro.int.StreamThread] (main) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2] Creating restore consumer client
2020-11-04 14:17:33,087 INFO  [org.apa.kaf.cli.con.ConsumerConfig] (main) ConsumerConfig values: 
    allow.auto.create.topics = true
    auto.commit.interval.ms = 5000
    auto.offset.reset = none
    bootstrap.servers = [broker:9092]
    check.crcs = true
    client.dns.lookup = default
    client.id = registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2-restore-consumer
    client.rack = 
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = false
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = null
    group.instance.id = null
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = false
    isolation.level = read_committed
    key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 1000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.2
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer

2020-11-04 14:17:33,094 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (main) The configuration 'storage.topic' was supplied but isn't a known config.
2020-11-04 14:17:33,094 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (main) The configuration 'global.id.topic' was supplied but isn't a known config.
2020-11-04 14:17:33,094 INFO  [org.apa.kaf.com.uti.AppInfoParser] (main) Kafka version: 2.5.0
2020-11-04 14:17:33,094 INFO  [org.apa.kaf.com.uti.AppInfoParser] (main) Kafka commitId: 66563e712b0b9f84
2020-11-04 14:17:33,094 INFO  [org.apa.kaf.com.uti.AppInfoParser] (main) Kafka startTimeMs: 1604499453094
2020-11-04 14:17:33,095 INFO  [org.apa.kaf.str.pro.int.StreamThread] (main) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2] Creating consumer client
2020-11-04 14:17:33,096 INFO  [org.apa.kaf.cli.con.ConsumerConfig] (main) ConsumerConfig values: 
    allow.auto.create.topics = false
    auto.commit.interval.ms = 5000
    auto.offset.reset = earliest
    bootstrap.servers = [broker:9092]
    check.crcs = true
    client.dns.lookup = default
    client.id = registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2-consumer
    client.rack = 
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = false
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = registry_id
    group.instance.id = null
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = false
    isolation.level = read_committed
    key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 1000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.2
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer

2020-11-04 14:17:33,101 INFO  [org.apa.kaf.str.pro.int.ass.AssignorConfiguration] (main) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2-consumer] Cooperative rebalancing enabled now
2020-11-04 14:17:33,105 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (main) The configuration 'storage.topic' was supplied but isn't a known config.
2020-11-04 14:17:33,105 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (main) The configuration 'admin.retry.backoff.ms' was supplied but isn't a known config.
2020-11-04 14:17:33,105 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (main) The configuration 'admin.retries' was supplied but isn't a known config.
2020-11-04 14:17:33,105 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (main) The configuration 'global.id.topic' was supplied but isn't a known config.
2020-11-04 14:17:33,105 INFO  [org.apa.kaf.com.uti.AppInfoParser] (main) Kafka version: 2.5.0
2020-11-04 14:17:33,106 INFO  [org.apa.kaf.com.uti.AppInfoParser] (main) Kafka commitId: 66563e712b0b9f84
2020-11-04 14:17:33,106 INFO  [org.apa.kaf.com.uti.AppInfoParser] (main) Kafka startTimeMs: 1604499453105
2020-11-04 14:17:33,112 INFO  [org.apa.kaf.str.KafkaStreams] (main) stream-client [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f] State transition from CREATED to REBALANCING
2020-11-04 14:17:33,118 INFO  [org.apa.kaf.str.pro.int.StreamThread] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1] Starting
2020-11-04 14:17:33,119 INFO  [org.apa.kaf.str.pro.int.StreamThread] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1] State transition from CREATED to STARTING
2020-11-04 14:17:33,119 INFO  [org.apa.kaf.str.pro.int.StreamThread] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2] Starting
2020-11-04 14:17:33,119 INFO  [org.apa.kaf.str.pro.int.StreamThread] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2] State transition from CREATED to STARTING
2020-11-04 14:17:33,120 INFO  [org.apa.kaf.cli.con.KafkaConsumer] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-consumer, groupId=registry_id] Subscribed to topic(s): global-id-topic, storage-topic
2020-11-04 14:17:33,120 INFO  [org.apa.kaf.cli.con.KafkaConsumer] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2-consumer, groupId=registry_id] Subscribed to topic(s): global-id-topic, storage-topic
2020-11-04 14:17:33,122 INFO  [io.api.reg.str.StreamsRegistryConfiguration] (main) Application server gRPC: 'localhost:9000'
2020-11-04 14:17:33,694 INFO  [org.apa.kaf.cli.Metadata] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2-consumer, groupId=registry_id] Cluster ID: 2LcONzwOQq-e7f9ONCpvTA
2020-11-04 14:17:33,694 INFO  [org.apa.kaf.cli.Metadata] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-consumer, groupId=registry_id] Cluster ID: 2LcONzwOQq-e7f9ONCpvTA
2020-11-04 14:17:33,697 INFO  [org.apa.kaf.cli.con.int.AbstractCoordinator] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-consumer, groupId=registry_id] Discovered group coordinator broker:9092 (id: 2147483646 rack: null)
2020-11-04 14:17:33,697 INFO  [org.apa.kaf.cli.con.int.AbstractCoordinator] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2-consumer, groupId=registry_id] Discovered group coordinator broker:9092 (id: 2147483646 rack: null)
2020-11-04 14:17:33,709 INFO  [org.apa.kaf.cli.con.int.AbstractCoordinator] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2-consumer, groupId=registry_id] (Re-)joining group
2020-11-04 14:17:33,709 INFO  [org.apa.kaf.cli.con.int.AbstractCoordinator] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-consumer, groupId=registry_id] (Re-)joining group
2020-11-04 14:17:33,756 INFO  [org.apa.kaf.cli.con.int.AbstractCoordinator] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-consumer, groupId=registry_id] Join group failed with org.apache.kafka.common.errors.MemberIdRequiredException: The group member needs to have a valid member id before actually entering a consumer group
2020-11-04 14:17:33,756 INFO  [org.apa.kaf.cli.con.int.AbstractCoordinator] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-consumer, groupId=registry_id] (Re-)joining group
2020-11-04 14:17:33,757 INFO  [org.apa.kaf.cli.con.int.AbstractCoordinator] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2-consumer, groupId=registry_id] Join group failed with org.apache.kafka.common.errors.MemberIdRequiredException: The group member needs to have a valid member id before actually entering a consumer group
2020-11-04 14:17:33,758 INFO  [org.apa.kaf.cli.con.int.AbstractCoordinator] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2-consumer, groupId=registry_id] (Re-)joining group
2020-11-04 14:17:34,076 INFO  [io.quarkus] (main) apicurio-registry-storage-streams 1.3.2.Final on JVM (powered by Quarkus 1.9.0.Final) started in 4.732s. Listening on: http://0.0.0.0:8080
2020-11-04 14:17:34,077 INFO  [io.quarkus] (main) Profile prod activated. 
2020-11-04 14:17:34,077 INFO  [io.quarkus] (main) Installed features: [cdi, resteasy, resteasy-jackson, servlet, smallrye-health, smallrye-metrics, smallrye-openapi]
2020-11-04 14:17:39,793 WARN  [org.apa.kaf.str.pro.int.ass.StickyTaskAssignor] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) Unable to assign 1 of 1 standby tasks for task [0_0]. There is not enough available capacity. You should increase the number of threads and/or application instances to maintain the requested number of standby replicas.
2020-11-04 14:17:39,793 WARN  [org.apa.kaf.str.pro.int.ass.StickyTaskAssignor] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) Unable to assign 1 of 1 standby tasks for task [1_0]. There is not enough available capacity. You should increase the number of threads and/or application instances to maintain the requested number of standby replicas.
2020-11-04 14:17:39,797 INFO  [org.apa.kaf.str.pro.int.StreamsPartitionAssignor] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-consumer] Assigned tasks to clients as 
0aafbcc9-8e3c-4635-83c0-60dab2f83d3f=[activeTasks: ([0_0, 1_0]) standbyTasks: ([]) assignedTasks: ([0_0, 1_0]) prevActiveTasks: ([]) prevStandbyTasks: ([]) prevAssignedTasks: ([]) prevOwnedPartitionsByConsumerId: ([]) capacity: 2].
2020-11-04 14:17:39,808 INFO  [org.apa.kaf.cli.con.int.ConsumerCoordinator] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-consumer, groupId=registry_id] Finished assignment for group at generation 1: {registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2-consumer-09029163-14e8-4199-836e-ad0de4ed5836=Assignment(partitions=[global-id-topic-0], userDataSize=134), registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-consumer-81ac4869-e6e1-4f7d-a996-8aa7a19f0064=Assignment(partitions=[storage-topic-0], userDataSize=134)}
2020-11-04 14:17:39,814 INFO  [org.apa.kaf.cli.con.int.AbstractCoordinator] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2-consumer, groupId=registry_id] Successfully joined group with generation 1
2020-11-04 14:17:39,814 INFO  [org.apa.kaf.cli.con.int.AbstractCoordinator] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-consumer, groupId=registry_id] Successfully joined group with generation 1
2020-11-04 14:17:39,814 INFO  [org.apa.kaf.cli.con.int.ConsumerCoordinator] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2-consumer, groupId=registry_id] Updating assignment with
now assigned partitions: global-id-topic-0
compare with previously owned partitions: 
newly added partitions: global-id-topic-0
revoked partitions: 

2020-11-04 14:17:39,815 INFO  [org.apa.kaf.cli.con.int.ConsumerCoordinator] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-consumer, groupId=registry_id] Updating assignment with
now assigned partitions: storage-topic-0
compare with previously owned partitions: 
newly added partitions: storage-topic-0
revoked partitions: 

2020-11-04 14:17:39,824 INFO  [org.apa.kaf.cli.con.int.ConsumerCoordinator] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2-consumer, groupId=registry_id] Adding newly assigned partitions: global-id-topic-0
2020-11-04 14:17:39,824 INFO  [org.apa.kaf.cli.con.int.ConsumerCoordinator] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-consumer, groupId=registry_id] Adding newly assigned partitions: storage-topic-0
2020-11-04 14:17:39,825 INFO  [org.apa.kaf.str.pro.int.StreamThread] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2] State transition from STARTING to PARTITIONS_ASSIGNED
2020-11-04 14:17:39,825 INFO  [org.apa.kaf.str.pro.int.StreamThread] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1] State transition from STARTING to PARTITIONS_ASSIGNED
2020-11-04 14:17:39,842 INFO  [org.apa.kaf.str.pro.int.StreamThread] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1] Creating producer client for task 0_0
2020-11-04 14:17:39,842 INFO  [org.apa.kaf.str.pro.int.StreamThread] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2] Creating producer client for task 1_0
2020-11-04 14:17:39,851 INFO  [org.apa.kaf.cli.pro.ProducerConfig] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) ProducerConfig values: 
    acks = -1
    batch.size = 16384
    bootstrap.servers = [broker:9092]
    buffer.memory = 33554432
    client.dns.lookup = default
    client.id = registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-0_0-producer
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 2147483647
    enable.idempotence = true
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    linger.ms = 100
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metadata.max.idle.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.2
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = registry_id-0_0
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer

2020-11-04 14:17:39,849 INFO  [org.apa.kaf.cli.pro.ProducerConfig] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) ProducerConfig values: 
    acks = -1
    batch.size = 16384
    bootstrap.servers = [broker:9092]
    buffer.memory = 33554432
    client.dns.lookup = default
    client.id = registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2-1_0-producer
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 2147483647
    enable.idempotence = true
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    linger.ms = 100
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metadata.max.idle.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.2
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = registry_id-1_0
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer

2020-11-04 14:17:39,875 INFO  [org.apa.kaf.cli.pro.KafkaProducer] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) [Producer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-0_0-producer, transactionalId=registry_id-0_0] Instantiated a transactional producer.
2020-11-04 14:17:39,875 INFO  [org.apa.kaf.cli.pro.KafkaProducer] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) [Producer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2-1_0-producer, transactionalId=registry_id-1_0] Instantiated a transactional producer.
2020-11-04 14:17:39,894 INFO  [org.apa.kaf.cli.pro.KafkaProducer] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) [Producer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-0_0-producer, transactionalId=registry_id-0_0] Overriding the default retries config to the recommended value of 2147483647 since the idempotent producer is enabled.
2020-11-04 14:17:39,894 INFO  [org.apa.kaf.cli.pro.KafkaProducer] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) [Producer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-0_0-producer, transactionalId=registry_id-0_0] Overriding the default acks to all since idempotence is enabled.
2020-11-04 14:17:39,895 INFO  [org.apa.kaf.cli.pro.KafkaProducer] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) [Producer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2-1_0-producer, transactionalId=registry_id-1_0] Overriding the default retries config to the recommended value of 2147483647 since the idempotent producer is enabled.
2020-11-04 14:17:39,895 INFO  [org.apa.kaf.cli.pro.KafkaProducer] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) [Producer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2-1_0-producer, transactionalId=registry_id-1_0] Overriding the default acks to all since idempotence is enabled.
2020-11-04 14:17:39,905 WARN  [org.apa.kaf.cli.pro.ProducerConfig] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) The configuration 'storage.topic' was supplied but isn't a known config.
2020-11-04 14:17:39,905 WARN  [org.apa.kaf.cli.pro.ProducerConfig] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) The configuration 'global.id.topic' was supplied but isn't a known config.
2020-11-04 14:17:39,905 INFO  [org.apa.kaf.com.uti.AppInfoParser] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) Kafka version: 2.5.0
2020-11-04 14:17:39,906 WARN  [org.apa.kaf.cli.pro.ProducerConfig] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) The configuration 'storage.topic' was supplied but isn't a known config.
2020-11-04 14:17:39,907 WARN  [org.apa.kaf.cli.pro.ProducerConfig] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) The configuration 'global.id.topic' was supplied but isn't a known config.
2020-11-04 14:17:39,905 INFO  [org.apa.kaf.com.uti.AppInfoParser] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) Kafka commitId: 66563e712b0b9f84
2020-11-04 14:17:39,907 INFO  [org.apa.kaf.com.uti.AppInfoParser] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) Kafka startTimeMs: 1604499459905
2020-11-04 14:17:39,914 INFO  [org.apa.kaf.com.uti.AppInfoParser] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) Kafka version: 2.5.0
2020-11-04 14:17:39,915 INFO  [org.apa.kaf.com.uti.AppInfoParser] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) Kafka commitId: 66563e712b0b9f84
2020-11-04 14:17:39,915 INFO  [org.apa.kaf.com.uti.AppInfoParser] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) Kafka startTimeMs: 1604499459907
2020-11-04 14:17:39,921 INFO  [org.apa.kaf.cli.Metadata] (kafka-producer-network-thread | registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2-1_0-producer) [Producer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2-1_0-producer, transactionalId=registry_id-1_0] Cluster ID: 2LcONzwOQq-e7f9ONCpvTA
2020-11-04 14:17:39,923 INFO  [org.apa.kaf.cli.Metadata] (kafka-producer-network-thread | registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-0_0-producer) [Producer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-0_0-producer, transactionalId=registry_id-0_0] Cluster ID: 2LcONzwOQq-e7f9ONCpvTA
2020-11-04 14:17:39,940 INFO  [org.apa.kaf.cli.pro.int.TransactionManager] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) [Producer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-0_0-producer, transactionalId=registry_id-0_0] Invoking InitProducerId for the first time in order to acquire a producer ID
2020-11-04 14:17:39,940 INFO  [org.apa.kaf.cli.pro.int.TransactionManager] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) [Producer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2-1_0-producer, transactionalId=registry_id-1_0] Invoking InitProducerId for the first time in order to acquire a producer ID
2020-11-04 14:18:39,876 ERROR [org.apa.kaf.str.pro.int.StreamTask] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1] task [0_0] Timeout exception caught when initializing transactions for task 0_0. This might happen if the broker is slow to respond, if the network connection to the broker was interrupted, or if similar circumstances arise. You can increase producer parameter `max.block.ms` to increase this timeout.: org.apache.kafka.common.errors.TimeoutException: Timeout expired after 60000milliseconds while awaiting InitProducerId

2020-11-04 14:18:39,876 ERROR [org.apa.kaf.str.pro.int.StreamTask] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2] task [1_0] Timeout exception caught when initializing transactions for task 1_0. This might happen if the broker is slow to respond, if the network connection to the broker was interrupted, or if similar circumstances arise. You can increase producer parameter `max.block.ms` to increase this timeout.: org.apache.kafka.common.errors.TimeoutException: Timeout expired after 60000milliseconds while awaiting InitProducerId

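(Note on the two timeout errors above: `InitProducerId` hanging until `max.block.ms` expires is a classic symptom of a single-broker dev cluster that cannot create the internal `__transaction_state` topic, because the broker-side defaults `transaction.state.log.replication.factor=3` and `transaction.state.log.min.isr=2` cannot be satisfied with one broker. Raising `max.block.ms`, as the log message suggests, only lengthens the wait; the transaction coordinator still never becomes available. A sketch of the likely fix, assuming the `broker` service in the compose file is a Confluent `cp-kafka`-style image that maps `KAFKA_*` environment variables to broker properties — adjust the mechanism if your broker image configures itself differently:)

```yaml
# Hypothetical additions to the broker service in docker-compose,
# so the __transaction_state topic can be created on a 1-node cluster.
broker:
  environment:
    KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
    KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
```

(With those overrides in place, the transactional producers created for tasks 0_0 and 1_0 should complete `InitProducerId` instead of timing out, and the rebalance below should succeed.)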
2020-11-04 14:18:39,878 ERROR [org.apa.kaf.str.pro.int.StreamThread] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2] Error caught during partition assignment, will abort the current process and re-throw at the end of rebalance: org.apache.kafka.streams.errors.StreamsException: stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2] task [1_0] Failed to initialize task 1_0 due to timeout.
    at org.apache.kafka.streams.processor.internals.StreamTask.initializeTransactions(StreamTask.java:923)
    at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:206)
    at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:115)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:352)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:310)
    at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.createTasks(StreamThread.java:295)
    at org.apache.kafka.streams.processor.internals.TaskManager.addNewActiveTasks(TaskManager.java:160)
    at org.apache.kafka.streams.processor.internals.TaskManager.createTasks(TaskManager.java:120)
    at org.apache.kafka.streams.processor.internals.StreamsRebalanceListener.onPartitionsAssigned(StreamsRebalanceListener.java:77)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.invokePartitionsAssigned(ConsumerCoordinator.java:278)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:419)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:439)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:358)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:490)
    at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1275)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1241)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1216)
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:853)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:753)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:697)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:670)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timeout expired after 60000milliseconds while awaiting InitProducerId

2020-11-04 14:18:39,878 ERROR [org.apa.kaf.str.pro.int.StreamThread] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1] Error caught during partition assignment, will abort the current process and re-throw at the end of rebalance: org.apache.kafka.streams.errors.StreamsException: stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1] task [0_0] Failed to initialize task 0_0 due to timeout.
    at org.apache.kafka.streams.processor.internals.StreamTask.initializeTransactions(StreamTask.java:923)
    at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:206)
    at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:115)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:352)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:310)
    at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.createTasks(StreamThread.java:295)
    at org.apache.kafka.streams.processor.internals.TaskManager.addNewActiveTasks(TaskManager.java:160)
    at org.apache.kafka.streams.processor.internals.TaskManager.createTasks(TaskManager.java:120)
    at org.apache.kafka.streams.processor.internals.StreamsRebalanceListener.onPartitionsAssigned(StreamsRebalanceListener.java:77)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.invokePartitionsAssigned(ConsumerCoordinator.java:278)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:419)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:439)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:358)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:490)
    at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1275)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1241)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1216)
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:853)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:753)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:697)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:670)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timeout expired after 60000milliseconds while awaiting InitProducerId

2020-11-04 14:18:39,879 INFO  [org.apa.kaf.str.pro.int.StreamThread] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2] partition assignment took 60054 ms.
    currently assigned active tasks: []
    currently assigned standby tasks: []
    revoked active tasks: []
    revoked standby tasks: []

2020-11-04 14:18:39,879 INFO  [org.apa.kaf.str.pro.int.StreamThread] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1] partition assignment took 60054 ms.
    currently assigned active tasks: []
    currently assigned standby tasks: []
    revoked active tasks: []
    revoked standby tasks: []

2020-11-04 14:18:39,904 INFO  [org.apa.kaf.cli.con.int.AbstractCoordinator] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-consumer, groupId=registry_id] Group coordinator broker:9092 (id: 2147483646 rack: null) is unavailable or invalid, will attempt rediscovery
2020-11-04 14:18:39,904 INFO  [org.apa.kaf.cli.con.int.AbstractCoordinator] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2-consumer, groupId=registry_id] Group coordinator broker:9092 (id: 2147483646 rack: null) is unavailable or invalid, will attempt rediscovery
2020-11-04 14:18:39,912 ERROR [org.apa.kaf.str.pro.int.StreamThread] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1] Encountered the following unexpected Kafka exception during processing, this usually indicate Streams internal errors:: org.apache.kafka.streams.errors.StreamsException: stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1] Failed to rebalance.
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:862)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:753)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:697)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:670)
Caused by: org.apache.kafka.streams.errors.StreamsException: stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1] task [0_0] Failed to initialize task 0_0 due to timeout.
    at org.apache.kafka.streams.processor.internals.StreamTask.initializeTransactions(StreamTask.java:923)
    at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:206)
    at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:115)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:352)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:310)
    at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.createTasks(StreamThread.java:295)
    at org.apache.kafka.streams.processor.internals.TaskManager.addNewActiveTasks(TaskManager.java:160)
    at org.apache.kafka.streams.processor.internals.TaskManager.createTasks(TaskManager.java:120)
    at org.apache.kafka.streams.processor.internals.StreamsRebalanceListener.onPartitionsAssigned(StreamsRebalanceListener.java:77)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.invokePartitionsAssigned(ConsumerCoordinator.java:278)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:419)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:439)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:358)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:490)
    at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1275)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1241)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1216)
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:853)
    ... 3 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Timeout expired after 60000milliseconds while awaiting InitProducerId

2020-11-04 14:18:39,913 INFO  [org.apa.kaf.str.pro.int.StreamThread] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1] State transition from PARTITIONS_ASSIGNED to PENDING_SHUTDOWN
2020-11-04 14:18:39,912 ERROR [org.apa.kaf.str.pro.int.StreamThread] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2] Encountered the following unexpected Kafka exception during processing, this usually indicate Streams internal errors:: org.apache.kafka.streams.errors.StreamsException: stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2] Failed to rebalance.
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:862)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:753)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:697)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:670)
Caused by: org.apache.kafka.streams.errors.StreamsException: stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2] task [1_0] Failed to initialize task 1_0 due to timeout.
    at org.apache.kafka.streams.processor.internals.StreamTask.initializeTransactions(StreamTask.java:923)
    at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:206)
    at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:115)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:352)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:310)
    at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.createTasks(StreamThread.java:295)
    at org.apache.kafka.streams.processor.internals.TaskManager.addNewActiveTasks(TaskManager.java:160)
    at org.apache.kafka.streams.processor.internals.TaskManager.createTasks(TaskManager.java:120)
    at org.apache.kafka.streams.processor.internals.StreamsRebalanceListener.onPartitionsAssigned(StreamsRebalanceListener.java:77)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.invokePartitionsAssigned(ConsumerCoordinator.java:278)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:419)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:439)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:358)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:490)
    at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1275)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1241)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1216)
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:853)
    ... 3 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Timeout expired after 60000milliseconds while awaiting InitProducerId

2020-11-04 14:18:39,913 INFO  [org.apa.kaf.str.pro.int.StreamThread] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1] Shutting down
2020-11-04 14:18:39,913 INFO  [org.apa.kaf.str.pro.int.StreamThread] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2] State transition from PARTITIONS_ASSIGNED to PENDING_SHUTDOWN
2020-11-04 14:18:39,913 INFO  [org.apa.kaf.str.pro.int.StreamThread] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2] Shutting down
2020-11-04 14:18:39,914 INFO  [org.apa.kaf.cli.con.KafkaConsumer] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions
2020-11-04 14:18:39,914 INFO  [org.apa.kaf.cli.con.KafkaConsumer] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) [Consumer clientId=registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions
2020-11-04 14:18:39,937 INFO  [org.apa.kaf.str.pro.int.StreamThread] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1] State transition from PENDING_SHUTDOWN to DEAD
2020-11-04 14:18:39,937 INFO  [org.apa.kaf.str.pro.int.StreamThread] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-1] Shutdown complete
2020-11-04 14:18:39,938 INFO  [org.apa.kaf.str.pro.int.StreamThread] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2] State transition from PENDING_SHUTDOWN to DEAD
2020-11-04 14:18:39,938 INFO  [org.apa.kaf.str.KafkaStreams] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) stream-client [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f] State transition from REBALANCING to ERROR
2020-11-04 14:18:39,939 ERROR [org.apa.kaf.str.KafkaStreams] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) stream-client [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f] All stream threads have died. The instance will be in error state and should be closed.
2020-11-04 14:18:39,939 INFO  [org.apa.kaf.str.pro.int.StreamThread] (registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2) stream-thread [registry_id-0aafbcc9-8e3c-4635-83c0-60dab2f83d3f-StreamThread-2] Shutdown complete
EricWittmann commented 3 years ago

It seems we aren't sure what might cause this problem. :( Have you made any progress on your end, @ebbnflow ?

famarting commented 3 years ago

@ebbnflow could you share your complete docker compose file?

EricWittmann commented 3 years ago

Here are some responses I've gotten from internal Kafka experts, but they are still looking...

It looks like an issue that is marked as fixed in 2.5.0 and the log is from using 2.5.0. :(

I've not looked closely, but it looks simply like the producer couldn't get a PID (needed for transactions). The root cause for that will lie on the broker, rather than the client.
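
One common broker-side cause of an InitProducerId timeout on a single-broker dev setup is that the transaction state log topic can never reach its default replication factor of 3, so the transactional producer blocks until the timeout expires. The sketch below is a hypothetical fix for a compose-based dev cluster — it assumes the confluentinc/cp-kafka image, whose KAFKA_* environment variables map to the standard broker properties transaction.state.log.replication.factor, transaction.state.log.min.isr, and offsets.topic.replication.factor; adjust for whatever broker image is actually in use.

```yaml
broker:
  image: confluentinc/cp-kafka:5.5.1   # hypothetical image/tag for illustration
  environment:
    # With one broker, the __transaction_state topic cannot satisfy its
    # default replication factor of 3, so InitProducerId hangs until the
    # producer's max.block.ms expires. Lower the defaults for dev use only.
    KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
    KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```

These overrides are only appropriate for local development; on a production cluster the defaults exist to protect transaction metadata from broker loss.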

sroze commented 3 years ago

Did anybody manage to get around this TimeoutException: Timeout expired after 60000milliseconds while awaiting InitProducerId issue? I'm hitting the same thing when using Kafka Streams with an AWS MSK 2.7.0 cluster.

EricWittmann commented 3 years ago

@alesj You were not able to reproduce this, correct? @famartinrh Is it possible to try AWS MSK?

alesj commented 3 years ago

Yeah, cannot reproduce it ...

famarting commented 3 years ago

I think it won't be possible for me to try AWS MSK, but I can try to test Kafka version 2.7.0, which looks to be the root cause of the issue

carlesarnal commented 2 years ago

Closing, as we no longer support the Streams storage. @ebbnflow if you want to use Kafka with Registry 2.x, I recommend taking a look at the kafkasql storage variant.
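
For anyone landing here later, the kafkasql variant can replace the streams image in a compose file with minimal changes. A minimal sketch, assuming the apicurio/apicurio-registry-kafkasql image and the KAFKA_BOOTSTRAP_SERVERS environment variable it reads — check the image documentation for the exact variable names in your version:

```yaml
schemaregistry:
  container_name: schemaregistry
  image: apicurio/apicurio-registry-kafkasql:2.0.0.Final  # example tag
  ports:
    - 8081:8080
  environment:
    # kafkasql persists registry state in a single Kafka topic rather than
    # Kafka Streams state stores, so no Streams rebalancing is involved.
    KAFKA_BOOTSTRAP_SERVERS: broker:9092
```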