Closed bbdick closed 3 years ago
So... it appears that you are missing some custom (non-sleuth, non-stream) auto-configuration which is responsible for the pie.rosettaPieGroup.errors.recoverer bean:
No bean named 'pie.rosettaPieGroup.errors.recoverer' available
So, that’s the main error. Because of that, the ApplicationContext quits, and the rest is just fallout from the original error.
Error creating bean with name 'org.springframework.integration.config.IdGeneratorConfigurer#0': Singleton bean creation not allowed while singletons of this factory are in destruction
So, please figure out where/why pie.rosettaPieGroup.errors.recoverer is missing.
I'll close the issue then.
No, that error appeared only on application exit with the work-around (downgrading spring-cloud-sleuth to 2.2.6). Please ignore that error, as it is only a warning; I shouldn't have mentioned it, since it only clouds the original issue I am trying to report. Please check the exception stack at the top of this ticket, where the application won't even start up with spring-cloud-sleuth 2.2.7.RELEASE. Also, this is only an issue when spring.cloud.stream.binders is declared in the configuration; the application fails to start up after the line "Creating binder scl". If I don't use spring.cloud.stream.binders, but rather just have the following:
spring.cloud.stream:
  kafka.binder.brokers: "localhost"
then the application still starts up using all the Hoxton.SR10 library versions. So the culprit is in creating the binder object. If you want a sample application, I can provide one.
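For contrast, the failing setup declares named binders under spring.cloud.stream.binders rather than the single default binder. A minimal sketch of that shape (binder names and the broker address here are hypothetical; only the outputRequest channel name comes from this ticket):

```yaml
# Sketch only: named-binder configuration of the kind reported to fail
# after "Creating binder scl". Binder names and broker are placeholders.
spring.cloud.stream:
  binders:
    kafkaBinderOne:
      type: kafka
      environment:
        spring.cloud.stream.kafka.binder.brokers: "localhost"
    kafkaBinderTwo:
      type: kafka
      environment:
        spring.cloud.stream.kafka.binder.brokers: "localhost"
  bindings:
    outputRequest:
      destination: testOutput
      binder: kafkaBinderOne
```

With this shape, Spring Cloud Stream builds a child ApplicationContext per named binder, which is the code path the reporter says fails; the single-binder form (spring.cloud.stream.kafka.binder.brokers) skips it.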
You must not be providing the full stack trace, since there is absolutely nothing of value in the first stack trace. The second stack trace is already about shutting down, and everything I explained earlier applies. So if you want additional help, there are a few things you can do.
Until we can reproduce it, there is no issue; I hope you understand.
Apologies for the lack of information. I have created a bare-bones application at https://github.com/bbdick/kafkaBinderDemo/tree/develop. However, I am only able to duplicate the shutdown warnings with this bare-bones application. In addition, the warning happens regardless of whether spring-cloud-starter-sleuth is on the classpath, so this is no longer a spring-cloud-sleuth issue. Please advise where to report this; perhaps back to spring-cloud?
@marcingrzejszczak I am reopening it and will take a look
@bbdick just an FYI, we have long deprecated the annotation-based configuration model for stream, so StreamListener, Input/Output, and EnableBinding are soon to be gone completely. Please migrate to the functional approach as described in the documentation.
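For reference, the functional model replaces @EnableBinding/@StreamListener/@Input/@Output with plain java.util.function beans. A minimal sketch (method names are hypothetical; in a real application this class would carry @Configuration, each method @Bean, and Spring Cloud Stream would derive binding names such as uppercase-in-0/uppercase-out-0):

```java
import java.util.function.Consumer;
import java.util.function.Function;

// Sketch of the functional programming model that replaces the deprecated
// annotation-based model. Spring-specific annotations are omitted so the
// snippet stands alone; see the Spring Cloud Stream docs for the full wiring.
public class StreamFunctionsSketch {

    // Replaces a @StreamListener on an @Input channel: consume each payload.
    public Consumer<String> logInput() {
        return payload -> System.out.println("received: " + payload);
    }

    // Replaces a processor bound to an @Input and an @Output channel:
    // transform each payload and forward the result downstream.
    public Function<String, String> uppercase() {
        return payload -> payload.toUpperCase();
    }
}
```

Which functions are bound is then selected with the spring.cloud.function.definition property rather than annotations.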
I just cloned and started your application, and it started without any warnings or errors:
2021-03-01 06:36:17.597 INFO 37081 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.5.1
2021-03-01 06:36:17.597 INFO 37081 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 0efa8fb0f4c73d92
2021-03-01 06:36:17.597 INFO 37081 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1614576977597
2021-03-01 06:36:17.606 INFO 37081 --- [ad | producer-1] org.apache.kafka.clients.Metadata : [Producer clientId=producer-1] Cluster ID: -nrPO15FTSuJJaJFgnuahA
2021-03-01 06:36:17.607 INFO 37081 --- [ main] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 30000 ms.
2021-03-01 06:36:17.620 INFO 37081 --- [ main] o.s.c.s.m.DirectWithAttributesChannel : Channel 'kafkaBinderDemo.testOutput' has 1 subscriber(s).
2021-03-01 06:36:17.637 INFO 37081 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8088 (http) with context path '/v1'
2021-03-01 06:36:17.644 INFO 37081 --- [ main] c.e.k.KafkaBinderDemoApplication : Started KafkaBinderDemoApplication in 2.757 seconds (JVM running for 3.112)
What am I missing?
The warnings occur only on application shutdown using the local profile (where stream binders are configured). I have also added a new branch, https://github.com/bbdick/kafkaBinderDemo/tree/demo/inputBinder, to show the corresponding warning when @Input bindings are used in the application. The shutdown warning shows "Failed to stop bean 'inputBindingLifecycle'", followed by a stack trace.
I am closing it as I do not see any issues, either on startup or shutdown. Perhaps something stale in your Maven repo is getting into the classpath... a simple refresh of the repo should clear it up... Anyway, here is the output (I've added AC.stop to test the shutdown):
...
2021-03-02 08:32:43.822 INFO 62893 --- [ main] o.s.c.stream.binder.BinderErrorChannel : Channel 'testInput.anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d.errors' has 1 subscriber(s).
2021-03-02 08:32:43.822 INFO 62893 --- [ main] o.s.c.stream.binder.BinderErrorChannel : Channel 'testInput.anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d.errors' has 0 subscriber(s).
2021-03-02 08:32:43.822 INFO 62893 --- [ main] o.s.c.stream.binder.BinderErrorChannel : Channel 'testInput.anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d.errors' has 1 subscriber(s).
2021-03-02 08:32:43.822 INFO 62893 --- [ main] o.s.c.stream.binder.BinderErrorChannel : Channel 'testInput.anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d.errors' has 2 subscriber(s).
2021-03-02 08:32:43.836 INFO 62893 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 100
auto.offset.reset = latest
bootstrap.servers = [localhost:9092]
check.crcs = true
client.dns.lookup = default
client.id =
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
2021-03-02 08:32:43.840 INFO 62893 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.5.1
2021-03-02 08:32:43.840 INFO 62893 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 0efa8fb0f4c73d92
2021-03-02 08:32:43.840 INFO 62893 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1614670363840
2021-03-02 08:32:43.841 INFO 62893 --- [ main] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d-2, groupId=anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d] Subscribed to topic(s): testInput
2021-03-02 08:32:43.843 INFO 62893 --- [ main] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService
2021-03-02 08:32:43.846 INFO 62893 --- [ main] s.i.k.i.KafkaMessageDrivenChannelAdapter : started org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter@7885776b
2021-03-02 08:32:43.852 INFO 62893 --- [container-0-C-1] org.apache.kafka.clients.Metadata : [Consumer clientId=consumer-anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d-2, groupId=anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d] Cluster ID: -nrPO15FTSuJJaJFgnuahA
2021-03-02 08:32:43.853 INFO 62893 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d-2, groupId=anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d] Discovered group coordinator localhost:9092 (id: 2147483647 rack: null)
2021-03-02 08:32:43.855 INFO 62893 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d-2, groupId=anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d] (Re-)joining group
2021-03-02 08:32:43.865 INFO 62893 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8088 (http) with context path '/v1'
2021-03-02 08:32:43.867 INFO 62893 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d-2, groupId=anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d] Join group failed with org.apache.kafka.common.errors.MemberIdRequiredException: The group member needs to have a valid member id before actually entering a consumer group
2021-03-02 08:32:43.867 INFO 62893 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d-2, groupId=anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d] (Re-)joining group
2021-03-02 08:32:43.871 INFO 62893 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d-2, groupId=anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d] Finished assignment for group at generation 1: {consumer-anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d-2-4a52371d-72ce-454d-bf68-79c432188ed4=Assignment(partitions=[testInput-0])}
2021-03-02 08:32:43.876 INFO 62893 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d-2, groupId=anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d] Successfully joined group with generation 1
2021-03-02 08:32:43.879 INFO 62893 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d-2, groupId=anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d] Adding newly assigned partitions: testInput-0
2021-03-02 08:32:43.883 INFO 62893 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d-2, groupId=anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d] Found no committed offset for partition testInput-0
2021-03-02 08:32:43.885 INFO 62893 --- [ main] c.e.k.KafkaBinderDemoApplication : Started KafkaBinderDemoApplication in 3.04 seconds (JVM running for 3.391)
2021-03-02 08:32:43.888 INFO 62893 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d-2, groupId=anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d] Found no committed offset for partition testInput-0
2021-03-02 08:32:43.899 INFO 62893 --- [container-0-C-1] o.a.k.c.c.internals.SubscriptionState : [Consumer clientId=consumer-anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d-2, groupId=anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d] Resetting offset for partition testInput-0 to offset 0.
2021-03-02 08:32:43.899 INFO 62893 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2021-03-02 08:32:43.906 INFO 62893 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder$1 : anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d: partitions assigned: [testInput-0]
2021-03-02 08:32:43.911 INFO 62893 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d-2, groupId=anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d] Revoke previously assigned partitions testInput-0
2021-03-02 08:32:43.911 INFO 62893 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder$1 : anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d: partitions revoked: [testInput-0]
2021-03-02 08:32:43.912 INFO 62893 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d-2, groupId=anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d] Member consumer-anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d-2-4a52371d-72ce-454d-bf68-79c432188ed4 sending LeaveGroup request to coordinator localhost:9092 (id: 2147483647 rack: null) due to the consumer unsubscribed from all topics
2021-03-02 08:32:43.913 INFO 62893 --- [container-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d-2, groupId=anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d] Unsubscribed all topics or patterns and assigned partitions
2021-03-02 08:32:43.913 INFO 62893 --- [container-0-C-1] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService
2021-03-02 08:32:43.919 INFO 62893 --- [container-0-C-1] essageListenerContainer$ListenerConsumer : anonymous.ce62b2e7-8ce4-4051-8c9e-98ef2807bb8d: Consumer stopped
2021-03-02 08:32:43.920 INFO 62893 --- [ main] s.i.k.i.KafkaMessageDrivenChannelAdapter : stopped org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter@7885776b
2021-03-02 08:32:43.921 INFO 62893 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Removing {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
2021-03-02 08:32:43.921 INFO 62893 --- [ main] o.s.i.channel.PublishSubscribeChannel : Channel 'kafkaBinderDemo.errorChannel' has 0 subscriber(s).
2021-03-02 08:32:43.921 INFO 62893 --- [ main] o.s.i.endpoint.EventDrivenConsumer : stopped bean '_org.springframework.integration.errorLogger'
2021-03-02 08:32:43.925 INFO 62893 --- [extShutdownHook] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService 'taskScheduler'
2021-03-02 08:32:43.926 INFO 62893 --- [extShutdownHook] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
As mentioned, the warning occurs only when spring.cloud.stream.binders is defined, which I did in the local profile only. From your log I see that you are running the default profile, which I included for comparison purposes with the local profile. Please try again, specifying the local profile:
java -Dspring.profiles.active=local -jar target/kafkaBinderDemo-0.0.1-SNAPSHOT.jar
The subscriber topic in the local profile is "testInputDev"; in your log you have "testInput", which comes from the default profile. See this line:
2021-03-02 07:48:57.580 INFO 5893 --- [ main] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-demoGroup-2, groupId=demoGroup] Subscribed to topic(s): testInputDev
I have refreshed my Maven repo, then ran the local profile, and I am still seeing warnings on shutdown for both the input and output bindings.
2021-03-02 07:49:53.482 INFO 5893 --- [container-0-C-1] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService
2021-03-02 07:49:53.486 WARN 5893 --- [extShutdownHook] o.s.c.support.DefaultLifecycleProcessor : Failed to stop bean 'outerContext'
org.springframework.beans.factory.BeanCreationNotAllowedException: Error creating bean with name 'org.springframework.integration.config.IdGeneratorConfigurer#0': Singleton bean creation not allowed while singletons of this factory are in destruction (Do not request a bean from a BeanFactory in a destroy method implementation!)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:220) ~[spring-beans-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:322) ~[spring-beans-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:207) ~[spring-beans-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.event.AbstractApplicationEventMulticaster.retrieveApplicationListeners(AbstractApplicationEventMulticaster.java:247) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.event.AbstractApplicationEventMulticaster.getApplicationListeners(AbstractApplicationEventMulticaster.java:204) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:134) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:404) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:361) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.stop(AbstractApplicationContext.java:1371) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.DefaultLifecycleProcessor.doStop(DefaultLifecycleProcessor.java:251) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.DefaultLifecycleProcessor.access$300(DefaultLifecycleProcessor.java:53) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.stop(DefaultLifecycleProcessor.java:377) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.DefaultLifecycleProcessor.stopBeans(DefaultLifecycleProcessor.java:210) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.DefaultLifecycleProcessor.onClose(DefaultLifecycleProcessor.java:128) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.doClose(AbstractApplicationContext.java:1022) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.AbstractApplicationContext$1.run(AbstractApplicationContext.java:949) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
2021-03-02 07:49:53.492 INFO 5893 --- [container-0-C-1] essageListenerContainer$ListenerConsumer : demoGroup: Consumer stopped
2021-03-02 07:49:53.494 INFO 5893 --- [extShutdownHook] s.i.k.i.KafkaMessageDrivenChannelAdapter : stopped org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter@20011bf
2021-03-02 07:49:53.496 WARN 5893 --- [extShutdownHook] o.s.c.support.DefaultLifecycleProcessor : Failed to stop bean 'inputBindingLifecycle'
org.springframework.beans.factory.BeanCreationNotAllowedException: Error creating bean with name 'testInputDev.demoGroup.errors.bridge': Singleton bean creation not allowed while singletons of this factory are in destruction (Do not request a bean from a BeanFactory in a destroy method implementation!)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:220) ~[spring-beans-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:322) ~[spring-beans-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:207) ~[spring-beans-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1115) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.destroyErrorInfrastructure(AbstractMessageChannelBinder.java:796) ~[spring-cloud-stream-3.0.11.RELEASE.jar!/:3.0.11.RELEASE]
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.access$300(AbstractMessageChannelBinder.java:91) ~[spring-cloud-stream-3.0.11.RELEASE.jar!/:3.0.11.RELEASE]
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder$2.afterUnbind(AbstractMessageChannelBinder.java:444) ~[spring-cloud-stream-3.0.11.RELEASE.jar!/:3.0.11.RELEASE]
at org.springframework.cloud.stream.binder.DefaultBinding.unbind(DefaultBinding.java:176) ~[spring-cloud-stream-3.0.11.RELEASE.jar!/:3.0.11.RELEASE]
at org.springframework.cloud.stream.binding.BindingService.unbindConsumers(BindingService.java:351) ~[spring-cloud-stream-3.0.11.RELEASE.jar!/:3.0.11.RELEASE]
at org.springframework.cloud.stream.binding.AbstractBindableProxyFactory.unbindInputs(AbstractBindableProxyFactory.java:156) ~[spring-cloud-stream-3.0.11.RELEASE.jar!/:3.0.11.RELEASE]
at org.springframework.cloud.stream.binding.InputBindingLifecycle.doStopWithBindable(InputBindingLifecycle.java:66) ~[spring-cloud-stream-3.0.11.RELEASE.jar!/:3.0.11.RELEASE]
at java.base/java.util.LinkedHashMap$LinkedValues.forEach(LinkedHashMap.java:608) ~[na:na]
at org.springframework.cloud.stream.binding.AbstractBindingLifecycle.stop(AbstractBindingLifecycle.java:68) ~[spring-cloud-stream-3.0.11.RELEASE.jar!/:3.0.11.RELEASE]
at org.springframework.cloud.stream.binding.InputBindingLifecycle.stop(InputBindingLifecycle.java:34) ~[spring-cloud-stream-3.0.11.RELEASE.jar!/:3.0.11.RELEASE]
at org.springframework.cloud.stream.binding.AbstractBindingLifecycle.stop(AbstractBindingLifecycle.java:85) ~[spring-cloud-stream-3.0.11.RELEASE.jar!/:3.0.11.RELEASE]
at org.springframework.cloud.stream.binding.InputBindingLifecycle.stop(InputBindingLifecycle.java:34) ~[spring-cloud-stream-3.0.11.RELEASE.jar!/:3.0.11.RELEASE]
at org.springframework.context.support.DefaultLifecycleProcessor.doStop(DefaultLifecycleProcessor.java:238) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.DefaultLifecycleProcessor.access$300(DefaultLifecycleProcessor.java:53) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.stop(DefaultLifecycleProcessor.java:377) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.DefaultLifecycleProcessor.stopBeans(DefaultLifecycleProcessor.java:210) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.DefaultLifecycleProcessor.stop(DefaultLifecycleProcessor.java:116) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.stop(AbstractApplicationContext.java:1370) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.DefaultLifecycleProcessor.doStop(DefaultLifecycleProcessor.java:251) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.DefaultLifecycleProcessor.access$300(DefaultLifecycleProcessor.java:53) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.stop(DefaultLifecycleProcessor.java:377) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.DefaultLifecycleProcessor.stopBeans(DefaultLifecycleProcessor.java:210) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.DefaultLifecycleProcessor.onClose(DefaultLifecycleProcessor.java:128) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.doClose(AbstractApplicationContext.java:1022) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
at org.springframework.context.support.AbstractApplicationContext$1.run(AbstractApplicationContext.java:949) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
2021-03-02 07:50:23.500 INFO 5893 --- [extShutdownHook] o.s.c.support.DefaultLifecycleProcessor : Failed to shut down 1 bean with phase value 2147482647 within timeout of 30000ms: [inputBindingLifecycle]
2021-03-02 07:50:23.503 INFO 5893 --- [extShutdownHook] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
Thanks for your time; I appreciate your help.
Spring Boot version: 2.3.9.RELEASE; Spring Cloud: Hoxton.SR10; Kafka server: 5.2.1
Received the following error when starting the application. The application chokes on creating the spring.cloud.stream binder.
Here's the configuration YAML; for demo purposes the two binders both point to the same Kafka broker, but in prod we have different Kafka broker servers:
The channel outputRequest is created using the Spring Cloud Stream @Output annotation:
Our application uses both spring-cloud-sleuth and spring-cloud-stream. After some digging around, the problem seems to stem from spring-cloud-context 2.2.7.RELEASE, which is a dependency brought in by spring-cloud-starter-sleuth. So when we downgrade the spring-cloud-sleuth dependencies from 2.2.7.RELEASE (the default from Hoxton.SR10) to 2.2.6.RELEASE, the application is able to start.
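The downgrade work-around described above can be sketched in Maven as pinning the Sleuth artifacts ahead of the Hoxton.SR10 BOM (which defaults them to 2.2.7.RELEASE). This is an illustrative fragment, not the reporter's actual build file; the older spring-cloud-context then comes in transitively:

```xml
<!-- Sketch of the work-around: pin Sleuth back to 2.2.6.RELEASE so it wins
     over the version managed by the Hoxton.SR10 BOM. Adjust to your build. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-sleuth</artifactId>
      <version>2.2.6.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-sleuth-core</artifactId>
      <version>2.2.6.RELEASE</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Direct entries in a project's own dependencyManagement take precedence over versions imported from a BOM, which is what makes the pin effective.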
It is worth mentioning that even with this work-around, when shutting down the application I am seeing an intermittent warning when the Kafka binder is an @Output binder:
...and when the Kafka binder is an @Input binder: