Closed Scyta1e closed 9 years ago
One for @rajdavies probably?
This could be because, by default, the AMQ broker that Fabric8MQ controls has an in-memory message store.
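If the in-memory store is the culprit, one workaround would be switching the embedded broker to a persistent store. A minimal sketch, assuming a standard ActiveMQ `activemq.xml` (the `brokerName` and data directory here are illustrative):

```xml
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="fabric8mq" persistent="true">
  <!-- Persist messages to disk with KahaDB instead of holding them in memory -->
  <persistenceAdapter>
    <kahaDB directory="${activemq.data}/kahadb"/>
  </persistenceAdapter>
</broker>
```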
Btw, Fabric8MQ will likely be replaced by a MaaS solution based on the Qpid Dispatch Router.
Might be worth tweaking the Camel microservices producer example to generate a lower rate of messages by default? All I did was deploy it, and before I could get the consumer added it had blown up ;-) Obviously hardly a fix, but it might at least make it through a demo. I'll tweak it and retest.
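In the actual example the pacing would likely be done with Camel's `throttle` or `delay` EIP, but the idea can be sketched in plain Java. The class name, rate, and loop count below are all hypothetical, just illustrating a producer that sleeps between sends to hold a target messages-per-second rate:

```java
// Hypothetical sketch of a rate-limited producer loop (not the actual example code).
public class ThrottledProducer {

    // Delay between sends needed to hit the target rate.
    static long delayMillisForRate(int messagesPerSecond) {
        return 1000L / messagesPerSecond;
    }

    public static void main(String[] args) throws InterruptedException {
        int rate = 10;                         // assumed lower default, msgs/sec
        long delay = delayMillisForRate(rate); // 100 ms between sends
        for (int i = 0; i < 3; i++) {
            // In the real example this would be a JMS send to TEST.FOO.
            System.out.println("sending message " + i + " to TEST.FOO");
            Thread.sleep(delay);
        }
    }
}
```

Slowing the default rate only delays the failure, of course; it does not address the underlying `No Data Handler` error.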
Happy for a PR if it fixes the issue.
closing as mq is being replaced with qpid
Running Fabric8 2.2.16 on native Docker 1.6.2 / OpenShift Origin 1.0.3:
Run the fabric8mq-producer with the fabric8-consumer offline to generate a large number (1000+) of messages on the TEST.FOO queue. Then start the consumer: it processes a few messages before the following error appears in the Fabric8MQ logs:
```
SEVERE: Unhandled exception
java.lang.IllegalStateException: No Data Handler
	at io.fabric8.mq.protocol.openwire.OpenWireReadStream.resume(OpenWireReadStream.java:93)
	at io.fabric8.mq.protocol.openwire.OpenWireTransport.resume(OpenWireTransport.java:204)
	at io.fabric8.mq.protocol.openwire.OpenWireTransport.resume(OpenWireTransport.java:49)
	at org.vertx.java.core.streams.Pump$1.handle(Pump.java:95)
	at org.vertx.java.core.streams.Pump$1.handle(Pump.java:93)
	at org.vertx.java.core.net.impl.DefaultNetSocket.callDrainHandler(DefaultNetSocket.java:284)
	at org.vertx.java.core.net.impl.DefaultNetSocket.handleInterestedOpsChanged(DefaultNetSocket.java:253)
	at org.vertx.java.core.net.impl.VertxHandler.channelWritabilityChanged(VertxHandler.java:75)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:391)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelWritabilityChanged(AbstractChannelHandlerContext.java:373)
	at io.netty.channel.DefaultChannelPipeline.fireChannelWritabilityChanged(DefaultChannelPipeline.java:802)
	at io.netty.channel.ChannelOutboundBuffer.decrementPendingOutboundBytes(ChannelOutboundBuffer.java:232)
	at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:921)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:370)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
	at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
	at java.lang.Thread.run(Thread.java:745)
```
It doesn't recover without kicking over the Fabric8MQ component, at which point it processes a few more messages and then bombs again.