Netflix / Turbine

SSE Stream Aggregator
Apache License 2.0

Turbine problem with two streams from two applications #95

Open ulisses79 opened 8 years ago

ulisses79 commented 8 years ago

Please help me with this issue. I have two applications running on the same two hosts. The streams are different for these apps because they have different sets of HystrixCommands. Each stream can be observed separately. This is my config file:

```
turbine.aggregator.clusterConfig=news,weather
turbine.ConfigPropertyBasedDiscovery.urls.instances=two_hosts
turbine.ConfigPropertyBasedDiscovery.news.instances=the_same_two_hosts
turbine.instanceUrlSuffix.news=/news/hystrix.stream
turbine.instanceUrlSuffix.urls=/weather/hystrix.stream
```

Is it correctly constructed?
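One thing that stands out (an observation, not a confirmed fix, since the real host values are redacted): the property keys mix two cluster names, `urls` and `news`, while `clusterConfig` declares `news` and `weather`. If the second cluster is meant to be `weather`, a consistently keyed sketch might look like this (host:port values are hypothetical placeholders):

```properties
turbine.aggregator.clusterConfig=news,weather

# Hypothetical hosts; substitute the real host:port pairs
turbine.ConfigPropertyBasedDiscovery.news.instances=host1:8080,host2:8080
turbine.ConfigPropertyBasedDiscovery.weather.instances=host1:8080,host2:8080

turbine.instanceUrlSuffix.news=/news/hystrix.stream
turbine.instanceUrlSuffix.weather=/weather/hystrix.stream
```

Note that both clusters list the same two hosts, which may matter given Turbine's per-host cluster handling.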

When I enter the first cluster in the dashboard, I can see metrics, but it does not work for the second cluster. When I change the order of the clusters in the config file, it is always the first one that works.

Any idea what I am doing wrong? Below is the stack trace:

```
ClientAbortException: java.net.SocketException: Broken pipe
    at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:330)
    at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:296)
    at org.apache.catalina.connector.Response.flushBuffer(Response.java:549)
    at org.apache.catalina.connector.ResponseFacade.flushBuffer(ResponseFacade.java:279)
    at com.netflix.turbine.streaming.servlet.SynchronizedHttpServletResponse.flushBuffer(SynchronizedHttpServletResponse.java:79)
    at com.netflix.turbine.streaming.servlet.TurbineStreamServlet$ServletStreamHandler.noData(TurbineStreamServlet.java:226)
    at com.netflix.turbine.streaming.TurbineStreamingConnection.waitOnConnection(TurbineStreamingConnection.java:216)
    at com.netflix.turbine.streaming.servlet.TurbineStreamServlet.streamFromCluster(TurbineStreamServlet.java:136)
    at com.netflix.turbine.streaming.servlet.TurbineStreamServlet.doGet(TurbineStreamServlet.java:98)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
    at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
    at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:602)
    at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.net.SocketException: Broken pipe
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
    at org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:756)
    at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:448)
    at org.apache.coyote.http11.InternalOutputBuffer.flush(InternalOutputBuffer.java:318)
    at org.apache.coyote.http11.Http11Processor.action(Http11Processor.java:985)
    at org.apache.coyote.Response.action(Response.java:183)
    at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:325)
    ... 22 more
```

jmaasing commented 8 years ago

That is by design; however, it is a bad design :-( I am having a similar issue. I use a config with one cluster spanning two hosts, which works fine. However, I also have a use case where I want the stream from each of the two hosts separately, so I set up three 'clusters' in the configuration: one cluster aggregating the two hosts, plus two fake clusters with one host each. But for each host, only the first cluster it appears in works. This used to work in Turbine 0.4 but was changed for 1.0 on the faulty assumption that a host can only participate in one cluster.
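For concreteness, a sketch of that three-'cluster' setup (cluster and host names here are hypothetical, not from my actual config):

```properties
turbine.aggregator.clusterConfig=aggregate,hostA,hostB

# One real cluster aggregating both hosts...
turbine.ConfigPropertyBasedDiscovery.aggregate.instances=hostA:8080,hostB:8080

# ...plus two single-host 'fake' clusters so each stream can be viewed alone.
turbine.ConfigPropertyBasedDiscovery.hostA.instances=hostA:8080
turbine.ConfigPropertyBasedDiscovery.hostB.instances=hostB:8080
```

With Turbine 1.x, only the first cluster a given host appears in produces data, which is exactly the behaviour described in this issue.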

jmaasing commented 8 years ago

Related to #88