wso2 / product-ei

An open-source, high-performance hybrid integration platform that allows developers to quickly integrate with any application, data, or system.
https://wso2.com/integration/
Apache License 2.0

[Intermittent] One analytics node gets stopped automatically with an exception #4302

Closed. sachithKay closed this issue 4 years ago.

sachithKay commented 5 years ago

Description: The following exceptions were seen in one analytics node after performing the steps mentioned in issue MB-1785. Configs are attached (the issue was seen on node 192.168.1.40).

TID: [-1] [] [2016-09-21 07:23:17,770]  INFO {org.wso2.carbon.event.processor.manager.core.internal.EventHandler} -  CEP sync publisher initiated to Member '192.168.1.41:11225' {org.wso2.carbon.event.processor.manager.core.internal.EventHandler}
TID: [-1] [] [2016-09-21 07:23:24,313]  INFO {org.wso2.carbon.analytics.dataservice.core.indexing.IndexNodeCoordinator} -  Indexing Initialized: CLUSTERED {0={47481063-fa23-4b48-9d99-b2a3fc518550=Member [192.168.1.40]:4010 this, 0543a174-2bd7-4130-a762-60f4ad43cbaa=Member [192.168.1.41]:4020}, 1={47481063-fa23-4b48-9d99-b2a3fc518550=Member [192.168.1.40]:4010 this, 0543a174-2bd7-4130-a762-60f4ad43cbaa=Member [192.168.1.41]:4020}, 2={47481063-fa23-4b48-9d99-b2a3fc518550=Member [192.168.1.40]:4010 this, 0543a174-2bd7-4130-a762-60f4ad43cbaa=Member [192.168.1.41]:4020}, 3={47481063-fa23-4b48-9d99-b2a3fc518550=Member [192.168.1.40]:4010 this, 0543a174-2bd7-4130-a762-60f4ad43cbaa=Member [192.168.1.41]:4020}, 4={47481063-fa23-4b48-9d99-b2a3fc518550=Member [192.168.1.40]:4010 this, 0543a174-2bd7-4130-a762-60f4ad43cbaa=Member [192.168.1.41]:4020}, 5={47481063-fa23-4b48-9d99-b2a3fc518550=Member [192.168.1.40]:4010 this, 0543a174-2bd7-4130-a762-60f4ad43cbaa=Member [192.168.1.41]:4020}} | Current Node Indexing: Yes {org.wso2.carbon.analytics.dataservice.core.indexing.IndexNodeCoordinator}
TID: [-1] [] [2016-09-21 07:23:35,597]  INFO {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} -  Member change, remove: false {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor}
TID: [-1] [] [2016-09-21 07:23:35,607]  INFO {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} -  Analytics worker updated, total count: 2 {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor}
TID: [-1] [] [2016-09-21 07:23:35,619]  INFO {org.wso2.carbon.analytics.spark.core.deploy.CheckElectedLeaderExecutionCall} -  Check Elected Leader Request: true {org.wso2.carbon.analytics.spark.core.deploy.CheckElectedLeaderExecutionCall}
TID: [-1] [] [2016-09-21 07:23:35,676]  INFO {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} -  Started SPARK WORKER in 192.168.1.40:11001 with webUI port 11501 with Masters [spark://192.168.1.41:7078, spark://192.168.1.40:7078] {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor}
TID: [-1] [] [2016-09-21 07:23:36,097]  INFO {Remoting} -  Starting remoting {Remoting}
TID: [-1] [] [2016-09-21 07:23:36,295]  INFO {Remoting} -  Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.1.40:13002] {Remoting}
TID: [-1] [] [2016-09-21 07:23:37,676]  INFO {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} -  Started Spark CLIENT in the cluster pointing to MASTER spark://192.168.1.41:7078,192.168.1.40:7078 with the application name : CarbonAnalytics and UI port : 4041 {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor}
TID: [-1] [] [2016-09-21 07:24:50,201]  INFO {org.wso2.carbon.databridge.core.DataBridge} -  user MBadmin320 connected {org.wso2.carbon.databridge.core.DataBridge}
TID: [-1] [] [2016-09-21 07:35:41,718]  INFO {org.wso2.carbon.databridge.core.DataBridge} -  user MBadmin320 connected {org.wso2.carbon.databridge.core.DataBridge}
TID: [-1234] [] [2016-09-21 07:36:59,843]  INFO {org.wso2.siddhi.core.util.snapshot.SnapshotService} -  Snapshot taken of Execution Plan 'MbAnalytics-ExecutionPlan-Statistic' {org.wso2.siddhi.core.util.snapshot.SnapshotService}
TID: [-1234] [] [2016-09-21 07:50:31,559]  INFO {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil} -  'MBadmin320@carbon.super [-1234]' logged in at [2016-09-21 07:50:31,558-0400] {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil}
TID: [-1234] [] [2016-09-21 07:50:34,047]  INFO {org.apache.shindig.gadgets.http.BasicHttpFetcher} -  The following exception occurred when fetching https://192.168.1.40:9444/shindig/rpc?method=system.listMethods&st=-1:-1:0:*:0:0:default:1474458643: 155 ms elapsed. {org.apache.shindig.gadgets.http.BasicHttpFetcher}
TID: [-1234] [] [2016-09-21 07:50:34,049]  INFO {org.apache.shindig.gadgets.http.BasicHttpFetcher} -   {org.apache.shindig.gadgets.http.BasicHttpFetcher}
javax.net.ssl.SSLException: hostname in certificate didn't match: <192.168.1.40> != <localhost>
    at org.apache.http.conn.ssl.AbstractVerifier.verify(AbstractVerifier.java:238)
    at org.apache.http.conn.ssl.BrowserCompatHostnameVerifier.verify(BrowserCompatHostnameVerifier.java:54)
    at org.apache.http.conn.ssl.AbstractVerifier.verify(AbstractVerifier.java:159)
    at org.apache.http.conn.ssl.AbstractVerifier.verify(AbstractVerifier.java:140)
    at org.apache.http.conn.ssl.SSLSocketFactory.verifyHostname(SSLSocketFactory.java:561)
    at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:536)
    at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:403)
    at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
    at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:144)
    at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:131)
    at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
    at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
    at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:115)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
    at org.apache.shindig.gadgets.http.BasicHttpFetcher.fetch(BasicHttpFetcher.java:359)
    at org.apache.shindig.gadgets.render.DefaultServiceFetcher.retrieveServices(DefaultServiceFetcher.java:164)
    at org.apache.shindig.gadgets.render.DefaultServiceFetcher.getServicesForContainer(DefaultServiceFetcher.java:135)
    at org.apache.shindig.gadgets.render.DefaultRpcServiceLookup$1.call(DefaultRpcServiceLookup.java:80)
    at org.apache.shindig.gadgets.render.DefaultRpcServiceLookup$1.call(DefaultRpcServiceLookup.java:78)
    at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4742)
    at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3527)
    at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2319)
    at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2282)
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2197)
    at com.google.common.cache.LocalCache.get(LocalCache.java:3937)
    at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4739)
    at org.apache.shindig.gadgets.render.DefaultRpcServiceLookup.getServicesFor(DefaultRpcServiceLookup.java:77)
    at org.apache.shindig.gadgets.config.OsapiServicesConfigContributor.addServicesConfig(OsapiServicesConfigContributor.java:69)
    at org.apache.shindig.gadgets.config.OsapiServicesConfigContributor.contribute(OsapiServicesConfigContributor.java:58)
    at org.apache.shindig.gadgets.config.DefaultConfigProcessor.contribute(DefaultConfigProcessor.java:84)
    at org.apache.shindig.gadgets.config.DefaultConfigProcessor.getConfig(DefaultConfigProcessor.java:74)
    at org.apache.shindig.gadgets.js.ConfigInjectionProcessor.process(ConfigInjectionProcessor.java:82)
    at org.apache.shindig.gadgets.js.DefaultJsProcessorRegistry.process(DefaultJsProcessorRegistry.java:56)
    at org.apache.shindig.gadgets.js.DefaultJsServingPipeline.execute(DefaultJsServingPipeline.java:41)
    at org.apache.shindig.gadgets.servlet.JsServlet.doGet(JsServlet.java:94)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:624)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.apache.shindig.gadgets.servlet.ETagFilter.doFilter(ETagFilter.java:56)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.apache.shindig.common.servlet.HostFilter.doFilter(HostFilter.java:38)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:120)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:218)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:505)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:169)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
    at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:99)
    at org.wso2.carbon.tomcat.ext.valves.CarbonTomcatValve$1.invoke(CarbonTomcatValve.java:47)
    at org.wso2.carbon.webapp.mgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:57)
    at org.wso2.carbon.event.receiver.core.internal.tenantmgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:48)
    at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:47)
    at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62)
    at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:159)
    at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:956)
    at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:442)
    at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1082)
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:623)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1756)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1715)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:745)
TID: [-1234] [] [2016-09-21 07:50:34,052] ERROR {org.apache.shindig.gadgets.render.DefaultServiceFetcher} -  Services methods from the https://192.168.1.40:9444/shindig/rpc endpoint could not be fetched. The following error occurred: javax.net.ssl.SSLException: hostname in certificate didn't match: <192.168.1.40> != <localhost>. {org.apache.shindig.gadgets.render.DefaultServiceFetcher}
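
For context: the SSLException above appears unrelated to the shutdown itself. The default wso2carbon keystore certificate is issued to CN=localhost, so when the portal's Shindig fetcher calls https://192.168.1.40:9444, Apache HttpClient's hostname verifier rejects the connection. Below is a minimal sketch, assuming HttpClient 4.x on the classpath (the same BrowserCompatHostnameVerifier that appears in the trace); the class name HostnameMismatchDemo is illustrative, not part of the product:

// Minimal sketch: reproduce the verifier failure from the trace above.
// Assumes Apache HttpClient 4.x; HostnameMismatchDemo is a made-up name.
import javax.net.ssl.SSLException;
import org.apache.http.conn.ssl.BrowserCompatHostnameVerifier;

public class HostnameMismatchDemo {
    public static void main(String[] args) {
        BrowserCompatHostnameVerifier verifier = new BrowserCompatHostnameVerifier();
        try {
            // cns = common names taken from the certificate; no subjectAltNames,
            // mirroring a default wso2carbon certificate issued to CN=localhost.
            verifier.verify("192.168.1.40", new String[] {"localhost"}, null);
        } catch (SSLException e) {
            // Should print: hostname in certificate didn't match: <192.168.1.40> != <localhost>
            System.out.println(e.getMessage());
        }
    }
}

Regenerating the keystore with a CN or subjectAltName matching the node's IP/hostname should silence this particular error, but it is not the cause of the node stopping.
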
TID: [-1234] [] [2016-09-21 07:51:59,842]  INFO {org.wso2.siddhi.core.util.snapshot.SnapshotService} -  Snapshot taken of Execution Plan 'MbAnalytics-ExecutionPlan-Statistic' {org.wso2.siddhi.core.util.snapshot.SnapshotService}
TID: [-1] [] [2016-09-21 07:55:49,936]  INFO {org.wso2.carbon.databridge.core.DataBridge} -  user MBadmin320 connected {org.wso2.carbon.databridge.core.DataBridge}
TID: [-1] [] [2016-09-21 08:06:41,445]  INFO {org.wso2.carbon.databridge.core.DataBridge} -  user MBadmin320 connected {org.wso2.carbon.databridge.core.DataBridge}
TID: [-1234] [] [2016-09-21 08:06:59,842]  INFO {org.wso2.siddhi.core.util.snapshot.SnapshotService} -  Snapshot taken of Execution Plan 'MbAnalytics-ExecutionPlan-Statistic' {org.wso2.siddhi.core.util.snapshot.SnapshotService}
TID: [-1234] [] [2016-09-21 08:21:59,842]  INFO {org.wso2.siddhi.core.util.snapshot.SnapshotService} -  Snapshot taken of Execution Plan 'MbAnalytics-ExecutionPlan-Statistic' {org.wso2.siddhi.core.util.snapshot.SnapshotService}
TID: [-1] [] [2016-09-21 08:26:49,948]  INFO {org.wso2.carbon.databridge.core.DataBridge} -  user MBadmin320 connected {org.wso2.carbon.databridge.core.DataBridge}
TID: [-1234] [] [2016-09-21 08:36:59,842]  INFO {org.wso2.siddhi.core.util.snapshot.SnapshotService} -  Snapshot taken of Execution Plan 'MbAnalytics-ExecutionPlan-Statistic' {org.wso2.siddhi.core.util.snapshot.SnapshotService}
TID: [-1234] [] [2016-09-21 08:37:41,510]  INFO {org.wso2.carbon.databridge.core.DataBridge} -  user MBadmin320 connected {org.wso2.carbon.databridge.core.DataBridge}
TID: [-1234] [] [2016-09-21 08:51:59,842]  INFO {org.wso2.siddhi.core.util.snapshot.SnapshotService} -  Snapshot taken of Execution Plan 'MbAnalytics-ExecutionPlan-Statistic' {org.wso2.siddhi.core.util.snapshot.SnapshotService}
TID: [-1234] [] [2016-09-21 08:57:50,040]  INFO {org.wso2.carbon.databridge.core.DataBridge} -  user MBadmin320 connected {org.wso2.carbon.databridge.core.DataBridge}
TID: [-1234] [] [2016-09-21 09:06:59,842]  INFO {org.wso2.siddhi.core.util.snapshot.SnapshotService} -  Snapshot taken of Execution Plan 'MbAnalytics-ExecutionPlan-Statistic' {org.wso2.siddhi.core.util.snapshot.SnapshotService}
TID: [-1234] [] [2016-09-21 09:08:41,453]  INFO {org.wso2.carbon.databridge.core.DataBridge} -  user MBadmin320 connected {org.wso2.carbon.databridge.core.DataBridge}
TID: [-1234] [] [2016-09-21 09:21:59,842]  INFO {org.wso2.siddhi.core.util.snapshot.SnapshotService} -  Snapshot taken of Execution Plan 'MbAnalytics-ExecutionPlan-Statistic' {org.wso2.siddhi.core.util.snapshot.SnapshotService}
TID: [-1234] [] [2016-09-21 09:28:49,971]  INFO {org.wso2.carbon.databridge.core.DataBridge} -  user MBadmin320 connected {org.wso2.carbon.databridge.core.DataBridge}
TID: [-1234] [] [2016-09-21 09:36:59,842]  INFO {org.wso2.siddhi.core.util.snapshot.SnapshotService} -  Snapshot taken of Execution Plan 'MbAnalytics-ExecutionPlan-Statistic' {org.wso2.siddhi.core.util.snapshot.SnapshotService}
TID: [-1234] [] [2016-09-21 09:39:41,472]  INFO {org.wso2.carbon.databridge.core.DataBridge} -  user MBadmin320 connected {org.wso2.carbon.databridge.core.DataBridge}
TID: [-1234] [] [2016-09-21 09:51:59,842]  INFO {org.wso2.siddhi.core.util.snapshot.SnapshotService} -  Snapshot taken of Execution Plan 'MbAnalytics-ExecutionPlan-Statistic' {org.wso2.siddhi.core.util.snapshot.SnapshotService}
TID: [-1234] [] [2016-09-21 09:59:50,063]  INFO {org.wso2.carbon.databridge.core.DataBridge} -  user MBadmin320 connected {org.wso2.carbon.databridge.core.DataBridge}
TID: [-1234] [] [2016-09-21 10:06:59,842]  INFO {org.wso2.siddhi.core.util.snapshot.SnapshotService} -  Snapshot taken of Execution Plan 'MbAnalytics-ExecutionPlan-Statistic' {org.wso2.siddhi.core.util.snapshot.SnapshotService}
TID: [-1234] [] [2016-09-21 10:09:44,160]  INFO {org.wso2.carbon.databridge.core.DataBridge} -  user MBadmin320 connected {org.wso2.carbon.databridge.core.DataBridge}
TID: [-1234] [] [2016-09-21 10:21:59,842]  INFO {org.wso2.siddhi.core.util.snapshot.SnapshotService} -  Snapshot taken of Execution Plan 'MbAnalytics-ExecutionPlan-Statistic' {org.wso2.siddhi.core.util.snapshot.SnapshotService}
TID: [-1234] [] [2016-09-21 10:30:50,007]  INFO {org.wso2.carbon.databridge.core.DataBridge} -  user MBadmin320 connected {org.wso2.carbon.databridge.core.DataBridge}
TID: [-1234] [] [2016-09-21 10:36:59,842]  INFO {org.wso2.siddhi.core.util.snapshot.SnapshotService} -  Snapshot taken of Execution Plan 'MbAnalytics-ExecutionPlan-Statistic' {org.wso2.siddhi.core.util.snapshot.SnapshotService}
TID: [-1234] [] [2016-09-21 10:40:47,810]  INFO {org.wso2.carbon.databridge.core.DataBridge} -  user MBadmin320 connected {org.wso2.carbon.databridge.core.DataBridge}
TID: [-1234] [] [2016-09-21 10:51:59,842]  INFO {org.wso2.siddhi.core.util.snapshot.SnapshotService} -  Snapshot taken of Execution Plan 'MbAnalytics-ExecutionPlan-Statistic' {org.wso2.siddhi.core.util.snapshot.SnapshotService}
TID: [-1234] [] [2016-09-21 11:01:49,980]  INFO {org.wso2.carbon.databridge.core.DataBridge} -  user MBadmin320 connected {org.wso2.carbon.databridge.core.DataBridge}
TID: [-1234] [] [2016-09-21 11:06:59,842]  INFO {org.wso2.siddhi.core.util.snapshot.SnapshotService} -  Snapshot taken of Execution Plan 'MbAnalytics-ExecutionPlan-Statistic' {org.wso2.siddhi.core.util.snapshot.SnapshotService}
TID: [-1234] [] [2016-09-21 11:10:51,339]  INFO {org.wso2.carbon.databridge.core.DataBridge} -  user MBadmin320 connected {org.wso2.carbon.databridge.core.DataBridge}
TID: [-1234] [] [2016-09-21 11:15:13,893]  INFO {org.wso2.carbon.core.init.CarbonServerManager} -  Shutdown hook triggered.... {org.wso2.carbon.core.init.CarbonServerManager}
TID: [-1234] [] [2016-09-21 11:15:13,909]  INFO {org.wso2.carbon.core.init.CarbonServerManager} -  Gracefully shutting down Analytics for WSO2 MB... {org.wso2.carbon.core.init.CarbonServerManager}
TID: [-1234] [] [2016-09-21 11:15:13,928]  INFO {org.wso2.carbon.core.ServerManagement} -  Starting to switch to maintenance mode... {org.wso2.carbon.core.ServerManagement}
TID: [-1234] [] [2016-09-21 11:15:13,970]  INFO {org.wso2.carbon.core.ServerManagement} -  Stopped all transport listeners {org.wso2.carbon.core.ServerManagement}
TID: [-1234] [] [2016-09-21 11:15:13,970]  INFO {org.wso2.carbon.core.ServerManagement} -  Waiting for request service completion... {org.wso2.carbon.core.ServerManagement}
TID: [-1234] [] [2016-09-21 11:15:13,991]  INFO {org.wso2.carbon.core.ServerManagement} -  All requests have been served. {org.wso2.carbon.core.ServerManagement}
TID: [-1234] [] [2016-09-21 11:15:13,992]  INFO {org.wso2.carbon.core.ServerManagement} -  Waiting for deployment completion... {org.wso2.carbon.core.ServerManagement}
TID: [-1234] [] [2016-09-21 11:15:14,010]  INFO {org.springframework.web.context.support.XmlWebApplicationContext} -  Closing Root WebApplicationContext: startup date [Wed Sep 21 07:22:01 EDT 2016]; root of context hierarchy {org.springframework.web.context.support.XmlWebApplicationContext}
TID: [-1234] [] [2016-09-21 11:15:14,021]  INFO {org.springframework.beans.factory.support.DefaultListableBeanFactory} -  Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@1bebf24: defining beans [cxf,org.apache.cxf.bus.spring.BusWiringBeanFactoryPostProcessor,org.apache.cxf.bus.spring.Jsr250BeanPostProcessor,org.apache.cxf.bus.spring.BusExtensionPostProcessor,analyticsServices,analyticsBean,analyticsExceptionMapper,analyticsIndexExceptionMapper,analyticsTableNotFoundExceptionMapper,analyticsTimeoutExceptionMapper,jacksonJsonProvider,httpHeadersOutInterceptor]; root of factory hierarchy {org.springframework.beans.factory.support.DefaultListableBeanFactory}
TID: [-1234] [] [2016-09-21 11:15:14,093]  INFO {org.wso2.carbon.webapp.mgt.WebApplication} -  Unloaded webapp: StandardEngine[Catalina].StandardHost[localhost].StandardContext[/analytics] {org.wso2.carbon.webapp.mgt.WebApplication}
TID: [-1234] [] [2016-09-21 11:15:14,179]  INFO {org.wso2.carbon.webapp.mgt.WebApplication} -  Unloaded webapp: StandardEngine[Catalina].StandardHost[localhost].StandardContext[/shindig] {org.wso2.carbon.webapp.mgt.WebApplication}
TID: [-1234] [] [2016-09-21 11:15:14,197]  INFO {org.wso2.carbon.webapp.mgt.WebApplication} -  Unloaded webapp: StandardEngine[Catalina].StandardHost[localhost].StandardContext[/outputwebsocket] {org.wso2.carbon.webapp.mgt.WebApplication}
TID: [-1234] [] [2016-09-21 11:15:14,209]  INFO {org.wso2.carbon.webapp.mgt.WebApplication} -  Unloaded webapp: StandardEngine[Catalina].StandardHost[localhost].StandardContext[/inputwebsocket] {org.wso2.carbon.webapp.mgt.WebApplication}
TID: [-1234] [] [2016-09-21 11:15:14,224]  INFO {org.wso2.carbon.webapp.mgt.WebApplication} -  Unloaded webapp: StandardEngine[Catalina].StandardHost[localhost].StandardContext[/portal] {org.wso2.carbon.webapp.mgt.WebApplication}
TID: [-1234] [] [2016-09-21 11:15:14,224]  INFO {org.wso2.carbon.core.ServerManagement} -  All deployment tasks have been completed. {org.wso2.carbon.core.ServerManagement}
TID: [-1234] [] [2016-09-21 11:15:14,224]  INFO {org.wso2.carbon.core.ServerManagement} -  Waiting for server task completion... {org.wso2.carbon.core.ServerManagement}
TID: [-1234] [] [2016-09-21 11:15:14,251]  INFO {org.wso2.carbon.registry.core.internal.RegistryCoreServiceComponent} -  Writing logs  {org.wso2.carbon.registry.core.internal.RegistryCoreServiceComponent}
TID: [-1234] [] [2016-09-21 11:15:14,256]  INFO {org.wso2.carbon.core.ServerManagement} -  All server tasks have been completed. {org.wso2.carbon.core.ServerManagement}
TID: [-1234] [] [2016-09-21 11:15:14,256]  INFO {org.wso2.carbon.core.init.CarbonServerManager} -  Shutting down Analytics for WSO2 MB... {org.wso2.carbon.core.init.CarbonServerManager}
TID: [-1234] [] [2016-09-21 11:15:14,256]  INFO {org.wso2.carbon.core.init.CarbonServerManager} -  Shutting down OSGi framework... {org.wso2.carbon.core.init.CarbonServerManager}
TID: [-1234] [] [2016-09-21 11:15:14,318]  INFO {org.wso2.carbon.databridge.receiver.binary.internal.BinaryDataReceiverServiceComponent} -  Binary Data Receiver server shutting down... {org.wso2.carbon.databridge.receiver.binary.internal.BinaryDataReceiverServiceComponent}
TID: [-1234] [] [2016-09-21 11:15:14,318]  INFO {org.wso2.carbon.databridge.receiver.binary.internal.BinaryDataReceiver} -  Stopping Binary Server.. {org.wso2.carbon.databridge.receiver.binary.internal.BinaryDataReceiver}
TID: [-1234] [] [2016-09-21 11:15:14,321]  INFO {org.wso2.carbon.databridge.receiver.thrift.internal.ThriftDataReceiverDS} -  Thrift server shutting down... {org.wso2.carbon.databridge.receiver.thrift.internal.ThriftDataReceiverDS}
TID: [-1234] [] [2016-09-21 11:15:14,349]  INFO {org.wso2.carbon.event.receiver.core.EventReceiverDeployer} -  Event Receiver undeployed successfully: Metrics-Receiver-Wso2event-Histogram.xml {org.wso2.carbon.event.receiver.core.EventReceiverDeployer}
TID: [-1234] [] [2016-09-21 11:15:14,353]  INFO {org.wso2.carbon.event.receiver.core.EventReceiverDeployer} -  Event receiver deployment held back and in inactive state: Metrics-Receiver-Wso2event-Histogram.xml, waiting for Input Event Adapter dependency: wso2event {org.wso2.carbon.event.receiver.core.EventReceiverDeployer}
TID: [-1234] [] [2016-09-21 11:15:14,353]  INFO {org.wso2.carbon.event.receiver.core.internal.CarbonEventReceiverService} -  Event receiver : Metrics-Receiver-Wso2event-Histogram in inactive state because dependency could not be found: wso2event {org.wso2.carbon.event.receiver.core.internal.CarbonEventReceiverService}
TID: [-1234] [] [2016-09-21 11:15:14,353]  INFO {org.wso2.carbon.event.receiver.core.EventReceiverDeployer} -  Event Receiver undeployed successfully: Metrics-Receiver-Wso2event-Meter.xml {org.wso2.carbon.event.receiver.core.EventReceiverDeployer}
TID: [-1234] [] [2016-09-21 11:15:14,355]  INFO {org.wso2.carbon.event.receiver.core.EventReceiverDeployer} -  Event receiver deployment held back and in inactive state: Metrics-Receiver-Wso2event-Meter.xml, waiting for Input Event Adapter dependency: wso2event {org.wso2.carbon.event.receiver.core.EventReceiverDeployer}
TID: [-1234] [] [2016-09-21 11:15:14,355]  INFO {org.wso2.carbon.event.receiver.core.internal.CarbonEventReceiverService} -  Event receiver : Metrics-Receiver-Wso2event-Meter in inactive state because dependency could not be found: wso2event {org.wso2.carbon.event.receiver.core.internal.CarbonEventReceiverService}
TID: [-1234] [] [2016-09-21 11:15:14,355]  INFO {org.wso2.carbon.event.receiver.core.EventReceiverDeployer} -  Event Receiver undeployed successfully: Metrics-Receiver-Wso2event-Timer.xml {org.wso2.carbon.event.receiver.core.EventReceiverDeployer}
TID: [-1234] [] [2016-09-21 11:15:14,356]  INFO {org.wso2.carbon.event.receiver.core.EventReceiverDeployer} -  Event receiver deployment held back and in inactive state: Metrics-Receiver-Wso2event-Timer.xml, waiting for Input Event Adapter dependency: wso2event {org.wso2.carbon.event.receiver.core.EventReceiverDeployer}
TID: [-1234] [] [2016-09-21 11:15:14,356]  INFO {org.wso2.carbon.event.receiver.core.internal.CarbonEventReceiverService} -  Event receiver : Metrics-Receiver-Wso2event-Timer in inactive state because dependency could not be found: wso2event {org.wso2.carbon.event.receiver.core.internal.CarbonEventReceiverService}
TID: [-1234] [] [2016-09-21 11:15:14,356]  INFO {org.wso2.carbon.event.receiver.core.EventReceiverDeployer} -  Event Receiver undeployed successfully: MbAnalytics-Receiver-Wso2event-LogAnalyzer.xml {org.wso2.carbon.event.receiver.core.EventReceiverDeployer}
TID: [-1234] [] [2016-09-21 11:15:14,358]  INFO {org.wso2.carbon.event.receiver.core.EventReceiverDeployer} -  Event receiver deployment held back and in inactive state: MbAnalytics-Receiver-Wso2event-LogAnalyzer.xml, waiting for Input Event Adapter dependency: wso2event {org.wso2.carbon.event.receiver.core.EventReceiverDeployer}
TID: [-1234] [] [2016-09-21 11:15:14,358]  INFO {org.wso2.carbon.event.receiver.core.internal.CarbonEventReceiverService} -  Event receiver : MbAnalytics-Receiver-Wso2event-LogAnalyzer in inactive state because dependency could not be found: wso2event {org.wso2.carbon.event.receiver.core.internal.CarbonEventReceiverService}
TID: [-1234] [] [2016-09-21 11:15:14,358]  INFO {org.wso2.carbon.event.receiver.core.EventReceiverDeployer} -  Event Receiver undeployed successfully: Metrics-Receiver-Wso2event-Gauge.xml {org.wso2.carbon.event.receiver.core.EventReceiverDeployer}
TID: [-1234] [] [2016-09-21 11:15:14,359]  INFO {org.wso2.carbon.event.receiver.core.EventReceiverDeployer} -  Event receiver deployment held back and in inactive state: Metrics-Receiver-Wso2event-Gauge.xml, waiting for Input Event Adapter dependency: wso2event {org.wso2.carbon.event.receiver.core.EventReceiverDeployer}
TID: [-1234] [] [2016-09-21 11:15:14,359]  INFO {org.wso2.carbon.event.receiver.core.internal.CarbonEventReceiverService} -  Event receiver : Metrics-Receiver-Wso2event-Gauge in inactive state because dependency could not be found: wso2event {org.wso2.carbon.event.receiver.core.internal.CarbonEventReceiverService}
TID: [-1234] [] [2016-09-21 11:15:14,360]  INFO {org.wso2.carbon.event.receiver.core.EventReceiverDeployer} -  Event Receiver undeployed successfully: Metrics-Receiver-Wso2event-Counter.xml {org.wso2.carbon.event.receiver.core.EventReceiverDeployer}
TID: [-1234] [] [2016-09-21 11:15:14,361]  INFO {org.wso2.carbon.event.receiver.core.EventReceiverDeployer} -  Event receiver deployment held back and in inactive state: Metrics-Receiver-Wso2event-Counter.xml, waiting for Input Event Adapter dependency: wso2event {org.wso2.carbon.event.receiver.core.EventReceiverDeployer}
TID: [-1234] [] [2016-09-21 11:15:14,361]  INFO {org.wso2.carbon.event.receiver.core.internal.CarbonEventReceiverService} -  Event receiver : Metrics-Receiver-Wso2event-Counter in inactive state because dependency could not be found: wso2event {org.wso2.carbon.event.receiver.core.internal.CarbonEventReceiverService}
TID: [-1234] [] [2016-09-21 11:15:14,446]  INFO {org.wso2.carbon.event.processor.core.internal.CarbonEventProcessorService} -  Successfully shutdown ExecutionPlans {org.wso2.carbon.event.processor.core.internal.CarbonEventProcessorService}
TID: [-1234] [] [2016-09-21 11:15:14,451]  INFO {org.wso2.carbon.core.init.CarbonServerManager} -  Stopping CarbonServerManager... {org.wso2.carbon.core.init.CarbonServerManager}
TID: [-1] [] [2016-09-21 11:15:15,055] ERROR {org.apache.spark.scheduler.TaskSchedulerImpl} -  Lost executor 0 on localhost: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages. {org.apache.spark.scheduler.TaskSchedulerImpl}
TID: [-1] [] [2016-09-21 11:15:15,111]  WARN {org.apache.spark.scheduler.TaskSetManager} -  Lost task 0.0 in stage 559864.0 (TID 393962, localhost): ExecutorLostFailure (executor 0 exited caused by one of the running tasks) Reason: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages. {org.apache.spark.scheduler.TaskSetManager}
TID: [-1] [] [2016-09-21 11:15:15,155] ERROR {org.apache.spark.deploy.worker.Worker} -  Failed to launch executor app-20160921072336-0000/2 for CarbonAnalytics. {org.apache.spark.deploy.worker.Worker}
java.lang.IllegalStateException: Shutdown hooks cannot be modified during shutdown.
    at org.apache.spark.util.SparkShutdownHookManager.add(ShutdownHookManager.scala:246)
    at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:191)
    at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:180)
    at org.apache.spark.deploy.worker.ExecutorRunner.start(ExecutorRunner.scala:75)
    at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:472)
    at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116)
    at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
    at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
    at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
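
Note the sequence here: the Carbon shutdown hook fires at 11:15:13, and while the JVM is running its shutdown hooks the embedded Spark worker is still being asked to launch replacement executors. ExecutorRunner.start() tries to register a new shutdown hook, and Spark's SparkShutdownHookManager (ShutdownHookManager.scala:246 in the trace) forbids that once shutdown has begun, hence the repeated IllegalStateException. A minimal JDK-only sketch of the same race follows (the plain JDK throws the analogous "Shutdown in progress" message; Spark's manager raises its own wording); ShutdownHookDemo is an illustrative name, not WSO2/Spark code:

// Minimal, JDK-only sketch: adding a shutdown hook while the JVM is already
// running its hooks is rejected, the same race ExecutorRunner.start() loses.
public class ShutdownHookDemo {
    public static void main(String[] args) {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try {
                // We are now inside JVM shutdown; new hooks are forbidden.
                Runtime.getRuntime().addShutdownHook(new Thread(() -> {}));
            } catch (IllegalStateException e) {
                System.out.println(e.getMessage()); // "Shutdown in progress"
            }
        }));
        // main returning triggers JVM shutdown, which runs the hook above.
    }
}

So the executor-launch failures that follow look like a symptom of the node already shutting down, not an independent fault.
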
TID: [-1] [] [2016-09-21 11:15:15,155]  WARN {org.apache.spark.deploy.master.Master} -  Got status update for unknown executor app-20160921072336-0000/0 {org.apache.spark.deploy.master.Master}
TID: [-1] [] [2016-09-21 11:15:15,165] ERROR {org.apache.spark.deploy.worker.Worker} -  Failed to launch executor app-20160921072336-0000/3 for CarbonAnalytics. {org.apache.spark.deploy.worker.Worker}
java.lang.IllegalStateException: Shutdown hooks cannot be modified during shutdown.
    at org.apache.spark.util.SparkShutdownHookManager.add(ShutdownHookManager.scala:246)
    at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:191)
    at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:180)
    at org.apache.spark.deploy.worker.ExecutorRunner.start(ExecutorRunner.scala:75)
    at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:472)
    at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116)
    at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
    at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
    at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
TID: [-1] [] [2016-09-21 11:15:15,173] ERROR {org.apache.spark.deploy.worker.Worker} -  Failed to launch executor app-20160921072336-0000/4 for CarbonAnalytics. {org.apache.spark.deploy.worker.Worker}
java.lang.IllegalStateException: Shutdown hooks cannot be modified during shutdown.
    at org.apache.spark.util.SparkShutdownHookManager.add(ShutdownHookManager.scala:246)
    at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:191)
    at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:180)
    at org.apache.spark.deploy.worker.ExecutorRunner.start(ExecutorRunner.scala:75)
    at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:472)
    at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116)
    at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
    at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
    at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
TID: [-1] [] [2016-09-21 11:15:15,182] ERROR {org.apache.spark.deploy.worker.Worker} -  Failed to launch executor app-20160921072336-0000/5 for CarbonAnalytics. {org.apache.spark.deploy.worker.Worker}
java.lang.IllegalStateException: Shutdown hooks cannot be modified during shutdown.
    at org.apache.spark.util.SparkShutdownHookManager.add(ShutdownHookManager.scala:246)
    at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:191)
    at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:180)
    at org.apache.spark.deploy.worker.ExecutorRunner.start(ExecutorRunner.scala:75)
    at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:472)
    at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116)
    at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
    at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
    at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
TID: [-1] [] [2016-09-21 11:15:15,188] ERROR {org.apache.spark.deploy.worker.Worker} -  Failed to launch executor app-20160921072336-0000/6 for CarbonAnalytics. {org.apache.spark.deploy.worker.Worker}
java.lang.IllegalStateException: Shutdown hooks cannot be modified during shutdown.
    at org.apache.spark.util.SparkShutdownHookManager.add(ShutdownHookManager.scala:246)
    at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:191)
    at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:180)
    at org.apache.spark.deploy.worker.ExecutorRunner.start(ExecutorRunner.scala:75)
    at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:472)
    at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116)
    at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
    at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
    at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
TID: [-1] [] [2016-09-21 11:15:15,211] ERROR {org.apache.spark.deploy.worker.Worker} -  Failed to launch executor app-20160921072336-0000/7 for CarbonAnalytics. {org.apache.spark.deploy.worker.Worker}
java.lang.IllegalStateException: Shutdown hooks cannot be modified during shutdown.
    at org.apache.spark.util.SparkShutdownHookManager.add(ShutdownHookManager.scala:246)
    at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:191)
    at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:180)
    at org.apache.spark.deploy.worker.ExecutorRunner.start(ExecutorRunner.scala:75)
    at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:472)
    at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116)
    at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
    at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
    at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
TID: [-1] [] [2016-09-21 11:15:15,251] ERROR {org.apache.spark.deploy.worker.Worker} -  Failed to launch executor app-20160921072336-0000/8 for CarbonAnalytics. {org.apache.spark.deploy.worker.Worker}
java.lang.IllegalStateException: Shutdown hooks cannot be modified during shutdown.
    at org.apache.spark.util.SparkShutdownHookManager.add(ShutdownHookManager.scala:246)
    at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:191)
    at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:180)
    at org.apache.spark.deploy.worker.ExecutorRunner.start(ExecutorRunner.scala:75)
    at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:472)
    at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116)
    at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
    at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
    at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
TID: [-1] [] [2016-09-21 11:15:15,242]  WARN {org.apache.spark.scheduler.TaskSetManager} -  Lost task 0.1 in stage 559864.0 (TID 394124, 192.168.1.41): FetchFailed(null, shuffleId=1935, mapId=-1, reduceId=12, message=
org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 1935
    at org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$2.apply(MapOutputTracker.scala:548)
    at org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$2.apply(MapOutputTracker.scala:544)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
    at org.apache.spark.MapOutputTracker$.org$apache$spark$MapOutputTracker$$convertMapStatuses(MapOutputTracker.scala:544)
    at org.apache.spark.MapOutputTracker.getMapSizesByExecutorId(MapOutputTracker.scala:155)
    at org.apache.spark.shuffle.BlockStoreShuffleReader.read(BlockStoreShuffleReader.scala:47)
    at org.apache.spark.sql.execution.ShuffledRowRDD.compute(ShuffledRowRDD.scala:166)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

) {org.apache.spark.scheduler.TaskSetManager}
TID: [-1] [] [2016-09-21 11:15:15,262] ERROR {org.apache.spark.scheduler.LiveListenerBus} -  SparkListenerBus has already stopped! Dropping event SparkListenerTaskEnd(559864,0,ResultTask,FetchFailed(null,1935,-1,12,org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 1935
    at org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$2.apply(MapOutputTracker.scala:548)
    at org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$2.apply(MapOutputTracker.scala:544)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
    at org.apache.spark.MapOutputTracker$.org$apache$spark$MapOutputTracker$$convertMapStatuses(MapOutputTracker.scala:544)
    at org.apache.spark.MapOutputTracker.getMapSizesByExecutorId(MapOutputTracker.scala:155)
    at org.apache.spark.shuffle.BlockStoreShuffleReader.read(BlockStoreShuffleReader.scala:47)
    at org.apache.spark.sql.execution.ShuffledRowRDD.compute(ShuffledRowRDD.scala:166)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
),org.apache.spark.scheduler.TaskInfo@7290bdbd,null) {org.apache.spark.scheduler.LiveListenerBus}
TID: [-1] [] [2016-09-21 11:15:15,264] ERROR {org.apache.spark.deploy.worker.Worker} -  Failed to launch executor app-20160921072336-0000/9 for CarbonAnalytics. {org.apache.spark.deploy.worker.Worker}
java.lang.IllegalStateException: Shutdown hooks cannot be modified during shutdown.
    at org.apache.spark.util.SparkShutdownHookManager.add(ShutdownHookManager.scala:246)
    at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:191)
    at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:180)
    at org.apache.spark.deploy.worker.ExecutorRunner.start(ExecutorRunner.scala:75)
    at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:472)
    at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116)
    at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
    at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
    at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
TID: [-1] [] [2016-09-21 11:15:15,281] ERROR {org.apache.spark.scheduler.LiveListenerBus} -  SparkListenerBus has already stopped! Dropping event SparkListenerStageCompleted(org.apache.spark.scheduler.StageInfo@28cfd536) {org.apache.spark.scheduler.LiveListenerBus}
TID: [-1] [] [2016-09-21 11:15:15,294]  WARN {org.apache.spark.deploy.master.Master} -  Got status update for unknown executor app-20160921072336-0000/2 {org.apache.spark.deploy.master.Master}
TID: [-1] [] [2016-09-21 11:15:15,296] ERROR {org.apache.spark.scheduler.DAGSchedulerEventProcessLoop} -  DAGSchedulerEventProcessLoop failed; shutting down SparkContext {org.apache.spark.scheduler.DAGSchedulerEventProcessLoop}
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@33489d49 rejected from java.util.concurrent.ScheduledThreadPoolExecutor@270a5824[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
    at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:326)
    at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:533)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1268)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1637)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
TID: [-1] [] [2016-09-21 11:15:15,299] ERROR {org.apache.spark.deploy.worker.Worker} -  Failed to launch executor app-20160921072336-0000/10 for CarbonAnalytics. {org.apache.spark.deploy.worker.Worker}
java.lang.IllegalStateException: Shutdown hooks cannot be modified during shutdown.
    at org.apache.spark.util.SparkShutdownHookManager.add(ShutdownHookManager.scala:246)
    at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:191)
    at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:180)
    at org.apache.spark.deploy.worker.ExecutorRunner.start(ExecutorRunner.scala:75)
    at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:472)
    at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116)
    at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
    at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
    at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
TID: [-1] [] [2016-09-21 11:15:15,309] ERROR {org.apache.spark.scheduler.LiveListenerBus} -  SparkListenerBus has already stopped! Dropping event SparkListenerStageCompleted(org.apache.spark.scheduler.StageInfo@65f05efa) {org.apache.spark.scheduler.LiveListenerBus}
TID: [-1] [] [2016-09-21 11:15:15,312] ERROR {org.apache.spark.scheduler.LiveListenerBus} -  SparkListenerBus has already stopped! Dropping event SparkListenerJobEnd(281355,1474470915312,JobFailed(org.apache.spark.SparkException: Job 281355 cancelled as part of cancellation of all jobs)) {org.apache.spark.scheduler.LiveListenerBus}
TID: [-1] [] [2016-09-21 11:15:15,313]  WARN {org.apache.spark.deploy.master.Master} -  Got status update for unknown executor app-20160921072336-0000/3 {org.apache.spark.deploy.master.Master}
TID: [-1] [] [2016-09-21 11:15:15,322]  WARN {org.apache.spark.deploy.master.Master} -  Got status update for unknown executor app-20160921072336-0000/4 {org.apache.spark.deploy.master.Master}
TID: [-1] [] [2016-09-21 11:15:15,330] ERROR {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in executing task: execute, tree:
Exchange rangepartitioning(aggOrder#55841 ASC,aggOrder#55842 ASC,200), None
+- ConvertToSafe
   +- TungstenAggregate(key=[tenantID#55839,class#55835,byYear#55836,byMonth#55837], functions=[(sum(cast(classCount#55838 as bigint)),mode=Final,isDistinct=false)], output=[tenantID#55816,logClass#55817,totalClassCount#55818L,_timestamp#55819,aggOrder#55841,aggOrder#55842])
      +- TungstenExchange hashpartitioning(tenantID#55839,class#55835,byYear#55836,byMonth#55837,200), None
         +- TungstenAggregate(key=[tenantID#55839,class#55835,byYear#55836,byMonth#55837], functions=[(sum(cast(classCount#55838 as bigint)),mode=Partial,isDistinct=false)], output=[tenantID#55839,class#55835,byYear#55836,byMonth#55837,sum#55853L])
            +- Project [class#55835,byYear#55836,byMonth#55837,classCount#55838,tenantID#55839]
               +- Sort [aggOrder#55840L ASC], true, 0
                  +- ConvertToUnsafe
                     +- Exchange rangepartitioning(aggOrder#55840L ASC,200), None
                        +- ConvertToSafe
                           +- TungstenAggregate(key=[tenantID#55736,class#55737,_timestamp#55739L], functions=[(first(classCount#55738)(),mode=Final,isDistinct=false)], output=[class#55835,byYear#55836,byMonth#55837,classCount#55838,tenantID#55839,aggOrder#55840L])
                              +- TungstenExchange hashpartitioning(tenantID#55736,class#55737,_timestamp#55739L,200), None
                                 +- TungstenAggregate(key=[tenantID#55736,class#55737,_timestamp#55739L], functions=[(first(classCount#55738)(),mode=Partial,isDistinct=false)], output=[tenantID#55736,class#55737,_timestamp#55739L,first#55850,valueSet#55851])
                                    +- Scan org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelation@9530639[tenantID#55736,class#55737,classCount#55738,_timestamp#55739L] 
 {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter}
org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
Exchange rangepartitioning(aggOrder#55841 ASC,aggOrder#55842 ASC,200), None
+- ConvertToSafe
   +- TungstenAggregate(key=[tenantID#55839,class#55835,byYear#55836,byMonth#55837], functions=[(sum(cast(classCount#55838 as bigint)),mode=Final,isDistinct=false)], output=[tenantID#55816,logClass#55817,totalClassCount#55818L,_timestamp#55819,aggOrder#55841,aggOrder#55842])
      +- TungstenExchange hashpartitioning(tenantID#55839,class#55835,byYear#55836,byMonth#55837,200), None
         +- TungstenAggregate(key=[tenantID#55839,class#55835,byYear#55836,byMonth#55837], functions=[(sum(cast(classCount#55838 as bigint)),mode=Partial,isDistinct=false)], output=[tenantID#55839,class#55835,byYear#55836,byMonth#55837,sum#55853L])
            +- Project [class#55835,byYear#55836,byMonth#55837,classCount#55838,tenantID#55839]
               +- Sort [aggOrder#55840L ASC], true, 0
                  +- ConvertToUnsafe
                     +- Exchange rangepartitioning(aggOrder#55840L ASC,200), None
                        +- ConvertToSafe
                           +- TungstenAggregate(key=[tenantID#55736,class#55737,_timestamp#55739L], functions=[(first(classCount#55738)(),mode=Final,isDistinct=false)], output=[class#55835,byYear#55836,byMonth#55837,classCount#55838,tenantID#55839,aggOrder#55840L])
                              +- TungstenExchange hashpartitioning(tenantID#55736,class#55737,_timestamp#55739L,200), None
                                 +- TungstenAggregate(key=[tenantID#55736,class#55737,_timestamp#55739L], functions=[(first(classCount#55738)(),mode=Partial,isDistinct=false)], output=[tenantID#55736,class#55737,_timestamp#55739L,first#55850,valueSet#55851])
                                    +- Scan org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelation@9530639[tenantID#55736,class#55737,classCount#55738,_timestamp#55739L] 

    at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:49)
    at org.apache.spark.sql.execution.Exchange.doExecute(Exchange.scala:247)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
    at org.apache.spark.sql.execution.ConvertToUnsafe.doExecute(rowFormatConverters.scala:38)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
    at org.apache.spark.sql.execution.Sort.doExecute(Sort.scala:64)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
    at org.apache.spark.sql.execution.Project.doExecute(basicOperators.scala:46)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
    at org.apache.spark.sql.execution.datasources.InsertIntoDataSource.run(InsertIntoDataSource.scala:39)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
    at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
    at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
    at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
    at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
    at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:776)
    at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:737)
    at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
    at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
    at org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:60)
    at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
    at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Job 281355 cancelled as part of cancellation of all jobs
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
    at org.apache.spark.scheduler.DAGScheduler.handleJobCancellation(DAGScheduler.scala:1370)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$doCancelAllJobs$1.apply$mcVI$sp(DAGScheduler.scala:713)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$doCancelAllJobs$1.apply(DAGScheduler.scala:713)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$doCancelAllJobs$1.apply(DAGScheduler.scala:713)
    at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
    at org.apache.spark.scheduler.DAGScheduler.doCancelAllJobs(DAGScheduler.scala:713)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onError(DAGScheduler.scala:1649)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:52)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:927)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:926)
    at org.apache.spark.RangePartitioner$.sketch(Partitioner.scala:264)
    at org.apache.spark.RangePartitioner.<init>(Partitioner.scala:126)
    at org.apache.spark.sql.execution.Exchange.prepareShuffleDependency(Exchange.scala:179)
    at org.apache.spark.sql.execution.Exchange$$anonfun$doExecute$1.apply(Exchange.scala:254)
    at org.apache.spark.sql.execution.Exchange$$anonfun$doExecute$1.apply(Exchange.scala:248)
    at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:48)
    ... 48 more
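
Note: "Job 281355 cancelled as part of cancellation of all jobs" is raised when the DAGScheduler's event loop dies (DAGSchedulerEventProcessLoop.onError -> doCancelAllJobs in the trace above), which fails every in-flight job, including the scheduled analytics script. A minimal sketch of a defensive wrapper that separates this shutdown symptom from a genuine script failure (hypothetical code; class and method names are illustrative, not the product's):

import java.util.concurrent.Callable;

import org.apache.spark.SparkException;

public class AnalyticsScriptGuard {
    // Runs one scheduled script execution, e.g. () -> sqlContext.sql(query).
    public void runScript(Callable<?> scriptExecution) throws Exception {
        try {
            scriptExecution.call();
        } catch (SparkException e) {
            if (String.valueOf(e.getMessage()).contains("cancellation of all jobs")) {
                // The scheduler cancelled everything in flight; retrying is
                // futile until the SparkContext is rebuilt.
                System.err.println("Spark scheduler went down; skipping this run: " + e.getMessage());
            } else {
                throw e; // a real script failure, propagate as before
            }
        }
    }
}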
TID: [-1] [] [2016-09-21 11:15:15,359]  WARN {org.wso2.carbon.analytics.dataservice.core.indexing.IndexNodeCoordinator} -  Error in sending remote record batch delete to member: Member [192.168.1.41]:4020with node id: 0543a174-2bd7-4130-a762-60f4ad43cbaa: Hazelcast instance is not active! -> adding to staging area for later pickup.. {org.wso2.carbon.analytics.dataservice.core.indexing.IndexNodeCoordinator}
TID: [-1] [] [2016-09-21 11:15:15,365]  WARN {org.apache.spark.deploy.master.Master} -  Got status update for unknown executor app-20160921072336-0000/5 {org.apache.spark.deploy.master.Master}
TID: [-1] [] [2016-09-21 11:15:15,368]  WARN {org.apache.spark.deploy.master.Master} -  Got status update for unknown executor app-20160921072336-0000/7 {org.apache.spark.deploy.master.Master}
TID: [-1] [] [2016-09-21 11:15:15,479]  WARN {org.apache.spark.deploy.master.Master} -  Got status update for unknown executor app-20160921072336-0000/1 {org.apache.spark.deploy.master.Master}
TID: [-1] [] [2016-09-21 11:15:15,478] ERROR {org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer} -  Error in processing index batch operations: java.lang.InterruptedException: sleep interrupted {org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer}
org.apache.lucene.util.ThreadInterruptedException: java.lang.InterruptedException: sleep interrupted
    at org.apache.lucene.util.IOUtils.fsync(IOUtils.java:405)
    at org.apache.lucene.store.FSDirectory.fsync(FSDirectory.java:278)
    at org.apache.lucene.store.FSDirectory.sync(FSDirectory.java:223)
    at org.apache.lucene.index.SegmentInfos.write(SegmentInfos.java:509)
    at org.apache.lucene.index.SegmentInfos.prepareCommit(SegmentInfos.java:731)
    at org.apache.lucene.index.IndexWriter.startCommit(IndexWriter.java:4319)
    at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2763)
    at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2866)
    at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2833)
    at org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer.updateIndex(AnalyticsDataIndexer.java:1419)
    at org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer.processIndexOperationBatch(AnalyticsDataIndexer.java:349)
    at org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer.processLocalShardDataQueue(AnalyticsDataIndexer.java:304)
    at org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer.processIndexOperationsSlice(AnalyticsDataIndexer.java:235)
    at org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer.processIndexOperations(AnalyticsDataIndexer.java:223)
    at org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer.access$100(AnalyticsDataIndexer.java:85)
    at org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer$IndexWorker.run(AnalyticsDataIndexer.java:2184)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
    Suppressed: java.nio.channels.ClosedByInterruptException
        at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
        at sun.nio.ch.FileChannelImpl.force(FileChannelImpl.java:390)
        at org.apache.lucene.util.IOUtils.fsync(IOUtils.java:395)
        ... 18 more
Caused by: java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at org.apache.lucene.util.IOUtils.fsync(IOUtils.java:403)
    ... 18 more
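
Note: the ThreadInterruptedException above is a shutdown symptom rather than index corruption: the IndexWorker thread is interrupted mid-commit, FileChannel.force() aborts with the suppressed ClosedByInterruptException, and the sleep inside the fsync retry loop rethrows the interrupt. A minimal sketch of that retry-and-sleep pattern (an illustration of the mechanism visible in the trace, not Lucene's actual source):

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public final class FsyncWithRetry {
    static void fsync(Path path) throws IOException {
        IOException lastFailure = null;
        for (int retry = 0; retry < 5; retry++) {
            try (FileChannel ch = FileChannel.open(path, StandardOpenOption.WRITE)) {
                ch.force(true); // interrupt here -> ClosedByInterruptException (an IOException)
                return;
            } catch (IOException ioe) {
                lastFailure = ioe;
                try {
                    Thread.sleep(5); // interrupt here -> the "sleep interrupted" in the log,
                                     // which Lucene wraps in ThreadInterruptedException
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw new RuntimeException("sleep interrupted during fsync retry", ie);
                }
            }
        }
        throw lastFailure;
    }
}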
TID: [-1] [] [2016-09-21 11:15:16,048] ERROR {org.apache.spark.util.Utils} -  Uncaught exception in thread Thread-67 {org.apache.spark.util.Utils}
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
    at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
    at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    at scala.concurrent.Await$.result(package.scala:107)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
    at org.apache.spark.storage.BlockManagerMaster.tell(BlockManagerMaster.scala:225)
    at org.apache.spark.storage.BlockManagerMaster.stop(BlockManagerMaster.scala:217)
    at org.apache.spark.SparkEnv.stop(SparkEnv.scala:97)
    at org.apache.spark.SparkContext$$anonfun$stop$12.apply$mcV$sp(SparkContext.scala:1756)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1219)
    at org.apache.spark.SparkContext.stop(SparkContext.scala:1755)
    at org.apache.spark.SparkContext$$anonfun$3.apply$mcV$sp(SparkContext.scala:596)
    at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:267)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:239)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:239)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:239)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1801)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:239)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:239)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:239)
    at scala.util.Try$.apply(Try.scala:161)
    at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:239)
    at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:218)
    at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
TID: [-1] [] [2016-09-21 11:15:16,051]  WARN {org.apache.hadoop.util.ShutdownHookManager} -  ShutdownHook '$anon$2' failed, java.lang.InterruptedException {org.apache.hadoop.util.ShutdownHookManager}
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
    at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
    at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    at scala.concurrent.Await$.result(package.scala:107)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
    at org.apache.spark.storage.BlockManagerMaster.tell(BlockManagerMaster.scala:225)
    at org.apache.spark.storage.BlockManagerMaster.stop(BlockManagerMaster.scala:217)
    at org.apache.spark.SparkEnv.stop(SparkEnv.scala:97)
    at org.apache.spark.SparkContext$$anonfun$stop$12.apply$mcV$sp(SparkContext.scala:1756)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1219)
    at org.apache.spark.SparkContext.stop(SparkContext.scala:1755)
    at org.apache.spark.SparkContext$$anonfun$3.apply$mcV$sp(SparkContext.scala:596)
    at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:267)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:239)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:239)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:239)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1801)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:239)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:239)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:239)
    at scala.util.Try$.apply(Try.scala:161)
    at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:239)
    at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:218)
    at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
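
Note: the Thread-67 error and the ShutdownHook warning above are the same failure logged twice: Spark's shutdown hook blocks in Await.result waiting for a BlockManagerMaster RPC reply that can no longer arrive (the executors are already gone), until the wait itself is interrupted. A self-contained sketch of that pattern (hypothetical; CompletableFuture stands in for the Scala promise):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class InterruptedAwait {
    public static void main(String[] args) throws Exception {
        // An RPC reply that will never be completed -- the remote endpoint is gone.
        CompletableFuture<String> rpcReply = new CompletableFuture<>();

        Thread hookLike = new Thread(() -> {
            try {
                rpcReply.get(120, TimeUnit.SECONDS); // ~ Await.result in BlockManagerMaster.stop()
            } catch (InterruptedException e) {
                // Corresponds to the "Uncaught exception in thread Thread-67" above.
                System.err.println("Interrupted while awaiting RPC reply: " + e);
            } catch (Exception e) {
                System.err.println("RPC wait failed: " + e);
            }
        }, "Thread-67");

        hookLike.start();
        TimeUnit.MILLISECONDS.sleep(200);
        hookLike.interrupt(); // what the halting JVM effectively does to the hook
    }
}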
TID: [-1] [] [2016-09-21 11:15:16,405]  INFO {org.wso2.carbon.event.processor.manager.commons.transport.server.TCPEventServer} -  Closing listener socket. Connection closed from remote end. {org.wso2.carbon.event.processor.manager.commons.transport.server.TCPEventServer}
TID: [-1] [] [2016-09-21 11:15:17,438]  WARN {org.apache.spark.deploy.master.Master} -  Got status update for unknown executor app-20160921072336-0000/6 {org.apache.spark.deploy.master.Master}
TID: [-1] [] [2016-09-21 11:15:17,439]  WARN {org.apache.spark.deploy.master.Master} -  Got status update for unknown executor app-20160921072336-0000/8 {org.apache.spark.deploy.master.Master}
TID: [-1] [] [2016-09-21 11:15:17,441]  WARN {org.apache.spark.deploy.master.Master} -  Got status update for unknown executor app-20160921072336-0000/9 {org.apache.spark.deploy.master.Master}
TID: [-1] [] [2016-09-21 11:15:17,442]  WARN {org.apache.spark.deploy.master.Master} -  Got status update for unknown executor app-20160921072336-0000/10 {org.apache.spark.deploy.master.Master}
TID: [-1234] [] [2016-09-21 11:15:17,945]  INFO {org.wso2.carbon.tomcat.ext.internal.CarbonTomcatServiceComponent} -  Stopping the carbon web-app registered under : / {org.wso2.carbon.tomcat.ext.internal.CarbonTomcatServiceComponent}
TID: [-1234] [] [2016-09-21 11:15:18,042]  INFO {org.wso2.carbon.metrics.impl.reporter.AbstractReporter} -  Stopped JDBC reporter for Metrics {org.wso2.carbon.metrics.impl.reporter.AbstractReporter}
TID: [-1234] [] [2016-09-21 11:15:18,048]  INFO {org.wso2.carbon.metrics.impl.reporter.AbstractReporter} -  Stopped JMX reporter for Metrics {org.wso2.carbon.metrics.impl.reporter.AbstractReporter}
TID: [-1234] [] [2016-09-21 11:15:18,054]  INFO {org.wso2.carbon.deployment.synchronizer.DeploymentSynchronizer} -  Terminating the synchronizer on file system at: /home/wso2/Documents/MB40/40/M3/wso2analytics-mb-3.2.0-M3/repository/deployment/server/ {org.wso2.carbon.deployment.synchronizer.DeploymentSynchronizer}
TID: [-1] [] [2016-09-21 11:15:18,222] ERROR {org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer} -  Error in processing index batch operations: java.lang.InterruptedException: sleep interrupted {org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer}
org.apache.lucene.util.ThreadInterruptedException: java.lang.InterruptedException: sleep interrupted
    at org.apache.lucene.util.IOUtils.fsync(IOUtils.java:405)
    at org.apache.lucene.store.FSDirectory.fsync(FSDirectory.java:278)
    at org.apache.lucene.store.FSDirectory.sync(FSDirectory.java:223)
    at org.apache.lucene.index.IndexWriter.startCommit(IndexWriter.java:4336)
    at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2763)
    at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2866)
    at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2833)
    at org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer.updateIndex(AnalyticsDataIndexer.java:1419)
    at org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer.processIndexOperationBatch(AnalyticsDataIndexer.java:349)
    at org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer.processLocalShardDataQueue(AnalyticsDataIndexer.java:304)
    at org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer.processIndexOperationsSlice(AnalyticsDataIndexer.java:235)
    at org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer.processIndexOperations(AnalyticsDataIndexer.java:223)
    at org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer.access$100(AnalyticsDataIndexer.java:85)
    at org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer$IndexWorker.run(AnalyticsDataIndexer.java:2184)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
    Suppressed: java.nio.channels.ClosedByInterruptException
        at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
        at sun.nio.ch.FileChannelImpl.force(FileChannelImpl.java:390)
        at org.apache.lucene.util.IOUtils.fsync(IOUtils.java:395)
        ... 16 more
Caused by: java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at org.apache.lucene.util.IOUtils.fsync(IOUtils.java:403)
    ... 16 more
TID: [-1] [] [2016-09-21 11:15:22,896] ERROR {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in executing task: null {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter}
java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at java.lang.Object.wait(Object.java:502)
    at org.apache.spark.scheduler.JobWaiter.awaitResult(JobWaiter.scala:73)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:612)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1915)
    at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelation.writeDataFrameToDAL(AnalyticsRelation.java:233)
    at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelation.insert(AnalyticsRelation.java:223)
    at org.apache.spark.sql.execution.datasources.InsertIntoDataSource.run(InsertIntoDataSource.scala:40)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
    at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
    at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
    at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
    at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
    at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:776)
    at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:737)
    at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
    at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
    at org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:60)
    at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
    at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
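
Note: "Error in executing task: null" is simply InterruptedException.getMessage() returning null: the Quartz worker is parked in Object.wait() inside JobWaiter.awaitResult when its thread pool is torn down during shutdown. A minimal reproduction of that interaction (hypothetical, not the product's code):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class InterruptedTask {
    public static void main(String[] args) throws Exception {
        final Object jobResultMonitor = new Object(); // ~ the JobWaiter the task blocks on
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.submit(() -> {
            synchronized (jobResultMonitor) {
                try {
                    jobResultMonitor.wait(); // the Object.wait() at the top of the trace
                } catch (InterruptedException e) {
                    // getMessage() is null -> logged as "Error in executing task: null"
                    System.err.println("Task interrupted: " + e);
                }
            }
        });
        TimeUnit.MILLISECONDS.sleep(200);
        pool.shutdownNow(); // server shutdown interrupts the blocked worker
    }
}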
TID: [-1234] [] [2016-09-21 11:15:25,984]  INFO {org.wso2.carbon.core.init.CarbonServerManager} -  Shutdown complete {org.wso2.carbon.core.init.CarbonServerManager}
TID: [-1234] [] [2016-09-21 11:15:25,984]  INFO {org.wso2.carbon.core.init.CarbonServerManager} -  Halting JVM {org.wso2.carbon.core.init.CarbonServerManager}

Attachments: conf41.tar.gz, conf40.zip

Affected Product Version: MB 3.2.0 - M3

OS, DB, other environment details and versions:
Pack: MB 3.2.0 - M3
Setup: 2-node MB cluster + 2-node Analytics cluster
DB: PostgreSQL
JDK: Oracle 1.8

Steps to reproduce: Follow the steps described in MB-1785 (linked under Related Issues below).

Related Issues: https://wso2.org/jira/browse/ANLYMB-22 https://wso2.org/jira/browse/MB-1785

GDLMadushanka commented 4 years ago

Analytics has been upgraded to a new product (Stream Processor).