garanews opened this issue:
Hi guys, I'm seeing more or less the same after upgrading to 3.1.2-1. It seems to slowly run out of memory within a day. I thought it might be due to Cortex now also querying Elasticsearch, but at the moment of the crash Cortex shouldn't have anything to do. I hope you can help us, because the environment breaks less than a day after being started. See the stack traces below...
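For reference, these are the JVM options I would expect to control the heap and capture a dump on crash; a minimal sketch, assuming a package install where TheHive picks up extra JVM flags from /etc/default/thehive (the file path and the variable name are assumptions on my side, the -Xmx/-XX flags themselves are standard HotSpot options):

# /etc/default/thehive  (assumed location for a .deb/.rpm install)
# cap the heap at 8 GB and write a heap dump when the JVM runs out of memory,
# so the crash can be analysed afterwards
JAVA_OPTS="-Xms8G -Xmx8G -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/thehive"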
2018-10-19 23:28:17,918 [INFO] from connectors.misp.MispSynchro in application-akka.actor.default-dispatcher-17 - Update of MISP events is starting ...
2018-10-19 23:28:18,383 [INFO] from connectors.misp.MispSynchro in application-akka.actor.default-dispatcher-16 - Synchronize MISP MISP-extern from Some(Wed Oct 17 15:52:08 CEST 2018)
2018-10-19 23:29:02,955 [INFO] from connectors.misp.MispSynchro in application-akka.actor.default-dispatcher-15 - Misp synchronization failed
java.net.ConnectException: handshake timed out
at play.shaded.ahc.org.asynchttpclient.netty.channel.NettyConnectListener.onFailure(NettyConnectListener.java:168)
at play.shaded.ahc.org.asynchttpclient.netty.channel.NettyConnectListener$1.onFailure(NettyConnectListener.java:139)
at play.shaded.ahc.org.asynchttpclient.netty.SimpleFutureListener.operationComplete(SimpleFutureListener.java:26)
at play.shaded.ahc.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
at play.shaded.ahc.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
at play.shaded.ahc.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
at play.shaded.ahc.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
at play.shaded.ahc.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:122)
at play.shaded.ahc.io.netty.handler.ssl.SslHandler.notifyHandshakeFailure(SslHandler.java:1443)
at play.shaded.ahc.io.netty.handler.ssl.SslHandler.access$1100(SslHandler.java:161)
at play.shaded.ahc.io.netty.handler.ssl.SslHandler$5.run(SslHandler.java:1613)
at play.shaded.ahc.io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at play.shaded.ahc.io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
at play.shaded.ahc.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:399)
at play.shaded.ahc.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:464)
at play.shaded.ahc.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at play.shaded.ahc.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.net.ssl.SSLException: handshake timed out
at play.shaded.ahc.io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source)
2018-10-19 23:29:15,917 [INFO] from connectors.misp.MispSynchro in application-akka.actor.default-dispatcher-18 - Update of MISP events is starting ...
2018-10-19 23:29:15,918 [INFO] from connectors.misp.MispSynchro in application-akka.actor.default-dispatcher-14 - Synchronize MISP MISP-extern from Some(Wed Oct 17 15:52:08 CEST 2018)
2018-10-19 23:29:16,174 [INFO] from connectors.misp.MispSynchro in application-akka.actor.default-dispatcher-19 - Misp synchronization completed
2018-10-19 23:30:17,922 [INFO] from connectors.misp.MispSynchro in application-akka.actor.default-dispatcher-2 - Update of MISP events is starting ...
2018-10-19 23:30:18,361 [INFO] from connectors.misp.MispSynchro in application-akka.actor.default-dispatcher-14 - Synchronize MISP MISP-extern from Some(Wed Oct 17 15:52:08 CEST 2018)
2018-10-19 23:31:17,694 [INFO] from connectors.misp.MispSynchro in application-akka.actor.default-dispatcher-18 - Misp synchronization completed
2018-10-19 23:32:12,043 [ERROR] from akka.dispatch.TaskInvocation in application-akka.actor.default-dispatcher-17 - Futures timed out after [10 seconds]
java.util.concurrent.TimeoutException: Futures timed out after [10 seconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:255)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:259)
at scala.concurrent.Await$.$anonfun$result$1(package.scala:215)
at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread$$anon$3.block(ThreadPoolBuilder.scala:167)
at akka.dispatch.forkjoin.ForkJoinPool.managedBlock(ForkJoinPool.java:3641)
at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread.blockOn(ThreadPoolBuilder.scala:165)
at scala.concurrent.Await$.result(package.scala:142)
at com.sksamuel.elastic4s.ElasticApi$RichFuture.await(ElasticApi.scala:79)
at org.elastic4play.database.DBIndex.$anonfun$indexStatus$1(DBIndex.scala:100)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:12)
at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread$$anon$3.block(ThreadPoolBuilder.scala:167)
at akka.dispatch.forkjoin.ForkJoinPool.managedBlock(ForkJoinPool.java:3641)
at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread.blockOn(ThreadPoolBuilder.scala:165)
at scala.concurrent.package$.blocking(package.scala:142)
at org.elastic4play.database.DBIndex.indexStatus(DBIndex.scala:100)
at org.elastic4play.services.MigrationSrv.isReady(MigrationSrv.scala:177)
at connectors.misp.MispSynchro.$anonfun$initScheduler$1(MispSynchro.scala:48)
at akka.actor.Scheduler$$anon$2.run(Scheduler.scala:83)
at akka.actor.LightArrayRevolverScheduler$$anon$2$$anon$1.run(LightArrayRevolverScheduler.scala:101)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:43)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
2018-10-19 23:40:03,858 [INFO] from org.elastic4play.ErrorHandler in application-akka.actor.default-dispatcher-14 - POST /api/alert returned 400
org.elastic4play.ConflictError: [alert][ac4497dea354bafef44364d20b754f2c]: version conflict, document already exists (current version [2])
at org.elastic4play.database.DBCreate.convertError(DBCreate.scala:81)
at org.elastic4play.database.DBCreate.convertError(DBCreate.scala:80)
at org.elastic4play.database.DBCreate.$anonfun$apply$10(DBCreate.scala:76)
at scala.concurrent.Future.$anonfun$transform$3(Future.scala:241)
at scala.concurrent.Future.$anonfun$transform$1(Future.scala:241)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:43)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
2018-10-19 23:57:38,642 [INFO] from org.elasticsearch.client.transport.TransportClientNodesService in elasticsearch[_client_][generic][T#1] - failed to get node info for {#transport#-1}{mTnQJOu6TSuJ8Oryp4q7jQ}{127.0.0.1}{127.0.0.1:9300}, disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][127.0.0.1:9300][cluster:monitor/nodes/liveness] request_id [257447] timed out after [14012ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:961)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-10-19 23:57:39,540 [WARN] from org.elasticsearch.transport.TransportService in elasticsearch[_client_][transport_client_boss][T#4] - Received response for a request that has timed out, sent [45242ms] ago, timed out [31230ms] ago, action [cluster:monitor/nodes/liveness], node [{#transport#-1}{mTnQJOu6TSuJ8Oryp4q7jQ}{127.0.0.1}{127.0.0.1:9300}], id [257447]
2018-10-19 23:58:16,536 [WARN] from play.shaded.ahc.io.netty.util.concurrent.DefaultPromise in AsyncHttpClient-8-11 - An exception was thrown by play.shaded.ahc.org.asynchttpclient.netty.request.NettyChannelConnector$1.operationComplete()
java.lang.OutOfMemoryError: Java heap space
at sun.security.ssl.InputRecord.<init>(InputRecord.java:93)
at sun.security.ssl.EngineInputRecord.<init>(EngineInputRecord.java:63)
at sun.security.ssl.SSLEngineImpl.init(SSLEngineImpl.java:426)
at sun.security.ssl.SSLEngineImpl.<init>(SSLEngineImpl.java:357)
at sun.security.ssl.SSLContextImpl$AbstractTLSContext.createSSLEngineImpl(SSLContextImpl.java:465)
at sun.security.ssl.SSLContextImpl.engineCreateSSLEngine(SSLContextImpl.java:203)
at javax.net.ssl.SSLContext.createSSLEngine(SSLContext.java:361)
at play.shaded.ahc.org.asynchttpclient.netty.ssl.JsseSslEngineFactory.newSslEngine(JsseSslEngineFactory.java:31)
at play.shaded.ahc.org.asynchttpclient.netty.channel.ChannelManager.createSslHandler(ChannelManager.java:398)
at play.shaded.ahc.org.asynchttpclient.netty.channel.ChannelManager.addSslHandler(ChannelManager.java:449)
at play.shaded.ahc.org.asynchttpclient.netty.channel.NettyConnectListener.onSuccess(NettyConnectListener.java:115)
at play.shaded.ahc.org.asynchttpclient.netty.request.NettyChannelConnector$1.onSuccess(NettyChannelConnector.java:92)
at play.shaded.ahc.org.asynchttpclient.netty.SimpleChannelFutureListener.operationComplete(SimpleChannelFutureListener.java:26)
at play.shaded.ahc.org.asynchttpclient.netty.SimpleChannelFutureListener.operationComplete(SimpleChannelFutureListener.java:20)
at play.shaded.ahc.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
at play.shaded.ahc.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
at play.shaded.ahc.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
at play.shaded.ahc.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
at play.shaded.ahc.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
at play.shaded.ahc.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
at play.shaded.ahc.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:257)
at play.shaded.ahc.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:292)
at play.shaded.ahc.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:634)
at play.shaded.ahc.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:581)
at play.shaded.ahc.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498)
at play.shaded.ahc.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:460)
at play.shaded.ahc.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at play.shaded.ahc.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:748)
2018-10-19 23:58:36,658 [WARN] from play.shaded.ahc.io.netty.util.HashedWheelTimer in pool-3-thread-1 - An exception was thrown by TimerTask.
java.lang.OutOfMemoryError: Java heap space
2018-10-19 23:59:24,362 [WARN] from org.jboss.netty.channel.socket.nio.AbstractNioSelector in New I/O boss #3 - Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
at java.util.HashMap$KeySet.iterator(HashMap.java:917)
at java.util.HashSet.iterator(HashSet.java:173)
at java.util.Collections$UnmodifiableCollection$1.<init>(Collections.java:1039)
at java.util.Collections$UnmodifiableCollection.iterator(Collections.java:1038)
at org.jboss.netty.channel.socket.nio.NioClientBoss.processConnectTimeout(NioClientBoss.java:118)
at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:83)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-10-20 00:00:04,450 [WARN] from play.shaded.ahc.io.netty.channel.DefaultChannelPipeline in AsyncHttpClient-6-14 - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
play.shaded.ahc.io.netty.channel.ChannelPipelineException: play.shaded.ahc.org.asynchttpclient.netty.channel.ChannelManager$2.handlerAdded() has thrown an exception; removed.
at play.shaded.ahc.io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:610)
at play.shaded.ahc.io.netty.channel.DefaultChannelPipeline.access$000(DefaultChannelPipeline.java:44)
at play.shaded.ahc.io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask.execute(DefaultChannelPipeline.java:1355)
at play.shaded.ahc.io.netty.channel.DefaultChannelPipeline.callHandlerAddedForAllHandlers(DefaultChannelPipeline.java:1090)
at play.shaded.ahc.io.netty.channel.DefaultChannelPipeline.invokeHandlerAddedIfNeeded(DefaultChannelPipeline.java:640)
at play.shaded.ahc.io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:456)
at play.shaded.ahc.io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:378)
at play.shaded.ahc.io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:428)
at play.shaded.ahc.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:399)
at play.shaded.ahc.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:464)
at play.shaded.ahc.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at play.shaded.ahc.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Java heap space
at java.util.concurrent.ConcurrentHashMap.putVal(ConcurrentHashMap.java:1019)
at java.util.concurrent.ConcurrentHashMap.putIfAbsent(ConcurrentHashMap.java:1535)
at play.shaded.ahc.io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:111)
at play.shaded.ahc.io.netty.channel.ChannelInitializer.handlerAdded(ChannelInitializer.java:105)
at play.shaded.ahc.io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:590)
... 12 common frames omitted
2018-10-20 00:02:34,781 [INFO] from org.elasticsearch.client.transport.TransportClientNodesService in elasticsearch[_client_][generic][T#4] - failed to get node info for {#transport#-1}{mTnQJOu6TSuJ8Oryp4q7jQ}{127.0.0.1}{127.0.0.1:9300}, disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][127.0.0.1:9300][cluster:monitor/nodes/liveness] request_id [257449] timed out after [27350ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:961)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-10-20 00:03:16,379 [WARN] from org.elasticsearch.transport.TransportService in elasticsearch[_client_][transport_client_boss][T#5] - Received response for a request that has timed out, sent [88255ms] ago, timed out [60905ms] ago, action [cluster:monitor/nodes/liveness], node [{#transport#-1}{mTnQJOu6TSuJ8Oryp4q7jQ}{127.0.0.1}{127.0.0.1:9300}], id [257449]
2018-10-20 00:06:16,865 [ERROR] from akka.actor.ActorSystemImpl in application-akka.actor.default-dispatcher-16 - Uncaught error from thread [application-akka.actor.default-dispatcher-14]: Java heap space, shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[application]
java.lang.OutOfMemoryError: Java heap space
at akka.dispatch.Envelope$.apply(AbstractDispatcher.scala:28)
at akka.actor.Cell.sendMessage(ActorCell.scala:354)
at akka.actor.Cell.sendMessage$(ActorCell.scala:353)
at akka.actor.ActorCell.sendMessage(ActorCell.scala:433)
at akka.actor.LocalActorRef.$bang(ActorRef.scala:400)
at akka.stream.impl.io.TcpConnectionStage$TcpStreamLogic$$anon$6.onPush(TcpStages.scala:329)
at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:519)
at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:411)
at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:585)
at akka.stream.impl.fusing.ActorGraphInterpreter$SimpleBoundaryEvent.execute(ActorGraphInterpreter.scala:44)
at akka.stream.impl.fusing.ActorGraphInterpreter$SimpleBoundaryEvent.execute$(ActorGraphInterpreter.scala:40)
at akka.stream.impl.fusing.ActorGraphInterpreter$BatchingActorInputBoundary$OnNext.execute(ActorGraphInterpreter.scala:77)
at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:560)
at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:742)
at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:757)
at akka.actor.Actor.aroundReceive(Actor.scala:517)
at akka.actor.Actor.aroundReceive$(Actor.scala:515)
at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:667)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:590)
at akka.actor.ActorCell.invoke(ActorCell.scala:559)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
at akka.dispatch.Mailbox.run(Mailbox.scala:224)
at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
2018-10-20 00:06:19,465 [INFO] from play.core.server.AkkaHttpServer in Thread-5 - Stopping server...
2018-10-20 00:10:19,104 [ERROR] from akka.actor.ActorSystemImpl in application-akka.actor.default-dispatcher-3 - Uncaught error from thread [application-akka.actor.default-dispatcher-5926]: Java heap space, shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[application]
java.lang.OutOfMemoryError: Java heap space
2018-10-20 00:11:23,112 [WARN] from org.elasticsearch.transport.TransportService in elasticsearch[_client_][transport_client_boss][T#10] - Received response for a request that has timed out, sent [30827ms] ago, timed out [12305ms] ago, action [cluster:monitor/nodes/liveness], node [{#transport#-1}{mTnQJOu6TSuJ8Oryp4q7jQ}{127.0.0.1}{127.0.0.1:9300}], id [257459]
2018-10-20 00:12:44,368 [INFO] from org.elasticsearch.client.transport.TransportClientNodesService in elasticsearch[_client_][generic][T#1] - failed to get node info for {#transport#-1}{mTnQJOu6TSuJ8Oryp4q7jQ}{127.0.0.1}{127.0.0.1:9300}, disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][127.0.0.1:9300][cluster:monitor/nodes/liveness] request_id [257459] timed out after [18522ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:961)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Request Type
Bug
Work Environment
Problem Description
The TheHive service crashes regularly due to Java heap space exhaustion; I don't know whether it is related to the MISP synchronization. Current RAM settings: Elasticsearch 12 GB, TheHive 8 GB. No webhooks are configured.
tail -f thehive_err.log
tail -f thehive.log
tail -f application.log
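If it helps with triage, this is roughly how I have been watching the heap before a crash; a hedged sketch using standard JDK tools (jstat, jmap), where the way of finding the TheHive process id via pgrep is an assumption:

# find the TheHive JVM pid (matching on "thehive" is an assumption about the process name)
PID=$(pgrep -f thehive | head -1)
# print GC and heap utilisation every 10 seconds to watch old-gen fill up
jstat -gcutil "$PID" 10000
# live object histogram, to see which classes are filling the heap (note: triggers a full GC)
jmap -histo:live "$PID" | head -40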