Graylog2 / graylog2-server

Free and open log management
https://www.graylog.org

Java heap (memory) issues. #151

Closed sebyfrancis closed 11 years ago

sebyfrancis commented 11 years ago

The Graylog2 server is in a funky state after I see the memory issue below. I have the full logs with me if you need them.

2013-06-04 11:01:07,765 WARN : org.elasticsearch.monitor.jvm - [graylog2-server] [gc][PS Scavenge][557579][81500] duration [10.8s], collections [1]/[11.1s], total [10.8s]/[27.6m], memory [558.2mb]->[301mb]/[853.3mb], all_pools {[Code Cache] [5.7mb]->[5.7mb]/[48mb]}{[PS Eden Space] [258mb]->[3.8mb]/[261.2mb]}{[PS Survivor Space] [25.7mb]->[15.8mb]/[28.8mb]}{[PS Old Gen] [274.4mb]->[281.3mb]/[640mb]}{[PS Perm Gen] [30mb]->[30mb]/[166mb]}
2013-06-04 16:18:10,111 WARN : org.elasticsearch.transport - [graylog2-server] Transport response handler not found of id [127747715]

com.rabbitmq.client.AlreadyClosedException: clean connection shutdown; reason: Attempt to use closed channel
    at com.rabbitmq.client.impl.AMQChannel.ensureIsOpen(AMQChannel.java:190)
    at com.rabbitmq.client.impl.AMQChannel.transmit(AMQChannel.java:291)
    at com.rabbitmq.client.impl.AMQChannel.transmit(AMQChannel.java:285)
    at com.rabbitmq.client.impl.ChannelN.basicNack(ChannelN.java:912)
    at org.graylog2.inputs.amqp.AMQPConsumer$1.handleDelivery(AMQPConsumer.java:227)
    at com.rabbitmq.client.impl.ConsumerDispatcher$4.run(ConsumerDispatcher.java:121)
    at com.rabbitmq.client.impl.ConsumerWorkService$WorkPoolRunnable.run(ConsumerWorkService.java:76)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)
DefaultExceptionHandler: Consumer org.graylog2.inputs.amqp.AMQPConsumer$1@77c95042 (amq.ctag-I34TM-G9191ZV0_STIHIHQ) method handleDelivery for channel AMQChannel(amqp://guest@192.168.3.100:5672/,1) threw an exception for channel AMQChannel(amqp://guest@192.168.3.100:5672/,1):
com.rabbitmq.client.AlreadyClosedException: clean connection shutdown; reason: Attempt to use closed channel
    at com.rabbitmq.client.impl.AMQChannel.ensureIsOpen(AMQChannel.java:190)
    at com.rabbitmq.client.impl.AMQChannel.transmit(AMQChannel.java:291)
    at com.rabbitmq.client.impl.AMQChannel.transmit(AMQChannel.java:285)
    at com.rabbitmq.client.impl.ChannelN.basicNack(ChannelN.java:912)
    at org.graylog2.inputs.amqp.AMQPConsumer$1.handleDelivery(AMQPConsumer.java:227)
    at com.rabbitmq.client.impl.ConsumerDispatcher$4.run(ConsumerDispatcher.java:121)
    at com.rabbitmq.client.impl.ConsumerWorkService$WorkPoolRunnable.run(ConsumerWorkService.java:76)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)
DefaultExceptionHandler: Consumer org.graylog2.inputs.amqp.AMQPConsumer$1@77c95042 (amq.ctag-I34TM-G9191ZV0_STIHIHQ) method handleDelivery for channel AMQChannel(amqp://guest@192.168.3.100:5672/,1) threw an exception for channel AMQChannel(amqp://guest@192.168.3.100:5672/,1):




Jun 04, 2013 4:30:30 PM com.lmax.disruptor.FatalExceptionHandler handleEventException
SEVERE: Exception processing: 131041536 org.graylog2.buffers.LogMessageEvent@75296dec
java.lang.OutOfMemoryError: GC overhead limit exceeded

Exception in thread "outputbufferprocessor-8" java.lang.RuntimeException: java.lang.OutOfMemoryError: GC overhead limit exceeded
    at com.lmax.disruptor.FatalExceptionHandler.handleEventException(FatalExceptionHandler.java:45)
    at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:139)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded

2013-06-04 16:25:34,166 WARN : org.elasticsearch.transport.netty - [graylog2-server] exception caught on transport layer [[id: 0x2c617e5c, /192.168.3.100:46851 => /192.168.3.100:9300]], closing connection
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.Arrays.copyOfRange(Arrays.java:2694)
    at java.lang.String.<init>(String.java:203)
    at org.elasticsearch.common.io.stream.StreamInput.readString(StreamInput.java:220)
    at org.elasticsearch.common.io.stream.HandlesStreamInput.readString(HandlesStreamInput.java:74)
    at org.elasticsearch.action.index.IndexResponse.readFrom(IndexResponse.java:137)
    at org.elasticsearch.action.bulk.BulkItemResponse.readFrom(BulkItemResponse.java:298)
    at org.elasticsearch.action.bulk.BulkItemResponse.readFrom(BulkItemResponse.java:298)
    at org.elasticsearch.action.bulk.BulkItemResponse.readBulkItem(BulkItemResponse.java:286)
    at org.elasticsearch.action.bulk.BulkShardResponse.readFrom(BulkShardResponse.java:59)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:148)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:127)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:787)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:555)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)

2013-06-04 16:32:14,829 WARN : org.elasticsearch.discovery.zen - [graylog2-server] master_left and no other node elected to become master, current nodes: {[graylog2-server][eYdLBD8ASpSFYOZPEgPrZw][inet[/192.168.3.100:9301]]{client=true, data=false, master=false},}

2013-06-04 16:26:28,270 WARN : org.elasticsearch.transport.netty - [graylog2-server] exception caught on transport layer [[id: 0x0a9d4e52, /192.168.3.100:46853 => /192.168.3.100:9300]], closing connection
java.lang.OutOfMemoryError: GC overhead limit exceeded

2013-06-04 16:32:14,912 WARN : org.elasticsearch.transport.netty - [graylog2-server] exception caught on transport layer [[id: 0x0a9d4e52, /192.168.3.100:46853 :> /192.168.3.100:9300]], closing connection
java.io.StreamCorruptedException: invalid internal transport message format
    at org.elasticsearch.transport.netty.SizeHeaderFrameDecoder.decode(SizeHeaderFrameDecoder.java:27)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:425)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:555)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)

2013-06-04 16:32:14,913 WARN : org.elasticsearch.transport.netty - [graylog2-server] exception caught on transport layer [[id: 0x0a9d4e52, /192.168.3.100:46853 :> /192.168.3.100:9300]], closing connection
java.io.StreamCorruptedException: invalid internal transport message format
    at org.elasticsearch.transport.netty.SizeHeaderFrameDecoder.decode(SizeHeaderFrameDecoder.java:27)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:425)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.cleanup(FrameDecoder.java:482)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.channelDisconnected(FrameDecoder.java:365)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:102)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:555)
    at org.elasticsearch.common.netty.channel.Channels.fireChannelDisconnected(Channels.java:396)
    at org.elasticsearch.common.netty.channel.Channels$4.run(Channels.java:386)
    at org.elasticsearch.common.netty.channel.socket.ChannelRunnableWrapper.run(ChannelRunnableWrapper.java:40)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:366)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:290)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)

2013-06-04 16:26:18,288 WARN : org.elasticsearch.transport.netty - [graylog2-server] exception caught on transport layer [[id: 0x806e45dc, /192.168.3.100:46850 => /192.168.3.100:9300]], closing connection
java.lang.OutOfMemoryError: GC overhead limit exceeded

2013-06-04 16:26:07,146 WARN : org.elasticsearch.transport.netty - [graylog2-server] exception caught on transport layer [[id: 0x07b0df22, /192.168.3.100:45566 :> /192.168.3.100:9301]], closing connection
java.lang.OutOfMemoryError: GC overhead limit exceeded

lennartkoopmann commented 11 years ago

Can you describe the funky state? :)

The fix for ElasticSearch is to give it more heap space. How to do it depends on your setup but is usually controlled by environment variables in your ElasticSearch init file.
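For example, with the Debian/Ubuntu packages of this vintage the setting usually lives in /etc/default/elasticsearch; treat the file name and the size below as an illustration, not a prescription:

    # /etc/default/elasticsearch  (example only -- size it to your machine)
    ES_HEAP_SIZE=2g    # total JVM heap for the Elasticsearch node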

sebyfrancis commented 11 years ago

The graylog2/elasticsearch setup seems to be in a hung state, as it is not receiving or indexing any of the messages we are sending.

Let me try increasing the heap space for ES and I will let you know.

Regards, Seby.

kroepke commented 11 years ago

How much memory does it have now? It's very important that ElasticSearch nodes never swap. ES relies heavily on the file system cache, so be sure to leave enough free memory for the OS.
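If you want to enforce that, the usual knobs look roughly like this (example settings, adjust for your install):

    # keep the Elasticsearch heap out of swap
    sudo swapoff -a                  # or set vm.swappiness=1 in /etc/sysctl.conf
    # and/or in elasticsearch.yml (requires 'ulimit -l unlimited' for the ES user):
    bootstrap.mlockall: true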

sebyfrancis commented 11 years ago

Currently the server has 4GB.

lennartkoopmann commented 11 years ago

What do you mean by server? The graylog2-server, ElasticSearch, or the whole machine?

sebyfrancis commented 11 years ago

Graylog2 and ES are running on a single server with 4GB of RAM.

kroepke commented 11 years ago

Unless you have very little data, that will be far too little memory.

sebyfrancis commented 11 years ago

How much do you suggest? I see that we have around 1,500 messages per second during the daytime (the GUI shows that).

kroepke commented 11 years ago

I would start with giving Elasticsearch 6GB and graylog2 2-4GB. The machine should have around 4-6GB free for filesystem caches. These numbers depend highly on how much querying you do (graylog2 does faceting, which leads to higher memory requirements for Elasticsearch). We should add some monitoring of heap memory usage for the next graylog2 release. I'll add a feature request and link to it.
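As a rough sketch of that split (file locations and script names vary by install; the graylog2 heap flags just go on whatever java command line your init script builds):

    # Elasticsearch, e.g. in /etc/default/elasticsearch:
    ES_HEAP_SIZE=6g
    # graylog2-server: add heap flags where your startup script invokes java, e.g.
    java -Xms2g -Xmx4g -jar graylog2-server.jar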

sebyfrancis commented 11 years ago

I've increased the RAM from 4GB to 8GB and will monitor whether it helps. I'm trying to run all of the graylog2 applications on a single machine.

sebyfrancis commented 11 years ago

This looks good after we added "ES_HEAP_SIZE" to the ES wrapper configuration file.
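For reference, it was roughly this on our side (the exact file depends on whether you run the service wrapper or a packaged init script, so treat it as an example):

    # elasticsearch-servicewrapper: <es-home>/bin/service/elasticsearch.conf
    set.default.ES_HEAP_SIZE=4096    # value in MB
    # packaged installs use /etc/default/elasticsearch (or /etc/sysconfig/elasticsearch) instead:
    ES_HEAP_SIZE=4g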

MatthewSimon commented 8 years ago

I know I'm being a noob, but can you please show me what yours looks like? Where is the wrapper, and what changes do I make?

Please help

joschi commented 8 years ago

@MatthewSimon Please post to our mailing list or join #graylog on Freenode to discuss your problem.