Could you please post the complete Graylog server logs? The first occurrence of the OutOfMemoryError would be interesting.
2015-03-11T22:23:11.805+01:00 ERROR [NettyContainer] Uncaught exception in transport layer. This is likely a bug, closing channel.
java.lang.OutOfMemoryError: Java heap space
2015-03-11T22:23:07.859+01:00 WARN [AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
2015-03-11T22:20:27.725+01:00 WARN [AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
2015-03-11T22:18:30.212+01:00 WARN [AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
2015-03-11T22:18:27.225+01:00 ERROR [NettyContainer] Uncaught exception in transport layer. This is likely a bug, closing channel.
MultiException stack 1 of 3
java.lang.OutOfMemoryError: Java heap space
MultiException stack 2 of 3
java.lang.IllegalArgumentException: While attempting to resolve the dependencies of org.graylog2.rest.resources.system.ClusterResource errors were found
at org.jvnet.hk2.internal.ClazzCreator.resolveAllDependencies(ClazzCreator.java:249)
at org.jvnet.hk2.internal.ClazzCreator.create(ClazzCreator.java:360)
at org.jvnet.hk2.internal.SystemDescriptor.create(SystemDescriptor.java:471)
at org.glassfish.jersey.process.internal.RequestScope.findOrCreate(RequestScope.java:160)
at org.jvnet.hk2.internal.Utilities.createService(Utilities.java:2270)
at org.jvnet.hk2.internal.ServiceLocatorImpl.internalGetService(ServiceLocatorImpl.java:687)
at org.jvnet.hk2.internal.ServiceLocatorImpl.getService(ServiceLocatorImpl.java:652)
at org.glassfish.jersey.internal.inject.Injections.getOrCreate(Injections.java:169)
at org.glassfish.jersey.server.model.MethodHandler$ClassBasedMethodHandler.getInstance(MethodHandler.java:185)
at org.glassfish.jersey.server.internal.routing.PushMethodHandlerRouter.apply(PushMethodHandlerRouter.java:74)
at org.glassfish.jersey.server.internal.routing.RoutingStage._apply(RoutingStage.java:112)
at org.glassfish.jersey.server.internal.routing.RoutingStage._apply(RoutingStage.java:116)
at org.glassfish.jersey.server.internal.routing.RoutingStage._apply(RoutingStage.java:116)
at org.glassfish.jersey.server.internal.routing.RoutingStage._apply(RoutingStage.java:116)
at org.glassfish.jersey.server.internal.routing.RoutingStage._apply(RoutingStage.java:116)
at org.glassfish.jersey.server.internal.routing.RoutingStage.apply(RoutingStage.java:94)
at org.glassfish.jersey.server.internal.routing.RoutingStage.apply(RoutingStage.java:63)
at org.glassfish.jersey.process.internal.Stages.process(Stages.java:197)
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:263)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:297)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:254)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1030)
at org.graylog2.jersey.container.netty.NettyContainer.messageReceived(NettyContainer.java:356)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43)
at org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67)
at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
at org.jboss.netty.handler.execution.MemoryAwareThreadPoolExecutor$MemoryAwareRunnable.run(MemoryAwareThreadPoolExecutor.java:622)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
MultiException stack 3 of 3
java.lang.IllegalStateException: Unable to perform operation: resolve on org.graylog2.rest.resources.system.ClusterResource
at org.jvnet.hk2.internal.ClazzCreator.create(ClazzCreator.java:389)
at org.jvnet.hk2.internal.SystemDescriptor.create(SystemDescriptor.java:471)
at org.glassfish.jersey.process.internal.RequestScope.findOrCreate(RequestScope.java:160)
at org.jvnet.hk2.internal.Utilities.createService(Utilities.java:2270)
at org.jvnet.hk2.internal.ServiceLocatorImpl.internalGetService(ServiceLocatorImpl.java:687)
at org.jvnet.hk2.internal.ServiceLocatorImpl.getService(ServiceLocatorImpl.java:652)
at org.glassfish.jersey.internal.inject.Injections.getOrCreate(Injections.java:169)
at org.glassfish.jersey.server.model.MethodHandler$ClassBasedMethodHandler.getInstance(MethodHandler.java:185)
at org.glassfish.jersey.server.internal.routing.PushMethodHandlerRouter.apply(PushMethodHandlerRouter.java:74)
at org.glassfish.jersey.server.internal.routing.RoutingStage._apply(RoutingStage.java:112)
at org.glassfish.jersey.server.internal.routing.RoutingStage._apply(RoutingStage.java:116)
at org.glassfish.jersey.server.internal.routing.RoutingStage._apply(RoutingStage.java:116)
at org.glassfish.jersey.server.internal.routing.RoutingStage._apply(RoutingStage.java:116)
at org.glassfish.jersey.server.internal.routing.RoutingStage._apply(RoutingStage.java:116)
at org.glassfish.jersey.server.internal.routing.RoutingStage.apply(RoutingStage.java:94)
at org.glassfish.jersey.server.internal.routing.RoutingStage.apply(RoutingStage.java:63)
at org.glassfish.jersey.process.internal.Stages.process(Stages.java:197)
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:263)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:297)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:254)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1030)
at org.graylog2.jersey.container.netty.NettyContainer.messageReceived(NettyContainer.java:356)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43)
at org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67)
at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
at org.jboss.netty.handler.execution.MemoryAwareThreadPoolExecutor$MemoryAwareRunnable.run(MemoryAwareThreadPoolExecutor.java:622)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Are there any error messages caused by OutOfMemoryError before that? The problem with OOM errors is that they tend to repeat, and only the first occurrence really gives useful information.
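One way to capture more detail at the first failure (not something discussed in this thread, so treat it as an aside) is to let the JVM write a heap dump on the first OutOfMemoryError and inspect it afterwards, e.g. with the Eclipse Memory Analyzer:

    -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/graylog-server/

The dump path is only an example; point it at a filesystem with enough free space for a file roughly the size of the heap.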
OK, I'll clear the log and restart the server; as soon as it crashes I will send the full log. P.S. How can I send a zipped log here? I see I can only attach images.
@andreaconsadori You'll have to upload the archive somewhere else and add a link here.
maybe this is the issue?
2015-03-12T16:28:36.824+01:00 ERROR [NettyContainer] Uncaught exception in transport layer. This is likely a bug, closing channel.
java.lang.OutOfMemoryError: Java heap space
2015-03-12T16:26:44.656+01:00 WARN [AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
2015-03-12T16:25:29.908+01:00 INFO [AbstractValidatingSessionManager] Validating all active sessions...
2015-03-12T16:25:26.209+01:00 INFO [AbstractValidatingSessionManager] Validating all active sessions...
2015-03-12T16:28:40.154+01:00 WARN [netty] [graylog2-server] Message not fully read (response) for [70612] handler org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$6@ef50b9, error [false], resetting
2015-03-12T16:28:40.153+01:00 ERROR [NettyContainer] Uncaught exception in transport layer. This is likely a bug, closing channel.
java.lang.OutOfMemoryError: Java heap space
2015-03-12T16:28:40.597+01:00 ERROR [ServiceManager] Service JournalReader [FAILED] has failed in the RUNNING state.
java.lang.OutOfMemoryError: Java heap space
2015-03-12T16:28:40.596+01:00 WARN [netty] [graylog2-server] exception caught on transport layer [[id: 0x4455b758, /192.168.0.95:49720 => /192.168.0.95:9350]], closing connection
java.io.IOException: Pipe interrotta (Broken pipe)
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:51)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)
at org.elasticsearch.common.netty.channel.socket.nio.SocketSendBufferPool$UnpooledSendBuffer.transferTo(SocketSendBufferPool.java:203)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.write0(AbstractNioWorker.java:201)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.writeFromUserCode(AbstractNioWorker.java:146)
at org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:99)
at org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:36)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:574)
at org.elasticsearch.common.netty.channel.Channels.write(Channels.java:704)
at org.elasticsearch.common.netty.channel.Channels.write(Channels.java:671)
at org.elasticsearch.common.netty.channel.AbstractChannel.write(AbstractChannel.java:248)
at org.elasticsearch.transport.netty.NettyTransportChannel.sendResponse(NettyTransportChannel.java:97)
at org.elasticsearch.transport.netty.NettyTransportChannel.sendResponse(NettyTransportChannel.java:68)
at org.elasticsearch.discovery.zen.fd.NodesFaultDetection$PingRequestHandler.messageReceived(NodesFaultDetection.java:299)
at org.elasticsearch.discovery.zen.fd.NodesFaultDetection$PingRequestHandler.messageReceived(NodesFaultDetection.java:283)
at org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:217)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:111)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
What does your Graylog server.conf look like? Apparently your system is already almost out of memory when it starts; usually Graylog should use around 300 MB of memory, not almost 1 GB.
This could point to a misconfiguration where the buffers are way too large.
root@Graylog:~# cat /etc/graylog/server/server.conf | grep buffer
# that every outputbuffer processor manages its own batch and performs its own batch write calls.
# ("outputbuffer_processors" variable)
# for this time period is less than output_batch_size * outputbuffer_processors.
# Raise this number if your buffers are filling up.
processbuffer_processors = 5
outputbuffer_processors = 3
#outputbuffer_processor_keep_alive_time = 5000
#outputbuffer_processor_threads_core_pool_size = 3
#outputbuffer_processor_threads_max_pool_size = 30
# UDP receive buffer size for all message inputs (e. g. SyslogUDPInput).
#udp_recvbuffer_sizes = 1048576
# Wait strategy describing how buffer processors wait on a cursor sequence. (default: sleeping)
# Size of internal ring buffers. Raise this if raising outputbuffer_processors does not help anymore.
# For optimum performance your LogMessage objects in the ring buffer should fit in your CPU L3 cache.
# Start server with --statistics flag to see buffer utilization.
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
I increased inputbuffer_ring_size to 65536 because the default was too small (I have firewall connection logs that fill up the default value).
What about the other ring sizes?
ring_size = 65536
inputbuffer_ring_size = 65536
My mistake; previously I changed ring_size to 262144 because in Graylog, under the node details, I found the buffer was always full.
OK, because this directly impacts the number of messages that can be held in memory.
Do you have extremely large messages? If both ring sizes are set to 65,536, you will have a maximum of 3 × 65,536 messages in memory (if all buffers fill up). That should usually not be a problem with 1 GB of memory. The rest of the system should take up a lot less memory.
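As a rough back-of-the-envelope check (the per-message size below is an assumed figure, not something measured in this thread): three ring buffers of 65,536 slots hold at most 3 × 65,536 = 196,608 messages, which at an assumed average of 2 KB per parsed message is roughly 384 MB. If ring_size is raised to 262,144 while inputbuffer_ring_size stays at 65,536, and the process and output buffers both use ring_size, the worst case grows to 2 × 262,144 + 65,536 = 589,824 messages, on the order of 1.1 GB at the same message size, which no longer fits in a 1 GB heap.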
I noticed you put your -Xmx4g and -Xms2g parameters in the wrong place, at the end of the command line (from console.log). It's best to replace the existing memory flags in the ctl script instead.
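For reference, a minimal sketch of how the heap could be set via the Debian package defaults file (the variable name assumes the stock /etc/default/graylog-server; check your own file). JVM flags appended after the jar on the command line are treated as application arguments rather than JVM options, which would explain why they did not take effect there:

    # /etc/default/graylog-server
    GRAYLOG_SERVER_JAVA_OPTS="-Xms2g -Xmx4g"

Restart graylog-server afterwards so the new options take effect.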
The rest of the errors come from the fact that garbage collection takes longer and longer as the system runs lower on memory. But what worries me is that your MongoDB connection seems to time out. That should not happen. Is the rest of your setup fast enough?
Also check your ulimit for the process:
2015-03-12T14:04:38.361+01:00 WARN [NettyTransport] receiveBufferSize (SO_RCVBUF) for [id: 0x03298a20, /0:0:0:0:0:0:0:0:1514] should be 1048576 but is 131071.
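That warning means the 1 MB UDP receive buffer requested by the input is being clamped by the kernel; on Linux the cap is net.core.rmem_max rather than a ulimit. A sketch of checking and raising it (the values are examples):

    # per-process limits for the graylog-server JVM (replace <pid>)
    cat /proc/<pid>/limits
    # kernel cap on socket receive buffers (governs the SO_RCVBUF warning)
    sysctl net.core.rmem_max
    # allow receive buffers up to 1 MB
    sudo sysctl -w net.core.rmem_max=1048576

To make the sysctl change persistent, add net.core.rmem_max = 1048576 to /etc/sysctl.conf.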
Great, it works now. Maybe add a warning note in the config file for non-Java users. :)
Hi there
I may have a similar problem. My Graylog suddenly stopped processing messages. Input is OK, but the system does no further processing. In server.log I see the following error:
2015-08-04T12:25:36.543+02:00 ERROR [ServiceManager] Service JournalReader [FAILED] has failed in the RUNNING state.
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
at kafka.log.FileMessageSet$$anon$1.makeNext(FileMessageSet.scala:188)
at kafka.log.FileMessageSet$$anon$1.makeNext(FileMessageSet.scala:165)
at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
at org.graylog2.shared.journal.KafkaJournal.read(KafkaJournal.java:455)
at org.graylog2.shared.journal.KafkaJournal.read(KafkaJournal.java:420)
at org.graylog2.shared.journal.JournalReader.run(JournalReader.java:136)
at com.google.common.util.concurrent.AbstractExecutionThreadService$1$2.run(AbstractExecutionThreadService.java:60)
at com.google.common.util.concurrent.Callables$3.run(Callables.java:95)
at java.lang.Thread.run(Thread.java:745)
I cannot get the server working again; restarting or rebooting has no effect. Can somebody point me in the right direction?
Thank you in advance for any help. Best regards, Stefan
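A possible way forward (a sketch, not a confirmed fix from this thread): the stack trace shows the OutOfMemoryError while the journal reader allocates a buffer for an on-disk journal segment, so either give the JVM more heap (see the GRAYLOG_SERVER_JAVA_OPTS sketch above) or, if losing the queued messages is acceptable, stop the server and move the journal directory aside before starting it again. The path assumes the Debian package default for message_journal_dir in server.conf:

    sudo service graylog-server stop
    sudo mv /var/lib/graylog-server/journal /var/lib/graylog-server/journal.bak
    sudo service graylog-server start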
My Graylog 1.0.0 Debian install stops working after roughly 7 hours.
In the graylog-server log I found: "2015-03-11T22:33:28.068+01:00 WARN [AbstractNioSelector] Unexpected exception in the selector loop. java.lang.OutOfMemoryError: Java heap space"
This is my /etc/default/graylog-server
and these are my Elasticsearch default settings