Graylog2 / graylog2-server

Free and open log management
https://www.graylog.org

Graylog memory sizing docs #15753

Open pasztorl opened 1 year ago

pasztorl commented 1 year ago

What?

I haven't found any information about how to plan Graylog memory sizing.

Why?

I've installed Graylog with default parameters and, with no running inputs, it eats 3 GB+ of memory. When I check System/Nodes it reports around 500 MB of JVM heap, but the "top" command reports 600 MB+ resident and 4 GB+ virtual. When I set the JVM opts -Xms500m -Xmx500m the result is the same. Is this normal? How does it scale when input traffic is added?
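
For what it's worth, part of the gap between the configured heap and what "top" reports can be inspected from inside the JVM itself: metaspace, the JIT code cache, direct byte buffers, and thread stacks all live outside the -Xmx heap. A minimal, Graylog-independent sketch using the standard java.lang.management API (this is not Graylog's own code, just an illustration):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Prints heap vs. non-heap usage plus direct/mapped buffer pools,
// i.e. memory the JVM uses that is NOT covered by -Xms/-Xmx.
public class JvmMemoryBreakdown {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();

        System.out.printf("heap      used=%dM committed=%dM max=%dM%n",
                mb(heap.getUsed()), mb(heap.getCommitted()), mb(heap.getMax()));
        System.out.printf("non-heap  used=%dM committed=%dM%n",
                mb(nonHeap.getUsed()), mb(nonHeap.getCommitted()));

        // Direct and mapped buffer pools (e.g. used for network I/O) are off-heap as well.
        for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("buffers   %-8s count=%d used=%dM%n",
                    pool.getName(), pool.getCount(), mb(pool.getMemoryUsed()));
        }
    }

    private static long mb(long bytes) {
        return bytes / (1024 * 1024);
    }
}
```

On top of that, the resident size reported by top also includes thread stacks and the process's native allocations, so it normally sits well above the heap cap, and the multi-gigabyte virtual size is mostly reserved address space rather than resident memory.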

If I set the Kubernetes resource limit to 4 GB of RAM, the Graylog process gets OOM-killed.
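
For the Kubernetes case, it may be worth checking what the JVM actually sees inside the container. Since JDK 10 the JVM is container-aware, and flags like -XX:MaxRAMPercentage only take effect when -Xmx is not set explicitly. A hypothetical quick check (not something Graylog ships), run with the same JVM options as the Graylog container:

```java
// Quick check of the limits the JVM itself sees inside a container,
// to compare against the pod's memory limit (illustrative only).
public class ContainerLimits {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.printf("maxMemory (heap cap)    = %d MB%n", rt.maxMemory() / (1024 * 1024));
        System.out.printf("totalMemory (committed) = %d MB%n", rt.totalMemory() / (1024 * 1024));
        System.out.printf("availableProcessors     = %d%n", rt.availableProcessors());
    }
}
```

If the heap cap already sits close to the pod limit, there is no headroom left for metaspace, direct buffers, and thread stacks, which is a common way for a Java process to be OOM-killed even though the heap itself never fills up.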

On the official FAQ I found this: "Isn't Java slow? Does it need a lot of memory?

This is a concern that we hear from time to time. We understand Java has a bad reputation from slow and laggy desktop/GUI applications that eat a lot of memory. However, we are usually able to prove this assumption wrong. Well written Java code for server systems is very efficient and does not need a lot of memory resources. Give it a try, you might be surprised!"

I think this section is not very useful without numbers and/or more information about memory sizing. I also tested 5.0 some time ago; the difference was that that version also reported long GC times (even with no traffic received).

Your Environment

T100D commented 1 year ago

To me this is normal behavior: on Linux systems, memory is expected to be fully occupied by caching before swapping.

pasztorl commented 1 year ago

I understand how Linux systems do this. I'm just kindly asking for advice about Graylog memory requirements.

T100D commented 1 year ago

We are running it on a 4 GB / 4-core server with default settings. From what I have read, you can push that up to 8 GB / 8 cores before moving to a dual-node configuration, if log processing requires it. The Java heap can be set to 2 GB/2 GB; I have never read that this is a specific requirement, but rather something that may be needed with lots of processed messages, heavy use of pipelines, and the Elasticsearch backend.

There is a system memory setting to avoid using temporary disk memory (swap). We have never turned off the OOM killer and do not run into these exceptions; we are running the latest 4.x version. You could turn off the OOM feature, or set it to be less aggressive.

Version 5 ships with its own bundled Java runtime.

pasztorl commented 1 year ago

Disabling the resource limits is not an option for me, because without them I can't calculate how many workloads fit on the machines running Graylog. Can you point to the place in the docs for the setting you mentioned? Can you also share which JVM options you are using?

I'm still not convinced that it should take more than 4 GB of memory to run Graylog without receiving any traffic. In addition, it would be very useful to know how much memory to budget relative to the volume of log messages coming in.