Icinga / icinga2

The core of our monitoring platform with a powerful configuration language and REST API.
https://icinga.com/docs/icinga2/latest
GNU General Public License v2.0

Load explodes after every reload/restart of Icinga 2 #5465

Open · pgress opened this issue 6 years ago

pgress commented 6 years ago

We run a two-node master cluster that gets very high load whenever the core is reloaded. The two nodes have 8 vCPUs and 16 GB RAM each, so raw power shouldn't be the problem at all. We currently have about 700 hosts with about 7000 services.

We have already debugged this problem a little and found that no checks are executed until 5 to 6 minutes have passed. After that, all checks start together, which results in the high load. When we use a single node instead of a cluster, we don't have the problem: the checks start immediately after the reload.

Object 'host002.localdomain' of type 'Endpoint': % declared in '/etc/icinga2/zones.conf', lines 5:1-5:48

Object 'master' of type 'Zone': % declared in '/etc/icinga2/zones.conf', lines 9:1-9:20
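For reference, a two-endpoint master zone along these lines is declared in zones.conf roughly as follows (a minimal sketch; the first endpoint name is a placeholder, only 'host002.localdomain' and the 'master' zone appear in the output above):

object Endpoint "host001.localdomain" {
  host = "host001.localdomain"
}

object Endpoint "host002.localdomain" {
  host = "host002.localdomain"
}

object Zone "master" {
  endpoints = [ "host001.localdomain", "host002.localdomain" ]
}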

dnsmichi commented 6 years ago

Do you have a specific performance analysis, including graphs of the work queues, enabled features, checks, etc.? It is hard to tell what exactly could cause this without more insight.

https://www.icinga.com/docs/icinga2/latest/doc/15-troubleshooting/#analyze-your-environment

pgress commented 6 years ago

Here is a snapshot of some of our graphs: https://snapshot.raintank.io/dashboard/snapshot/XX5gtmp2yf4nnIXJ1oIpWt1FA263Um2S?orgId=2 It shows the correlation between load and uptime. Additionally, all perfdata from the icinga check is in the third panel. Watching it live, I could see that all CPU cores were maxed out. Memory usage was growing but never filled up, and there was only minor I/O on the disk. I've uploaded a part of the Icinga 2 log, which shows that nothing happened for several minutes: icinga2-log.txt

dnsmichi commented 6 years ago

Thanks for the graphs. The last one uses the icinga check, which provides additional metrics about work queues in 2.7 - do you happen to have some stats/graphs on that too?

Logs look fine, nothing spectacular. This change in CPU load could come from the recent work queue additions for all features, e.g. Graphite.

(for future reference - I modified the URL with render/ and added the screenshot here)
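To collect those work queue metrics, the built-in icinga check can be applied to the master host; a minimal sketch, assuming a hypothetical host object name:

object Service "icinga" {
  /* "master001.localdomain" is a placeholder for the master's Host object */
  host_name = "master001.localdomain"
  check_command = "icinga"
}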


dnsmichi commented 6 years ago

Does this happen with 2.8 again? There were certain improvements for the cluster in this release.

Cheers, Michael

uffsalot commented 6 years ago

In my setup yes.

Single Master:

Our instance contains 512 hosts and 5368 services. 5 of them are Icinga 2 clients; the remaining hosts are checked via NRPE.

[Grafana screenshot: Icinga 2 load]

Simkimdm commented 6 years ago

It happens in our setup, too. A year ago it was so bad that the master cluster could not catch up anymore, so we had to expand our cluster with some satellites. Most of our checks are SNMP-based, e.g. check_nwc_health. We have 7959 hosts and 16156 services. I tried to flatten the peaks by limiting concurrent_checks = 256, but this had no effect on version 2.8.0.
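For context, that limit is typically configured either via the global MaxConcurrentChecks constant or via the older concurrent_checks attribute on the checker feature; a minimal sketch, assuming the usual file locations (use one or the other):

/* constants.conf: global cap on how many checks run in parallel */
const MaxConcurrentChecks = 256

/* features-enabled/checker.conf: the older per-feature setting referenced above */
object CheckerComponent "checker" {
  concurrent_checks = 256
}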

dnsmichi commented 6 years ago

How many of these services actually invoke check_nwc_health? How's the average execution time and latency for these checks?

Thomas-Gelf commented 6 years ago

@dnsmichi: this is a real issue; the root cause is our scheduling code in Icinga 2. Have a look at how Icinga 1.x tried to avoid such problems. It was far from perfect, and the 1.x scheduling code is a mess - but its basics were well thought out.

This issue is a combination of checks being rescheduled at restart (that's a bug, it shouldn't happen) and a rescheduling/splaying logic where every checkable (rather than a central scheduler) decides on its own when to run the next check.

There is a related issue in our Icinga Support issue tracker. You'll find it together with a debug log, hints on how to filter the log, and a link pointing to the place in our code responsible for the "checks are not being executed for a long time" effect as explained by @pgress. You may also want to talk to @gunnarbeutner; he should be aware of this issue - we discussed it quite some time ago.

The "reschedule everything when starting" issue should be easy to fix. Most of the heavy spikes shown by @uffsalot would then no longer appear. As you can see, he has an environment with not too many monitored objects and hardware that is not the latest and greatest but absolutely sufficient. Especially given that most of his checks are NRPE-based, he should never experience what his graphs are showing.

Cheers, Thomas

NB: Sooner or later we should consider implementing a scheduler logic that takes the number of active checks, their intervals and their average execution times into account. It should try to distribute new checks fairly while respecting and preserving the current schedule for existing ones.

paladox commented 6 years ago

We were also seeing the same thing when we tried to upgrade to Icinga 2 at Miraheze, using 1 core and 1 GB of RAM.

The CPU usage shot up after the checks started running, causing OOM errors and high CPU load.

[screenshot]

dnsmichi commented 6 years ago

There are some changes to this in the current git master and the snapshot packages, which will go into 2.9. This is scheduled for June.

paladox commented 6 years ago

@dnsmichi oh to reduce load?

Which changes? :)

dnsmichi commented 6 years ago

To influence check scheduling upon reload. Snapshot packages should be available for testing already.

widhalmt commented 6 years ago

I have the same problem in a customer's setup. I'll try to get some tests / feedback from them as well.

paladox commented 6 years ago

I'm guessing this https://github.com/Icinga/icinga2/commit/1a9c1591c0c13603b1dee6cfb514e6ec7c309450 is the fix.

Crunsher commented 6 years ago

1a9c159 is not about this issue, although it might redistribute the load of early checks.

NB: Sooner or later we should consider implementing a scheduler logic that takes the number of active checks, their intervals and their average execution times into account. It should try to distribute new checks fairly while respecting and preserving the current schedule for existing ones.

The problem here is that we don't know much about the checks and can't make many guesses based on that information. A high execution time does not mean high load, a long check interval does not necessarily mean the check is heavy, and the number of active checks tells us nothing concrete either. The only thing that comes to my mind is randomizing the execution of checks with similar check intervals better.

dnsmichi commented 6 years ago

I'd suggest testing the snapshot packages and reporting back whether the problem persists or not.

Crunsher commented 5 years ago

Anycast ping: is anyone still experiencing this issue with a recent Icinga 2 version?

dnsmichi commented 5 years ago

Might also be related to a setup where the master actively connects to all clients, in #6517.
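Whether the master actively connects to an agent is steered by the host attribute on the Endpoint objects; a minimal sketch, with a hypothetical agent name, of an endpoint this node will not actively connect to:

object Endpoint "agent1.localdomain" {
  /* no 'host' attribute set: this node waits for the agent to connect instead */
}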

paladox commented 5 years ago

We have experienced less load since using 2.9. We made sure only one check is run at a time.

(This is with nagios-nrpe-server (check_nrpe); we don't use the Icinga 2 client.)

MarcusCaepio commented 5 years ago

I can also see this with the very latest version, 2.9.2. After a config reload, the satellites get a very high load. [screenshot]

dnsmichi commented 5 years ago

Any chance you'll try the snapshot packages?

MarcusCaepio commented 5 years ago

Unfortunately not right at the moment, as I don't have an identical dev cluster right now. But if I can help with any further info (total checks, plugins, etc.), I would love to do it :)

dnsmichi commented 5 years ago

I believe that the load is caused by the reconnect timer, or many incoming connections with many separate threads being spawned. A full analysis is available in #6517.

MarcusCaepio commented 5 years ago

Still present in 2.10. Master on reload: [screenshot]

Satellites on reload: [screenshots]

dnsmichi commented 5 years ago

OK, then @Thomas-Gelf was right about the scheduler. I was just guessing from the recent changes, and it is good to know that the possible areas have been narrowed down with a recent version, thanks.

MarcusCaepio commented 5 years ago

Will this issue get a priority for the next release?

lippserd commented 5 years ago

ref/NC/601009

NeverUsedID commented 4 years ago

I can confirm this in r2.11.2-1.

Master 1: [screenshot]

[screenshot]

Master 2 (no idea why the first reload had no impact on master 2): [screenshot]

It will stack up if I reload in very short intervals.

Some satellites (pretty sure they haven't had a config change): [screenshot]

samitks commented 4 years ago

In our setup we are experiencing the same issue. From our observations, the load goes up to double the number of CPU cores we have (for 8 cores, it goes to 16). Not sure if this is the case for everyone.

Is there any ETA for when this will get fixed? If there is any way to avoid this issue temporarily, please let me know.

dnsmichi commented 4 years ago

There's currently no ETA since all resources are bound to the JSON-RPC crash debugging, which is holding up the 2.12 and IcingaDB releases.

samitks commented 4 years ago

Thanks for the response, @dnsmichi. We debugged the satellite issue in our infrastructure further and found that it was caused by the replay logs. Setting log_duration to 0 on all agents and to 1h on masters and satellites has fixed this for us. Now there are no load issues on the satellites and all checks are being executed perfectly. We have also changed the connection direction from satellites -> agents to agents -> satellites. (However, I don't think this is related to the load issue on reload.)

Can you please let us know whether these changes will impact anything in the overall Icinga monitoring, or whether everything should remain highly available, reliable and stable?

dnsmichi commented 4 years ago

There are known problems with the log replay, especially with agents. Hence the evaluation issue #7752 ... Reducing this to below 1h for satellites/masters with expected downtimes, and to 0 for command endpoint agents (where it is not needed at all), will likely be a good workaround for the future.
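A minimal sketch of that workaround, with placeholder endpoint names (the exact values depend on the expected downtimes in your setup):

/* Endpoint for a command endpoint agent: no replay log needed */
object Endpoint "agent1.localdomain" {
  log_duration = 0
}

/* Endpoint between master and satellite: keep a short replay window */
object Endpoint "satellite1.localdomain" {
  log_duration = 1h
}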

phibos commented 3 years ago

Is there anything we can do to help resolve this issue for the next release?

MarcusCaepio commented 3 years ago

I don't have this issue anymore with the latest Icinga 2 version.

pluhin commented 3 years ago

Hi all,

I still have this issue. I have two satellites with 8 CPUs and 16 GB RAM; in the zone there are ~250 hosts and ~4000 services.

[screenshot]

# icinga2 --version
icinga2 - The Icinga 2 network monitoring daemon (version: 2.12.0-1)

Copyright (c) 2012-2021 Icinga GmbH (https://icinga.com/)
License GPLv2+: GNU GPL version 2 or later <http://gnu.org/licenses/gpl2.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

System information:
  Platform: Red Hat Enterprise Linux Server
  Platform version: 7.8 (Maipo)
  Kernel: Linux
  Kernel version: 3.10.0-1127.18.2.el7.x86_64
  Architecture: x86_64

Build information:
  Compiler: GNU 4.8.5
  Build host: runner-hh8q3bz2-project-322-concurrent-0
  OpenSSL version: OpenSSL 1.0.2k-fips  26 Jan 2017

Application information:

General paths:
  Config directory: /etc/icinga2
  Data directory: /var/lib/icinga2
  Log directory: /var/log/icinga2
  Cache directory: /var/cache/icinga2
  Spool directory: /var/spool/icinga2
  Run directory: /run/icinga2

Old paths (deprecated):
  Installation root: /usr
  Sysconf directory: /etc
  Run directory (base): /run
  Local state directory: /var

Internal paths:
  Package data directory: /usr/share/icinga2
  State path: /var/lib/icinga2/icinga2.state
  Modified attributes path: /var/lib/icinga2/modified-attributes.conf
  Objects path: /var/cache/icinga2/icinga2.debug
  Vars path: /var/cache/icinga2/icinga2.vars
  PID path: /run/icinga2/icinga2.pid
pluhin commented 3 years ago

I found a lot of big log files in the api/log folder; after cleaning them up, the satellites cooled down.

phibos commented 3 years ago

Is there anything we can do to help resolve this issue for the next release?

We were able to fix this issue with the latest version and by changing some default config values.

Al2Klimov commented 2 years ago

Which ones?

phibos commented 2 years ago

Which ones?

Previously we had disabled the replay logs only on the command endpoints, but now we have also disabled the replay logs on the monitoring server for all command endpoint agents:

/* disable the replay log for this command endpoint agent */
object Endpoint "icinga2-agent1.localdomain" {
  log_duration = 0
}

Al2Klimov commented 2 years ago

Colleagues, don't we recommend doing exactly that?

N-o-X commented 2 years ago

Yes, we do. It's disabled in all our example configs using command endpoint agents in our distributed monitoring docs and we even have a dedicated section: https://icinga.com/docs/icinga-2/latest/doc/06-distributed-monitoring/#disable-log-duration-for-command-endpoints

Al2Klimov commented 2 years ago

@pgress Please try this.

Al2Klimov commented 1 year ago

Is there anyone else who also got this fixed via https://github.com/Icinga/icinga2/issues/5465#issuecomment-954650829 ?

pluhin commented 1 year ago

@Al2Klimov did you add log_duration = 0 to the agents or to the satellites?

julianbrost commented 1 year ago

You should set that on all Endpoint objects that represent a connection from or to an agent, see also https://icinga.com/docs/icinga-2/latest/doc/06-distributed-monitoring/#disable-log-duration-for-command-endpoints

pluhin commented 1 year ago

I've applied this solution; it seems it did not help for me, but that was maybe a year ago.

I had 2 HA masters and 2x4 HA satellites. I added this parameter for the satellites plus one big zone, and it began to work well, but after a few redeployments of the master config (Ansible removes /etc/icinga2 and recreates it from the repository), the problem came back for me.

pgress commented 1 year ago

Hey, I don't work for the company anymore, so I can't reproduce this issue anymore. I will therefore unassign myself from this issue.

pluhin commented 1 year ago

Let me try: I added log_duration = 0 to all agents and satellites in the master configuration.

pluhin commented 9 months ago

Hi all, several updates: