LMD fetches Livestatus data from one or multiple sources and serves it out again via the Livestatus protocol.
So basically this is a "Livestatus In / Livestatus Out" daemon. Its main purpose is to provide the backend handling for the Thruk Monitoring GUI as a fast, natively compiled daemon, but it should work for anything that requires Livestatus.
Log table requests and commands are just passed through to the actual backends.
You will need Go to compile lmd.
After starting, LMD fetches all tables via the Livestatus API from all configured remote backends. It then periodically polls all dynamic parts of the objects, such as host status, plugin output or downtime status.
When there are no incoming connections, LMD switches into idle mode with a slower poll interval. As soon as the first client requests data, LMD spins up, runs a synchronous update (with a small timeout) and switches back to the normal poll interval.
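The intervals involved are configurable in lmd.ini. The following is only a hedged sketch; the option names (Updateinterval, IdleTimeout, IdleInterval) and the values shown are assumptions based on lmd.ini.example and may differ in your version:

# hedged sketch, option names and values assumed from lmd.ini.example
# Updateinterval - normal poll interval in seconds
# IdleTimeout    - seconds without client requests before switching to idle mode
# IdleInterval   - slower poll interval in seconds used while idle
Updateinterval = 5
IdleTimeout    = 120
IdleInterval   = 1800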
If you want to use LMD with Thruk within OMD, see the omd/lmd page for a quick start. The OMD-Labs Edition is already prepared to use LMD.
%> go install github.com/sni/lmd/v2/lmd@latest
or
%> git clone https://github.com/sni/lmd
%> cd lmd
%> make
Quick start with command line parameters:
lmd -o listen=:3333 -o connection=test,remote_host:6557
Or copy lmd.ini.example to lmd.ini and adjust it to your needs. Then run lmd.
You can specify the path to your config file with --config:
lmd --config=/etc/lmd/lmd.ini
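However LMD was started, you can check that it answers by sending a plain Livestatus query to the configured listener. The example below assumes the tcp listener on port 3333 from the quick start above; nc is standard netcat and the Columns header is regular Livestatus syntax:

# send a simple query to the tcp listener (port 3333 assumed from the quick start)
echo -e 'GET hosts\nColumns: name state\n\n' | nc localhost 3333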
The configuration is explained in detail in the lmd.ini.example.
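As a minimal sketch, an lmd.ini combining the Listen option and a connection block (both described below) could look like this; all addresses are placeholders:

# minimal configuration sketch, addresses are placeholders
Listen = ["127.0.0.1:3333"]

[[Connections]]
name   = "Monitoring Site A"
id     = "id1"
source = ["192.168.33.10:6557"]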
There are several different connection types.
Remote Livestatus connections via tcp can be defined as:
[[Connections]]
name = "Monitoring Site A"
id = "id1"
source = ["192.168.33.10:6557"]
If the source is a cluster, you can specify multiple addresses like this:
source = ["192.168.33.10:6557", "192.168.33.20:6557"]
Local Livestatus connections via unix socket can be defined as:
[[Connections]]
name = "Monitoring Site A"
id = "id1"
source = ["/var/tmp/nagios/live.sock"]
It is possible to operate LMD in cluster mode, which means multiple LMD instances connect to each other and share the work. The backend connections are split up and divided among the cluster nodes. Incoming requests are forwarded and the results merged.
In order to set up cluster operation, you need to add an http(s) listener and a list of nodes. All nodes should share the same configuration file.
Listen = ["/var/tmp/lmd.sock", "http://*:8080"]
Nodes = ["http://10.0.0.1:8080", "http://10.0.0.2:8080"]
There are some new/changed Livestatus query headers:
The default OutputFormat is wrapped_json, but json is also supported. The wrapped_json format puts the normal json result into a hash with some additional meta data.
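As a rough, hedged sketch, a wrapped_json response might look like the following; the key names (data, total_count) are assumptions rather than taken from this document, and further meta data keys may be present:

{
  "data": [
    ["host_a", 0],
    ["host_b", 1]
  ],
  "total_count": 2
}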
The only ResponseHeader supported right now is fixed16.
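A query using these output options could look like this; the services table and the Columns header are standard Livestatus:

GET services
Columns: host_name description state
OutputFormat: json
ResponseHeader: fixed16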
There is a new Backends header which can be set to a space separated list of backends. If none is specified, all backends are returned.
ex.:
Backends: id1 id2
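In a full query the header is simply added alongside the other headers, for example:

GET hosts
Columns: name state
Backends: id1 id2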
The offset header can be used to retrieve only a subset of the complete result set. It is best used together with the limit and sort headers.
Offset: 100
Limit: 10
This will return entries 100-109 from the overall result set.
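A complete paged query could therefore combine Offset and Limit with the Sort header described next, for example:

GET hosts
Sort: name asc
Offset: 100
Limit: 10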
The sort header can be used to sort the results by one or more columns. Multiple sort headers can be used.
Sort: <column name> <asc/desc>
Sorting by custom variables is possible like this:
Sort: custom_variables <name> <asc/desc>
ex.:
GET hosts
Sort: state asc
Sort: name desc
Sort: custom_variables WORKER asc
The improved performance comes at a price, of course. The following numbers should give you a rough idea of what to expect: an example installation with 200,000 services at a 3 second update interval uses around 1.5GB of memory and 200kByte/s of bandwidth. That is an average of roughly 7kB of memory and 1Byte/s of bandwidth per service.
However, your mileage may vary; these numbers depend heavily on the size of the plugin output and the check interval of your services. Use the Prometheus exporter to create graphs and see how your environment differs.
By the way, increasing the update interval to 30 seconds does not reduce the bandwidth used; you then transfer updates for many services every 30 seconds instead of small packages every 3 seconds.
LMD can be started with the Go tcp debug profiler enabled:
lmd -config lmd.ini -debug-profiler localhost:6060
You can then fetch a memory heap profile with:
curl http://localhost:6060/debug/pprof/heap --output heap.tar.gz
or run the profiler directly from go:
go tool pprof -web http://localhost:6060/debug/pprof/heap
Accordingly, a cpu profile can be created with:
curl http://localhost:6060/debug/pprof/profile --output cpu.tar.gz
Those profiles can then be further processed with Go's pprof tool:
go tool pprof cpu.tar.gz
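Inside the interactive pprof session, the standard pprof commands top10 (list the biggest consumers) and web (render a call graph in the browser, requires graphviz) are a good starting point:

(pprof) top10
(pprof) web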
Some ideas may or may not be implemented in the future