If you have any feedback regarding this monitoring/logging/alerting suite, any ideas for improvement, fixes, questions or comments, please feel free to contact me or do a PR!
Blog post on Medium with some more elaboration.
This is a secure out-of-the-box monitoring, logging and alerting suite for Docker-hosts and their containers, complete with dashboards to monitor and explore your host and container logs and metrics.
Monitoring: cAdvisor and node_exporter for collection, Prometheus for storage, Grafana for visualisation.
Logging: Filebeat for collection and log-collection and forwarding, Logstash for aggregation and processing, Elasticsearch as datastore/backend and Kibana as the frontend.
Alerting: elastalert as a drop-in replacement for Elastic's Watcher, for alerts triggered by certain container or host log events, and Prometheus's Alertmanager for alerts regarding metrics.
Security: The whole suite can be run in secure mode, which places jwilder's nginx reverse proxy (with JrCs' letsencrypt companion) in front of the suite. This reverse proxy then handles all traffic to and from the suite, forces HTTPS, fully automates initial SSL certificate issuance and renewal, provides basic auth for all dashboards and allows you to forgo any port forwarding from the suite containers to the host machine.
Of course you can then also use this nginx reverse proxy with the exact same mechanism to manage traffic to and from your other containers like applications, databases, api endpoints and what have you.
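For instance, a hypothetical application container can be put behind the same proxy by setting the environment variables that jwilder's nginx-proxy and the letsencrypt companion watch for (service name, image and hostnames below are placeholders):

```yaml
my-app:
  image: nginx:alpine
  environment:
    - VIRTUAL_HOST=app.example.com        # nginx-proxy routes requests for this hostname here
    - LETSENCRYPT_HOST=app.example.com    # the companion requests/renews a certificate for it
    - LETSENCRYPT_EMAIL=you@example.com
```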
The Grafana dashboard (a bit slimmed down) can also be found on grafana.net: https://grafana.net/dashboards/395.
This repository comes with a storage directory for Grafana that contains the configuration for the data sources and the dashboard. This directory will be mounted into the containers as a volume. This is for your convenience and eliminates some manual setup steps.
+++
Note: With the update to ELK 6.3.0 the indices from the initial commits of the repository were causing errors. I thus removed them, so unfortunately there are no convenience dashboards for Kibana anymore.
This also means that you will have to set up the indices yourself in the beginning. But that is an easy exercise; the new Kibana does a good job of assisting with it.
Also, give the whole stack a bit of time to start up. I noticed that it takes up to 3 minutes for the first logs to arrive and for Kibana to suggest index patterns.
+++
- `git clone` this repository: `git clone https://github.com/uschtwill/docker_monitoring_logging_alerting.git`
- `cd` into the folder: `cd docker_monitoring_logging_alerting`
- Check out `install-prerequisites.sh` and make sure the prerequisites are fulfilled (or just run the script if the host is a fresh machine).
- Run `setup.sh`. To run in `secure` mode: `sh setup.sh secure YOUR_DOMAIN VERY_STRONG_PASSWORD` will set up the whole suite in secure mode.
- Add your own containers to `docker-compose.yml` and add a `container_group` label to enable monitoring, logging and alerting for them.
- Tear everything down again with `sh cleanup.sh secure`.
- To run in `unsecure` mode run `sh setup.sh unsecure`.
- Check that Elasticsearch is up: `curl localhost:9200`.
- Add your own containers to `docker-compose.yml` and add a `container_group` label to enable monitoring, logging and alerting for them.
- Tear everything down again with `sh cleanup.sh unsecure`.
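As an illustration, a hypothetical service entry in `docker-compose.yml` with the `container_group` label and the suite's gelf logging options might look like this (service name, image and group name are placeholders):

```yaml
my-app:
  image: nginx:alpine
  labels:
    container_group: applications   # groups this container in dashboards and alerts
  logging:
    driver: gelf                    # ship logs to Logstash's gelf input
    options:
      gelf-address: udp://172.16.0.38:12201
      labels: container_group       # forward the label along with each log line
```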
For debugging: In case you would like certain containers to log to `stdout`, because you're having trouble with ELK or simply because it feels more natural to you, you can simply comment out the logging options for those containers. Their logs will then go to `stdout`, while the logs of all other containers will continue to go to `logstash`.
```yaml
#    logging:
#      driver: gelf
#      options:
#        gelf-address: udp://172.16.0.38:12201
#        labels: container_group
```
This suite uses elastalert and Alertmanager for alerting. Rules for logging alerts (elastalert) go into `./elastalert/rules/` and rules for monitoring alerts (Alertmanager) go into `./prometheus/rules/`. Alertmanager only takes care of the communications part of the monitoring alerts; the rules themselves are defined in Prometheus.
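For illustration, a monitoring alert rule in Prometheus's modern YAML rule format might look like this (a hypothetical sketch; the repository's own rule files may differ, and older Prometheus versions used a plain-text rule format instead):

```yaml
groups:
- name: example
  rules:
  - alert: InstanceDown
    expr: up == 0          # the target failed its last scrape
    for: 1m                # condition must hold for a minute before firing
    labels:
      severity: warning
    annotations:
      summary: "Instance {{ $labels.instance }} is down"
```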
Both Alertmanager and elastalert can be configured to send their alerts to various outputs. In this suite, Logstash and Slack are set up. The integration with Logstash works out of the box; for Slack you will need to insert your webhook URL.
The alerts that are sent to Logstash can be checked by looking at the 'logstash-alerts' index in Kibana. Apart from functioning as a first output, sending the alerts to Elasticsearch via Logstash is also neat because it allows us to query them from Grafana and import them into its dashboards as annotations.
The monitoring alerting rules, which are stored in the Prometheus directory, contain a fake alert that should be firing from the beginning and demonstrates the concept. Find it and comment it out to have some peace. Logging alerts should also start coming in soon: this suite by itself already consists of 10 containers, and something is always complaining. Of course you can also force things by breaking stuff yourself - the `blanket_log-level_catch.yaml` rule that's already set up should catch it.
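For reference, a minimal elastalert rule in this catch-all spirit can look like the sketch below. This is an illustration of elastalert's rule format under assumed field names, not the actual contents of `blanket_log-level_catch.yaml`:

```yaml
# Hypothetical elastalert rule: fire on any document whose log_level is ERROR
name: example_log_level_catch
type: any                  # alert on every matching document
index: logstash-*          # search the Logstash indices
filter:
- query:
    query_string:
      query: "log_level: ERROR"
alert:
- slack                    # requires slack_webhook_url in the rule or global config
```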
If you're annoyed by non-events repeatedly triggering alerts, add them to `./logstash/config/31-non-events.conf` so that Logstash silences them by overwriting their `log_level` on import.
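A hypothetical filter in that file might look like the following; the match pattern and the `[message]` field name are assumptions, so adjust them to your pipeline:

```
filter {
  # Example: demote a known harmless message so it stops triggering alerts
  if [message] =~ /connection reset by peer/ {
    mutate {
      replace => { "log_level" => "INFO" }
    }
  }
}
```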
Unfortunately Grafana doesn't have a fancy query builder for Prometheus like it has for Graphite or InfluxDB; instead, one has to type out one's queries in full. So when building Grafana graphs and dashboards with Prometheus as the data source, knowing its query DSL and metric types is important. This also means that documentation about using Grafana with InfluxDB won't help you much, which further narrows down the number of available resources.
Here you can find the official Prometheus documentation on both the query DSL and the metric types:
Information on Prometheus Querying
Information on Prometheus Metric Types
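For example, a panel graphing per-container CPU usage from cAdvisor's `container_cpu_usage_seconds_total` counter could use a query along these lines (the `name` label is an assumption; check your own `/metrics` output):

```
# CPU usage per container, in percent, averaged over the last minute
sum(rate(container_cpu_usage_seconds_total[1m])) by (name) * 100
```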
Furthermore, since I couldn't find proper documentation on the metrics that cAdvisor and node_exporter expose, I decided to just take the info from the `/metrics` endpoints and bring it into a human-readable format.
Check them here. Combining the information on the exposed metrics with that on Prometheus's query DSL and metric types, you should be good to go to build some beautiful dashboards yourself.
Bad umask: If your umask is overly restrictive (i.e. not, for example, 0022), files and folders may be created with permissions that are too low. Some containers then fail to start, e.g. Kibana can't read its config. Setting the umask before cloning the git repo fixes this issue. (pointed out by @riemers)
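A minimal sketch of the fix: set the umask in your shell before cloning, so everything created afterwards gets standard permissions.

```shell
umask 0022   # new directories get mode 755, new files mode 644

# Demonstration: files and directories created now are world-readable,
# so container processes running as non-root users can read mounted configs.
mkdir -p suite_dir
touch suite_dir/config_file
stat -c '%a' suite_dir              # prints 755
stat -c '%a' suite_dir/config_file  # prints 644
```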