As a DevOps team member, I want to install the Elastic Stack (v7.9.1 by default) so that all application and system logs are collected centrally for searching, visualizing, analyzing, and reporting purposes.
The architecture used is shown in the table below:

| High level design | In scope | Not in scope |
| --- | --- | --- |
| Only Beats are used for log files and metrics. In this repo, all logs and metrics are shipped directly to Elasticsearch. 2x Elasticsearch, 1x apm-server, and 1x Kibana are used. | - | Ingest nodes are not used |
| All containerized custom applications are designed to start with the GELF log driver in order to send logs to the Elastic Stack. | - | - |
For the full list of free features that are included in the basic license, see: https://www.elastic.co/subscriptions
```bash
sudo sysctl -w vm.max_map_count=262144
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
```
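After applying the setting, it is worth confirming it actually took effect before starting Elasticsearch. A quick sanity check, assuming a Linux host, might look like:

```shell
# Read the live kernel value back from /proc (Linux only)
current=$(cat /proc/sys/vm/max_map_count)
echo "vm.max_map_count is ${current}"
```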
(The second command persists the setting across reboots.)

You will need these files to deploy Elasticsearch, Logstash, Kibana, and Beats. So, first SSH in to the master node of the Docker Swarm cluster allocated to running the Elastic Stack and clone this repo with the following commands:
```bash
# Optional: this alias is only required if git is not already installed
# on your machine; it runs git from a container instead
alias git='docker run -it --rm --name git -u $(id -u ${USER}):$(id -g ${USER}) -v $PWD:/git -w /git alpine/git'
git version
git clone https://github.com/shazChaudhry/docker-elastic.git
cd docker-elastic
```
```bash
export ELASTIC_VERSION=7.9.1
export ELASTICSEARCH_USERNAME=elastic
export ELASTICSEARCH_PASSWORD=changeme
# See "Important discovery and cluster formation settings" in the Elasticsearch docs
export INITIAL_MASTER_NODES=node1
# node1 is the default if you created VirtualBox VMs with the provided Vagrantfile;
# otherwise, change this value to one of the VMs in your swarm cluster
export ELASTICSEARCH_HOST=node1
docker network create --driver overlay --attachable elastic
docker stack deploy --compose-file docker-compose.yml elastic
docker stack services elastic
# Address any errors reported at this point
docker stack ps --no-trunc elastic
curl -XGET -u ${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD} ${ELASTICSEARCH_HOST}':9200/_cat/health?v&pretty'
```
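If you prefer a scripted check over eyeballing the health output, a rough sketch like the following could extract the `status` field. It is shown here against a canned sample response; against a live cluster you would pipe the `curl` output instead:

```shell
# Canned sample of a cluster-health style JSON response (illustrative only)
health='{"cluster_name":"elastic","status":"green","number_of_nodes":2}'
# Crude extraction of the "status" field with sed (jq would be cleaner if available)
status=$(echo "$health" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
echo "$status"
```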
Inspect the cluster health status, which should be green. It should also show 2x nodes in total, assuming you have only two VMs in the cluster.

Next, SSH in to the master node of the Docker Swarm cluster allocated to running containerized custom applications and Beats. Clone this repo and change directory as per the instructions above.
Execute the following commands to deploy filebeat and metricbeat:
```bash
export ELASTIC_VERSION=7.9.1
export ELASTICSEARCH_USERNAME=elastic
export ELASTICSEARCH_PASSWORD=changeme
# node1 is the default if you created VirtualBox VMs with the provided Vagrantfile;
# otherwise, change these values to your Elasticsearch and Kibana hosts
export ELASTICSEARCH_HOST=node1
export KIBANA_HOST=node1
docker network create --driver overlay --attachable elastic
docker stack deploy --compose-file filebeat-docker-compose.yml filebeat
```
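To confirm a global service has converged on every node, you can compare the running and desired replica counts reported by `docker stack services`. This sketch parses a canned sample line rather than querying a live swarm:

```shell
# Canned sample in the NAME MODE REPLICAS IMAGE shape printed by
# `docker stack services filebeat` (illustrative only)
line='filebeat_filebeat global 2/2 docker.elastic.co/beats/filebeat:7.9.1'
replicas=$(echo "$line" | awk '{print $3}')   # REPLICAS column, e.g. "2/2"
running=${replicas%/*}
desired=${replicas#*/}
if [ "$running" = "$desired" ]; then echo "converged"; else echo "pending"; fi
```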
Filebeat starts as a global service on all Docker Swarm nodes. It is only configured to pick up container logs (container stdout and stderr) for all services at `/var/lib/docker/containers/*/*.log` and forward them to Elasticsearch. These logs will then be available under the filebeat index in Kibana. You will need to add further configuration for other log locations. You may wish to read Docker Reference Architecture: Docker Logging Design and Best Practices.

```bash
curl -XGET -u ${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD} ${ELASTICSEARCH_HOST}':9200/_cat/indices?v&pretty'
```
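A scripted variant of that check might grep the `_cat/indices` output for the Filebeat index. A canned sample line is used below; on a live cluster you would pipe the real `curl` output instead:

```shell
# Canned one-line sample in _cat/indices format (illustrative only)
indices='green open filebeat-7.9.1-2020.09.20 Ab1 1 1 12345 0 10mb 5mb'
found=$(echo "$indices" | grep -c 'filebeat-')
echo "filebeat index lines: ${found}"
```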
```bash
docker stack deploy --compose-file metricbeat-docker-compose.yml metricbeat
```

Metricbeat starts as a global service on all Docker Swarm nodes. It sends system and Docker stats from each node to Elasticsearch. These stats will then be available under the metricbeat index in Kibana.

```bash
curl -XGET -u ${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD} ${ELASTICSEARCH_HOST}':9200/_cat/indices?v&pretty'
```
Wait until all of the stacks above are up and running, then start a Jenkins container on a node where Filebeat is running:

```bash
docker container run -d --rm --name jenkins -p 8080:8080 jenkinsci/blueocean
```
Browse to Kibana at http://[KIBANA_HOST], which should show the Management tab. Log in with username `elastic` and password `changeme` (or the credentials you exported above), then create an index pattern with:

- Index pattern: `filebeat-*`
- Time filter field: `@timestamp`
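The UI steps above can also be scripted: Kibana 7.x exposes a saved objects API that accepts an index-pattern object. A sketch along these lines could create the pattern (the live call is left commented out since it needs a reachable Kibana; the object id `filebeat-pattern` is an arbitrary choice):

```shell
# Request body for an index-pattern saved object
body='{"attributes":{"title":"filebeat-*","timeFieldName":"@timestamp"}}'
# Live call (the kbn-xsrf header is mandatory for Kibana API writes):
# curl -X POST -u "${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD}" \
#   "http://${KIBANA_HOST}:5601/api/saved_objects/index-pattern/filebeat-pattern" \
#   -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$body"
echo "$body"
```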
The Logstash pipeline is configured to accept messages sent with the GELF log driver. GELF is one of the plugins mentioned in Docker Reference Architecture: Docker Logging Design and Best Practices. Start an application which sends messages with GELF. An example could be as follows:
```bash
docker container stop jenkins
export LOGSTASH_HOST=node1
docker container run -d --rm --name jenkins -p 8080:8080 --log-driver=gelf --log-opt gelf-address=udp://${LOGSTASH_HOST}:12201 jenkinsci/blueocean
```

The `--log-driver=gelf --log-opt gelf-address=udp://${LOGSTASH_HOST}:12201` options send the container's console logs to the Elastic Stack. In Kibana, create an index pattern with:

- Index pattern: `logstash-*`
- Time filter field: `@timestamp`
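For reference, the GELF driver is just emitting UDP datagrams of JSON. A hand-rolled message in the same spirit would look like the sketch below (`version`, `host`, and `short_message` are required fields per the GELF 1.1 spec; the live send is commented out since it needs a reachable Logstash):

```shell
# Build a minimal GELF 1.1 payload; level 6 is "informational"
payload=$(printf '{"version":"1.1","host":"%s","short_message":"hello from shell","level":6}' \
  "$(hostname 2>/dev/null || echo localhost)")
# bash can write UDP datagrams via /dev/udp; uncomment against a live Logstash:
# echo -n "$payload" > "/dev/udp/${LOGSTASH_HOST}/12201"
echo "$payload"
```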
Here is another example:

```bash
docker container run --rm -it --log-driver=gelf --log-opt gelf-address=udp://${LOGSTASH_HOST}:12201 alpine ping 8.8.8.8
```

Its output will also appear under the `logstash-*`
index.

Follow these instructions to build a Java app that we will use for APM:
WIP