Asynchronous non-blocking logs handler using Elasticsearch for short-term storage and Amazon S3 for long-term storage.
This project shows a simple way to insert data into Elasticsearch through an aiohttp API. The logs (data) are inserted into Elasticsearch and can be uploaded to an S3 bucket.
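As a rough sketch of the flow (not the exact code of this project; the index naming scheme, Elasticsearch address, and route below are assumptions for illustration), an aiohttp handler receives a JSON payload of logs and forwards each entry to Elasticsearch over its HTTP API:

# sketch_ingest.py -- illustrative only, see the hedges above.
import aiohttp
from aiohttp import web

ELASTICSEARCH_URL = "http://localhost:9200"  # assumed host and port

async def post_logs(request):
    service_id = request.match_info["service_id"]
    payload = await request.json()

    async with aiohttp.ClientSession() as session:
        for log in payload.get("logs", []):
            # One Elasticsearch document per log entry; "_doc" is the
            # generic document endpoint on recent Elasticsearch versions.
            await session.post(
                f"{ELASTICSEARCH_URL}/data-{service_id}/_doc",
                json=log,
            )

    return web.json_response({"inserted": len(payload.get("logs", []))})

app = web.Application()
app.router.add_post("/api/1/service/{service_id}/logs", post_logs)

if __name__ == "__main__":
    web.run_app(app, port=8000)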
Four containers are included in the project:
vagrant up
vagrant ssh
python -m "logs"
py.test
locust --host=http://localhost:8000
Connect to the web interface on port 8089.
It is better to run the Locust client on a separate machine.
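Locust reads a locustfile.py from the current directory; a minimal sketch targeting the POST endpoint could look like this (Locust 1.x+ API; the payload mirrors the curl example below, the timings are assumptions, not part of this repository):

# locustfile.py -- minimal load-test sketch.
import time
from locust import HttpUser, task, between

class LogsUser(HttpUser):
    wait_time = between(0.1, 0.5)  # assumed pacing between requests

    @task
    def post_log(self):
        self.client.post(
            "/api/1/service/1/logs",
            json={
                "logs": [
                    {
                        "message": "log message",
                        "level": "low",
                        "category": "my category",
                        "date": str(int(time.time())),
                    }
                ]
            },
        )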
pylint logs/
curl http://localhost:8000/api/1/service/1/logs \
-X POST \
-d '{"logs": [{"message": "log message", "level": "low", "category": "my category", "date": "1502304972"}]}' \
-H 'Content-Type: application/json'
curl http://localhost:8000/api/1/service/1/logs/2017-10-15-20-00-00/2017-10-16-15-00-00
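The two path segments are the start and end of the requested time range, formatted as %Y-%m-%d-%H-%M-%S; a quick sketch for building such a URL:

# Sketch: build the GET URL for a time range matching the example above.
from datetime import datetime

FMT = "%Y-%m-%d-%H-%M-%S"
start = datetime(2017, 10, 15, 20, 0, 0).strftime(FMT)
end = datetime(2017, 10, 16, 15, 0, 0).strftime(FMT)

url = f"http://localhost:8000/api/1/service/1/logs/{start}/{end}"
print(url)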
In your browser:
http://kibana-container-ip-address:5601
This IP address can be found using docker inspect aiohttp-elasticsearch-s3-logs-handler_kibana.
The index pattern is data-*.
This test performs a large number of POST requests, sending many logs read from TSV files.
python tests/performance/performance_test.py
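The idea is roughly the following sketch (file locations, column order and batch size are assumptions; the actual script in tests/performance/ may differ):

# Sketch: read log entries from TSV files and POST them in batches.
import csv
import glob
import requests

ENDPOINT = "http://localhost:8000/api/1/service/1/logs"
BATCH_SIZE = 100  # assumed batch size

for path in glob.glob("tests/performance/*.tsv"):  # assumed TSV location
    with open(path, newline="") as handle:
        reader = csv.reader(handle, delimiter="\t")
        batch = []
        for message, level, category, date in reader:  # assumed column order
            batch.append({"message": message, "level": level,
                          "category": category, "date": date})
            if len(batch) == BATCH_SIZE:
                requests.post(ENDPOINT, json={"logs": batch})
                batch = []
        if batch:
            requests.post(ENDPOINT, json={"logs": batch})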
WARNING: The AWS configuration launches instances that are not part of the AWS Free Tier.
You must have an IAM user with the following permissions:
AmazonEC2FullAccess
AmazonS3FullAccess
Furthermore, you have to create a key pair file and use its name as key_name below.
Packer must be installed on your machine (https://www.packer.io/downloads.html).
The following commands have to be executed in the build_scripts/ folder.
They build the following AMIs:
packer build \
-var 'access_key=ACCESS_KEY' \
-var 'secret_key=SECRET_KEY' \
-var 'region=REGION' \
packer_backend.json
packer build \
-var 'access_key=ACCESS_KEY' \
-var 'secret_key=SECRET_KEY' \
-var 'region=REGION' \
packer_es.json
packer build \
-var 'access_key=ACCESS_KEY' \
-var 'secret_key=SECRET_KEY' \
-var 'region=REGION' \
packer_kibana.json
packer build \
-var 'access_key=ACCESS_KEY' \
-var 'secret_key=SECRET_KEY' \
-var 'region=REGION' \
packer_worker.json
terraform init
terraform plan \
-var 'access_key=ACCESS_KEY' \
-var 'secret_key=SECRET_KEY' \
-var 'region=REGION' \
-var 'backend_ami_id=SERVICE_AMI_ID' \
-var 'es_ami_id=ES_AMI_ID' \
-var 'kibana_ami_id=KIBANA_AMI_ID' \
-var 'worker_ami_id=WORKER_AMI_ID' \
-var 'key_name=SSH_KEY_NAME'
terraform apply \
-var 'access_key=ACCESS_KEY' \
-var 'secret_key=SECRET_KEY' \
-var 'region=REGION' \
-var 'backend_ami_id=SERVICE_AMI_ID' \
-var 'es_ami_id=ES_AMI_ID' \
-var 'kibana_ami_id=KIBANA_AMI_ID' \
-var 'worker_ami_id=WORKER_AMI_ID' \
-var 'key_name=SSH_KEY_NAME'
Connect using SSH to the worker machine:
ssh admin@worker-elastic-ip -i key.pem
Edit the file /etc/cron.d/snapshot and set the AWS credentials.
Cron does not need to be restarted.
The schema in the README file is distributed under the Creative Commons license (Attribution-NonCommercial-ShareAlike 3.0) because it uses SoftIcons (https://www.iconfinder.com/iconsets/softicons) by KyoTux. The icons have been integrated into the schema and resized.