CinePik / common

Common files and configs for all CinePik microservices.
MIT License

Centralized logs #6

Open lzukanovic opened 10 months ago

lzukanovic commented 10 months ago
  • Store the log files of your microservices in a centralized logging system.
    • You can add a logging library to your microservices that ships logs to the centralized logging system.
    • Alternatively, you can use tools that read pod output on Kubernetes and forward it to the log collection system.
  • You may use a trial account on logit.io.
    • For use with Log4j2, use an appender of type UDP.
  • The service should attach context data to every log entry (service name, version, environment, ...). In Log4j2 you can implement an interceptor for this purpose. Also add a unique request identifier that unambiguously marks a single request, which may execute across multiple microservices.
  • Your microservices should log all entries into and exits from the methods of the individual REST endpoints.
  • In the log-viewing tool, prepare at least three interesting log queries (e.g. the logs of a specific microservice, all entries into a specific method, ...)
    • Example: `marker.name: ENTRY || marker.name: EXIT`
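For the Log4j2/UDP part, a minimal configuration sketch could look like the following. The host, port, service name, and environment values are placeholders (your logit.io trial endpoint would go in `host`/`port`), and the `requestId` field assumes the unique request identifier is placed in the Log4j2 `ThreadContext` map by a request filter/interceptor:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Appenders>
    <!-- UDP appender shipping GELF messages to the centralized log collector -->
    <!-- host/port are placeholders for the logit.io (or logstash) endpoint -->
    <Socket name="Gelf" host="your-stack.logit.io" port="12201" protocol="UDP">
      <GelfLayout compressionType="OFF">
        <!-- static context fields attached to every log record -->
        <KeyValuePair key="service" value="cinepik-catalog"/>
        <KeyValuePair key="environment" value="dev"/>
        <!-- deferred lookup: resolved per-event from the ThreadContext map -->
        <KeyValuePair key="requestId" value="$${ctx:requestId}"/>
      </GelfLayout>
    </Socket>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Gelf"/>
    </Root>
  </Loggers>
</Configuration>
```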
lzukanovic commented 10 months ago

This is where most of the work for logging will be done. Some changes can be observed in the feature/logging branch. Currently the whole logging stack (elasticsearch, kibana, metricbeat - optional since we have Prometheus, filebeat and logstash) has been implemented to work in a local Docker environment using docker-compose.logging.yml.

For this assignment we are mainly focused on elasticsearch, kibana and logstash, the latter being the service that ingests the stdout log output of the other containers (and can additionally read log files placed in a specific directory). The other two (metricbeat and filebeat) are probably not necessary at this point.

The issue that arises is in the config (currently just docker-compose.yml) of each individual application. For each container (e.g. app and db) we need to define the logging configuration, which specifies what the container does with its logs: what driver/protocol to use, where to send the logs, and what tag to use for metadata.

Here is an example:

```yaml
services:
  app:
    container_name: cinepik-catalog
    # ...
    networks:
      - cinepik-network
    logging:
      driver: gelf
      options:
        gelf-address: udp://host.docker.internal:12201
        tag: cinepik-catalog-app
```
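On the receiving side, logstash needs a GELF input listening on the same UDP port. A minimal pipeline sketch (the port matches the gelf-address above; the elasticsearch hostname and index name are assumptions, not taken from the repo):

```
input {
  # GELF input matching the gelf-address udp://...:12201 in the compose file
  gelf {
    port    => 12201
    use_udp => true
  }
}

output {
  # assumes the elasticsearch service is reachable under this name on the network
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "cinepik-logs-%{+YYYY.MM.dd}"
  }
}
```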

The issue I encountered when setting everything up for Docker is that this container, even though it is on the same network as the required logstash container (and all the other logging containers), was unable to reach it using hostnames such as localhost or the container name. The only hostname that worked was host.docker.internal. The reason is that the gelf logging driver runs inside the Docker daemon on the host, not inside the container, so the gelf-address is resolved from the host's perspective and container-network hostnames don't apply. Hopefully this is something that will be resolved when deployed to Kubernetes.

To summarise: the logging part pertaining to the common repo is working; it just needs all of the k8s resource files. What still needs to be fixed is the log-shipping functionality in each of the app repos.

lzukanovic commented 10 months ago

This is the tutorial I followed to set up logging locally: Getting started with the Elastic Stack and Docker Compose: Part 1

Here is a tutorial I found on how to deploy the Elastic Stack to Kubernetes (using Helm charts): Deploy the Elastic Stack on Kubernetes

lzukanovic commented 10 months ago

Also, a helpful tool to automatically generate k8s resource files is kompose. Here is an example command to convert the docker-compose.logging.yml file:

```shell
kompose convert -f docker-compose.logging.yml -n logging -o k8s/
```