log-platform

Platform for observability.

On Tracing: using OpenTracing with Jaeger (see the Tracer sections below).

On Metrics: using Prometheus via Spring Boot Actuator (see the Metrics section below).

On Logs: using the EFK platform. EFK stands for Elasticsearch, Fluentd & Kibana.

More on logs unification

Guidelines

Structured logs

Logs used to be a long chain of words and events that required a human to read and interpret.

With a growing volume of logs, we need to give logs a structure that lets machines crunch and organize the data, making events easy to identify & aggregate.

By definition, each event data model depends on your business (what you are trying to achieve), but below is a set of technical fields that every log should carry to provide context & foster deeper analysis.

Execution context location

Allows tagging every log sent to EFK with the following information :

| Field name | Definition | Example | Default value |
|------------|------------|---------|---------------|
| REGION | Physical location | US_East_A, CN_SHA, .. | SINGLE |
| ZONE | Logical location | ZONE_A, ZONE_25, .. | SINGLE |
| MACHINE_ID | Specific virtualized ID (like Docker ID) | 239411039fee, 8b9a6863c720, .. | UNKNOWN |
| SERVICE_NAME | Business component name | UserGatewayService, .. | UNKNOWN |
| VERSION_TAG | Specific version or tag | service-a:0.0.1-SNAPSHOT | UNKNOWN |
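
As an illustration, a log event enriched with these fields might look like the following (field values are hypothetical, reusing the examples above) :

{"REGION":"US_East_A","ZONE":"ZONE_A","MACHINE_ID":"239411039fee","SERVICE_NAME":"UserGatewayService","VERSION_TAG":"service-a:0.0.1-SNAPSHOT","message":"..."}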

Distributed tracing

| Field name | Definition | Example | Default value |
|------------|------------|---------|---------------|
| TRACE_ID | Unique ID per request | 558907019132e7f8, .. | [NULL] |

StructuredLogger

Logs

Allows creating new dimensions in ElasticSearch. Initialize the logger the same way as an Slf4j LOGGER :

In Kotlin :

companion object {
  private val structuredLogger = StructuredLogger.create(this::class.java)
}

In Java :

final static StructuredLogger structuredLogger = StructuredLogger.create("usage");

Then use it to log String or Integer values (entry is statically imported from com.github.frtu.logs.core.StructuredLogger) :

structuredLogger.info(entry("key1", "value1"), entry("key2", "value2"));
structuredLogger.info(entry("key1", 123), entry("key2", 456));

Gives a JSON log :

{"key1":"value1","key2":"value2"}
{"key1":123,"key2":456}

Stack logs

import static com.github.frtu.logs.core.StructuredLogger.entries;

...

// Create logs entries
val entries = entries(entry("key1", "value1"), entry("key2", "value2"));
...
// Log later
structuredLogger.info(entries);

RpcLogger

An implementation to log RPC calls in a generic way :

GraphQL sample

Logging API errors :

rpcLogger.warn(client(),
    method("query"),
    uri("/HeroNameAndFriends"),
    statusCode("123"),
    errorMessage("The invitation has expired, please request a new one")
);

Gives a log :

{
  "kind": "client",
  "method": "Query",
  "uri": "/HeroNameAndFriends",
  "response_code": "123",
  "error_message": "The invitation has expired, please request a new one"
}

Debugging sample

You can also log every step of the same request :

// Debugging steps
rpcLogger.debug(client(),
        method(query),
        uri(uri),
        requestId(requestId),
        phase("INIT")
);

// Debugging steps
rpcLogger.debug(client(),
        requestId(requestId),
        phase("SENDING")
);

// Log the result: Info on success, or Warn / Error on failure
rpcLogger.warn(client(),
        requestId(requestId),
        phase("SENT"),
        statusCode("123"),
        errorMessage("The invitation has expired, please request a new one")
);

Full guide for Structured Logging

Adoption

Import using :


<dependency>
    <groupId>com.github.frtu.logs</groupId>
    <artifactId>logger-core</artifactId>
    <version>${frtu-logger.version}</version>
</dependency>
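
If you build with Gradle instead of Maven, the equivalent coordinate would be (Kotlin DSL; the frtuLoggerVersion variable is assumed to be defined in your build) :

implementation("com.github.frtu.logs:logger-core:$frtuLoggerVersion")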

Check the latest version :

Configure logback

fluentd in logback-spring.xml

When using logback-spring.xml, you can override any logback variable with a Spring property using :

<springProperty scope="context" name="SERVICE_NAME" source="application.name"/>

In your property file, configure fluentd :

fluentd.tag=tag
fluentd.label=label
logging.region=localhost
logging.zone=zone
logging.path=target/

file appender in logback-spring.xml

For Production, to avoid message loss, it is recommended to use a log file + fluentd tail (instead of streaming logs) to allow local buffering.

Define the log file location in your application properties or yaml :

logging.path=target/

Also configure RollingFileAppender using :

<property name="LOG_FILE_MAX_SIZE" value="${LOG_FILE_MAX_SIZE:-5MB}"/>
<property name="LOG_FILE_MAX_HISTORY" value="${LOG_FILE_MAX_HISTORY:-15}"/>
<property name="LOG_FILE_MAX_TOTAL_SIZE" value="${LOG_FILE_MAX_SIZE:-100MB}"/>

Internal running mode for ObjectMapper

By default, a single ObjectMapper instance is shared by all loggers. To run in a different mode, use the System property 'OBJECTMAPPER_LIFECYCLE_STRATEGY' :

Ex : run your JVM with system property -DOBJECTMAPPER_LIFECYCLE_STRATEGY=THREAD_LOCAL

Log forwarder

Enablement

Import logback configuration from templates folder for :

For troubleshooting, add the component scan below to dump the fluentd config into the log :

@ComponentScan(basePackages = {"com.github.frtu.logs.infra.fluentd", "..."})

Usage

Just log with logback, and activate the FLUENT appender in Staging or Production.

a) Core Tracer API

Enablement

If you only need the Jaeger io.opentracing.Tracer, just add :

@ComponentScan(basePackages = {"com.github.frtu.logs.tracing.core", "..."})

Usage

You can create a single Span structure :

Span span = tracer.buildSpan("say-hello1").start();
LOGGER.info("hello1");
span.finish();

OR a node from a graph using Scope :

try (Scope scope = tracer.buildSpan("say-hello2").startActive(true)) {
    LOGGER.info("hello2");
}
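
Spans started while a Scope is active are automatically linked as children of the active span, which is how the call graph is built. A minimal sketch using the same OpenTracing API as above :

try (Scope parent = tracer.buildSpan("say-hello-parent").startActive(true)) {
    // started inside the parent Scope, so it becomes a child span
    try (Scope child = tracer.buildSpan("say-hello-child").startActive(true)) {
        LOGGER.info("hello from the child span");
    }
}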

b) @ExecutionSpan AOP

Enablement

If you want to use @ExecutionSpan to mark a method that should create a Span, add :

@ComponentScan(basePackages = {"com.github.frtu.logs.tracing", "..."})

And add Spring AOP :

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-aop</artifactId>
</dependency>
<dependency>
    <groupId>org.aspectj</groupId>
    <artifactId>aspectjweaver</artifactId>
</dependency>

OR spring-boot AOP :

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-aop</artifactId>
</dependency>

Basic usage

Just annotate all the methods you need with @ExecutionSpan to create a DAG :

@ExecutionSpan
public String method(){}

You can optionally add a Spring property to get a full classname trace :

trace.full.classname=true

See sample-microservices/service-b or ChangeList

Tag & Log enrichment

To add a Tag, use :

@ExecutionSpan({
        @Tag(tagName = "key1", tagValue = "value1"),
        @Tag(tagName = "key2", tagValue = "value2")
})
public void method(){}

To add a Log, use :

@ExecutionSpan
public String method(@ToLog("paramName") String param){}

Manually add Span.log

Use Spring @Autowired to get an instance of com.github.frtu.logs.tracing.core.TraceHelper :

@Autowired
private TraceHelper traceHelper;

void method() {
    traceHelper.addLog("log1", "value1");
}

Context passing

Dev local

When starting a standalone Main class, also add the following VM options :

-DREGION=FR -DZONE=A -DSERVICE_NAME=service-a -DMACHINE_ID=982d2ff1686a -DVERSION_TAG=service-a:0.0.1-SNAPSHOT

Also add Jaeger Configuration for :

Inside container & docker-compose

Go to the folder /sample-microservices/ and run docker-compose up

Metrics

Adoption

Import the JAR

<dependency>
    <groupId>com.github.frtu.logs</groupId>
    <artifactId>logger-metrics</artifactId>
    <version>${frtu-logger.version}</version>
</dependency>

Check the latest version :

Spring Annotation

Import Spring Configuration :

@Import({MetricsConfig.class,...})
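
For instance, on a Spring Boot application class (a minimal sketch; the application class name is illustrative) :

@SpringBootApplication
@Import({MetricsConfig.class})
public class SampleApplication {
    public static void main(String[] args) {
        SpringApplication.run(SampleApplication.class, args);
    }
}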

Spring Properties

# =================================
# Metrics related configurations
# =================================
# https://www.callicoder.com/spring-boot-actuator/
management.endpoints.web.exposure.include=*

management.endpoint.health.show-details=always

management.endpoint.metrics.enabled=true
management.endpoint.prometheus.enabled=true

management.metrics.export.prometheus.enabled=true
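
With these properties, metrics are exposed on the standard Spring Boot actuator endpoint (host & port depend on your setup) :

curl http://localhost:8080/actuator/prometheus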

Custom measurement

This library provides a class to abstract away direct use of Counter & Timer :

// Open a measurement (Counter & Timer) for this operation
MeasurementHandle handle = measurementRepository.getMeasurementHandle(operationName, operationDescription, tags);
try {
    // here inside an AOP aspect: execute the intercepted method
    return joinPoint.proceed();
} catch (Throwable ex) {
    // mark the measurement as failed before rethrowing
    throw handle.flagError(ex);
} finally {
    // stop the timer & publish the measurement
    handle.close();
}
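
The snippet above runs inside an AOP aspect (hence joinPoint). The handle factory is assumed to be injectable as a Spring bean, e.g. :

@Autowired
private MeasurementRepository measurementRepository; // type name inferred from the snippet above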

Infrastructure

More details at GuidelineMonitoring.md

Tools & Tips

Changing the log level at runtime

Dynamically change a spring-boot application's log LEVEL by enabling the actuator loggers endpoint :

management.endpoints.web.exposure.include=loggers
management.endpoint.loggers.enabled=true
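
For example, to switch a logger to DEBUG at runtime through the standard actuator loggers endpoint (host, port & logger name are illustrative) :

curl -X POST http://localhost:8080/actuator/loggers/com.github.frtu \
    -H 'Content-Type: application/json' \
    -d '{"configuredLevel": "DEBUG"}'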

You can also use the shell scripts at bash-fwk/lib-dev-spring.

Operation tools

Check Tools module.

Infrastructure

Details for development & production env

With Docker Compose (dev local)

URLs

sample-microservices

Monitoring

Distributed Tracing :

Logging :

Tools :

With K8S (production)

Using EFK

Log sources

Simple HTTP source (test)

From Docker instances

Log access from Httpd or Apache

Java log library

fluentd provides a dedicated Java logger, but for better integration through SLF4J it is recommended to use a logback adapter :

See also