Platform for observability.
On tracing, using Jaeger.
On metrics, using Prometheus.
On logs, using the EFK platform (Elasticsearch, Fluentd, Kibana).
More on logs unification
Logs used to be a long chain of words and events, requiring a human to read and interpret them.
With the growing volume of logs, we need to give logs a structure that lets machines crunch and organize the data, making it easy to identify & aggregate.
By definition, each event data model depends on your business (what you are trying to achieve), but here is a set of technical fields that every log should carry for context & deeper analysis.
Tag every log sent to EFK with the following information :
Field name | Definition | Example | Default value |
---|---|---|---|
REGION | Physical location | US_East_A, CN_SHA, .. | SINGLE |
ZONE | Logical location | ZONE_A, ZONE_25, .. | SINGLE |
MACHINE_ID | Specific virtualized ID (like Docker ID) | 239411039fee, 8b9a6863c720, .. | UNKNOWN |
SERVICE_NAME | Business component name | UserGatewayService, .. | UNKNOWN |
VERSION_TAG | Specific version or tag | service-a:0.0.1-SNAPSHOT | UNKNOWN |
Field name | Definition | Example | Default value |
---|---|---|---|
TRACE_ID | Unique ID per Request | 558907019132e7f8, .. | [NULL] |
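Put together, a log event enriched with the fields above could be indexed in Elasticsearch as something like the following (illustrative values only) :

```json
{
  "REGION": "US_East_A",
  "ZONE": "ZONE_A",
  "MACHINE_ID": "239411039fee",
  "SERVICE_NAME": "UserGatewayService",
  "VERSION_TAG": "service-a:0.0.1-SNAPSHOT",
  "TRACE_ID": "558907019132e7f8",
  "message": "..."
}
```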
These fields become new dimensions in Elasticsearch. Initialize the logger similarly to an Slf4j LOGGER :
Kotlin :
companion object {
    private val structuredLogger = StructuredLogger.create(this::class.java)
}
Java :
final static StructuredLogger structuredLogger = StructuredLogger.create("usage");
Then use it to log String or Integer values :
structuredLogger.info(entry("key1", "value1"), entry("key2", "value2"));
structuredLogger.info(entry("key1", 123), entry("key2", 456));
Which gives JSON logs :
{"key1":"value1","key2":"value2"}
{"key1":123,"key2":456}
Stack logs
import static com.github.frtu.logs.core.StructuredLogger.entries;
...
// Create logs entries
val entries = entries(entry("key1", "value1"), entry("key2", "value2"));
...
// Log later
structuredLogger.info(entries);
An implementation to log RPC calls in a generic way :
Logging API errors :
rpcLogger.warn(client(),
method("query"),
uri("/HeroNameAndFriends"),
statusCode("123"),
errorMessage("The invitation has expired, please request a new one")
);
Gives a log :
{
"kind": "client",
"method": "Query",
"uri": "/HeroNameAndFriends",
"response_code": "123",
"error_message": "The invitation has expired, please request a new one"
}
You can also log every step of the same request :
// Debugging steps
rpcLogger.debug(client(),
method(query),
uri(uri),
requestId(requestId),
phase("INIT")
);
// Debugging steps
rpcLogger.debug(client(),
requestId(requestId),
phase("SENDING")
);
// Info success or Warn or Error result
rpcLogger.warn(client(),
requestId(requestId),
phase("SENT"),
statusCode("123"),
errorMessage("The invitation has expired, please request a new one")
);
Full guide for Structured Logging
Import using :
<dependency>
<groupId>com.github.frtu.logs</groupId>
<artifactId>logger-core</artifactId>
<version>${frtu-logger.version}</version>
</dependency>
Check the latest version (clickable) :
Copy the logback configuration templates from the logger-core/src/main/resources/templates/ folder into your src/main/resources/ folder.
When using logback-spring.xml, you can override any logback ENV with Spring properties using :
<springProperty scope="context" name="SERVICE_NAME" source="application.name"/>
In your property file, just configure fluentd :
fluentd.tag=tag
fluentd.label=label
logging.region=localhost
logging.zone=zone
logging.path=target/
For production, to avoid message loss, it is recommended to use a log file + fluentd tail (instead of streaming logs directly) to allow local buffering.
Define the log file location with the system env LOG_FILE_LOCATION :
$LOG_PATH/$SERVICE_NAME.log
In your application properties or yaml :
logging.path=target/
Also configure RollingFileAppender using :
<property name="LOG_FILE_MAX_SIZE" value="${LOG_FILE_MAX_SIZE:-5MB}"/>
<property name="LOG_FILE_MAX_HISTORY" value="${LOG_FILE_MAX_HISTORY:-15}"/>
<property name="LOG_FILE_MAX_TOTAL_SIZE" value="${LOG_FILE_MAX_TOTAL_SIZE:-100MB}"/>
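On the fluentd side, the log file + tail recommendation above could be wired with a source like this sketch (the path, pos_file, and tag values are assumptions to adapt to your deployment) :

```
<source>
  @type tail
  path /var/log/service-a/service-a.log
  pos_file /var/log/fluentd/service-a.log.pos
  tag service-a
  <parse>
    @type json
  </parse>
</source>
```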
By default, a single ObjectMapper instance is shared across all loggers. To run in a different mode, use the system property 'OBJECTMAPPER_LIFECYCLE_STRATEGY' :
Ex : run your JVM with the system property -DOBJECTMAPPER_LIFECYCLE_STRATEGY=THREAD_LOCAL
Import logback configuration from templates folder for :
For troubleshooting, add the import to flush fluentd config into log :
@ComponentScan(basePackages = {"com.github.frtu.logs.infra.fluentd", "..."})
Just log with logback, and activate the FLUENT appender on staging or production.
If you only need Jaeger io.opentracing.Tracer, just add :
@ComponentScan(basePackages = {"com.github.frtu.logs.tracing.core", "..."})
You can create a single Span structure :
Span span = tracer.buildSpan("say-hello1").start();
LOGGER.info("hello1");
span.finish();
OR a node from a graph using Scope :
try (Scope scope = tracer.buildSpan("say-hello2").startActive(true)) {
LOGGER.info("hello2");
}
If you want to use @ExecutionSpan to mark a method to create Span, add :
@ComponentScan(basePackages = {"com.github.frtu.logs.tracing", "..."})
And add Spring AOP :
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-aop</artifactId>
</dependency>
<dependency>
<groupId>org.aspectj</groupId>
<artifactId>aspectjweaver</artifactId>
</dependency>
OR spring-boot AOP :
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-aop</artifactId>
</dependency>
Just annotate with @ExecutionSpan all the methods you need to create a DAG :
@ExecutionSpan
public String method(){}
You can optionally add a Spring property to get a full classname trace :
trace.full.classname=true
See sample-microservices/service-b or ChangeList
To add Tag use :
@ExecutionSpan({
@Tag(tagName = "key1", tagValue = "value1"),
@Tag(tagName = "key2", tagValue = "value2")
})
public void method(){}
To add Log use :
@ExecutionSpan
public String method(@ToLog("paramName") String param){}
Use Spring @Autowired to get an instance of com.github.frtu.logs.tracing.core.TraceHelper :
@Autowired
private TraceHelper traceHelper;
void method() {
traceHelper.addLog("log1", "value1");
}
When starting a standalone Main class, also add the following VM options :
-DREGION=FR -DZONE=A -DSERVICE_NAME=service-a -DMACHINE_ID=982d2ff1686a -DVERSION_TAG=service-a:0.0.1-SNAPSHOT
Also add the Jaeger configuration for the collector endpoint or the agent :
-DJAEGER_ENDPOINT=http://localhost:16686/api/traces
-DJAEGER_AGENT_HOST=localhost -DJAEGER_AGENT_PORT=6831
Go to the folder /sample-microservices/ and run docker-compose up
<dependency>
<groupId>com.github.frtu.logs</groupId>
<artifactId>logger-metrics</artifactId>
<version>${frtu-logger.version}</version>
</dependency>
Check the latest version (clickable) :
Import Spring Configuration :
@Import({MetricsConfig.class,...})
# =================================
# Metrics related configurations
# =================================
# https://www.callicoder.com/spring-boot-actuator/
management.endpoints.web.exposure.include=*
management.endpoint.health.show-details=always
management.endpoint.metrics.enabled=true
management.endpoint.prometheus.enabled=true
management.metrics.export.prometheus.enabled=true
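Once the application is up, you can check that the Prometheus scrape endpoint is exposed (assuming the default management port 8080) :

```
curl -s http://localhost:8080/actuator/prometheus | head
```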
This library provides a class to abstract away direct use of Counter & Timer :
MeasurementHandle handle = measurementRepository.getMeasurementHandle(operationName, operationDescription, tags);
try {
return joinPoint.proceed();
} catch (Throwable ex) {
throw handle.flagError(ex);
} finally {
handle.close();
}
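The handle pattern above can be illustrated with a minimal, self-contained stand-in (a hypothetical `MeasurementHandle`, not the library's actual class), showing how the try/catch/finally shape guarantees the timer is always closed and errors are flagged before rethrowing :

```java
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in illustrating the handle pattern from the snippet above.
class MeasurementHandle implements AutoCloseable {
    private final String operationName;
    private final long startNanos = System.nanoTime();
    private boolean error = false;

    MeasurementHandle(String operationName) {
        this.operationName = operationName;
    }

    // Record the failure, then return the exception so the caller can rethrow it unchanged.
    Throwable flagError(Throwable ex) {
        error = true;
        return ex;
    }

    @Override
    public void close() {
        long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNanos);
        System.out.println(operationName + " took " + elapsedMs + "ms, error=" + error);
    }
}

public class MeasurementDemo {
    static String doWork() {
        return "ok";
    }

    public static void main(String[] args) throws Throwable {
        MeasurementHandle handle = new MeasurementHandle("operation");
        try {
            System.out.println(doWork());
        } catch (Throwable ex) {
            throw handle.flagError(ex);
        } finally {
            handle.close(); // always records the measurement, success or failure
        }
    }
}
```

The real class additionally registers the measurement with the metrics backend; the control-flow shape is the part worth copying.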
More details at GuidelineMonitoring.md
Dynamically changing a spring-boot application's log LEVEL
management.endpoints.web.exposure.include=loggers
management.endpoint.loggers.enabled=true
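As an illustration (assuming the app runs on localhost:8080 with the default actuator base path), the level of any logger can then be changed at runtime through the standard Spring Boot loggers endpoint :

```
# Switch com.github.frtu loggers to DEBUG at runtime
curl -i -X POST http://localhost:8080/actuator/loggers/com.github.frtu \
  -H 'Content-Type: application/json' \
  -d '{"configuredLevel": "DEBUG"}'
```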
Can also use the shell scripts at bash-fwk/lib-dev-spring
Check Tools module.
Details for development & production env
sample-microservices
Monitoring
Distributed Tracing :
Logging :
Tools :
fluentd provides a dedicated Java logger, but for better integration through SLF4J it is recommended to use an adapter to logback :