Logbook noun, /lɑɡ bʊk/: A book in which measurements from the ship's log are recorded, along with other salient details of the voyage.
Logbook is an extensible Java library to enable complete request and response logging for different client- and server-side technologies. It satisfies a special need by a) allowing web application developers to log any HTTP traffic that an application receives or sends, and b) doing so in a way that makes it easy to persist and analyze it later. This can be useful for traditional log analysis, meeting audit requirements, or investigating individual historic traffic issues.
Logbook is ready to use out of the box for most common setups. Even for uncommon applications and technologies, it should be simple to implement the necessary interfaces to connect a library/framework/etc. to it.
Add the following dependency to your project:
<dependency>
<groupId>org.zalando</groupId>
<artifactId>logbook-core</artifactId>
<version>${logbook.version}</version>
</dependency>
For Spring 5 / Spring Boot 2 backwards compatibility, please add the following dependency:
<dependency>
<groupId>org.zalando</groupId>
<artifactId>logbook-servlet</artifactId>
<version>${logbook.version}</version>
<classifier>javax</classifier>
</dependency>
Additional modules/artifacts of Logbook always share the same version number.
Alternatively, you can import our bill of materials...
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.zalando</groupId>
<artifactId>logbook-bom</artifactId>
<version>${logbook.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
The Logbook logger must be configured to TRACE level in order to log the requests and responses. With Spring Boot 2 (using Logback) this can be accomplished by adding the following line to your application.properties:
logging.level.org.zalando.logbook: TRACE
All integrations require an instance of Logbook which holds all configuration and wires all necessary parts together.
You can either create one using all the defaults:
Logbook logbook = Logbook.create();
or create a customized version using the LogbookBuilder:
Logbook logbook = Logbook.builder()
.condition(new CustomCondition())
.queryFilter(new CustomQueryFilter())
.pathFilter(new CustomPathFilter())
.headerFilter(new CustomHeaderFilter())
.bodyFilter(new CustomBodyFilter())
.requestFilter(new CustomRequestFilter())
.responseFilter(new CustomResponseFilter())
.sink(new DefaultSink(
new CustomHttpLogFormatter(),
new CustomHttpLogWriter()
))
.build();
Logbook used to have a very rigid strategy for how to do request/response logging. Some of those restrictions could be mitigated with custom HttpLogWriter implementations, but they were never ideal.
Starting with version 2.0, Logbook comes with a Strategy pattern at its core. Make sure you read the documentation of the Strategy interface to understand the implications.
Logbook also comes with some built-in strategies, for example for logging bodies only when the response status reaches a threshold or for omitting bodies entirely (see the sketch below).
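A minimal sketch of registering one of those built-in strategies through the builder (assuming the BodyOnlyIfStatusAtLeastStrategy class from logbook-core, which corresponds to the body-only-if-status-at-least configuration value mentioned later):

Logbook logbook = Logbook.builder()
    // sketch: only log bodies for responses with status >= 400
    .strategy(new BodyOnlyIfStatusAtLeastStrategy(400))
    .build();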
Starting with version 3.4.0, Logbook is equipped with a feature called Attribute Extractor. Attributes are basically a list of key/value pairs that can be extracted from the request and/or response and logged with them. The idea sprouted from issue 381, where a feature was requested to extract the subject claim from JWT tokens in the Authorization header.
The AttributeExtractor interface has two extract methods: one that can extract attributes from the request only, and one that has both the request and the response at its disposal. Both return an instance of the HttpAttributes class, which is basically a fancy Map<String, Object>. Notice that since the map values are of type Object, they should have a proper toString() method in order to appear in the logs in a meaningful way. Alternatively, log formatters can work around this by implementing their own serialization logic. For instance, the built-in log formatter JsonHttpLogFormatter uses ObjectMapper to serialize the values.
Here is an example:
final class OriginExtractor implements AttributeExtractor {
@Override
public HttpAttributes extract(final HttpRequest request) {
return HttpAttributes.of("origin", request.getOrigin());
}
}
Logbook must then be created by registering this attribute extractor:
final Logbook logbook = Logbook.builder()
.attributeExtractor(new OriginExtractor())
.build();
This will result in request logs that include something like:
"attributes":{"origin":"LOCAL"}
For more advanced examples, look at the JwtFirstMatchingClaimExtractor and JwtAllMatchingClaimsExtractor classes. The former extracts the first claim matching a list of claim names from the request JWT token. The latter extracts all claims matching a list of claim names from the request JWT token.
If you need to incorporate multiple AttributeExtractors, you can use the CompositeAttributeExtractor class:
final List<AttributeExtractor> extractors = List.of(
extractor1,
extractor2,
extractor3
);
final Logbook logbook = Logbook.builder()
.attributeExtractor(new CompositeAttributeExtractor(extractors))
.build();
Logbook works in several different phases. Each phase is represented by one or more interfaces that can be used for customization, and every phase has a sensible default.
Logging HTTP messages and including their bodies is a rather expensive task, so it makes a lot of sense to disable logging for certain requests. A common use case would be to ignore health check requests from a load balancer, or any request to management endpoints typically issued by developers.
Defining a condition is as easy as writing a special Predicate that decides whether a request (and its corresponding response) should be logged or not. Alternatively, you can use and combine predefined predicates:
Logbook logbook = Logbook.builder()
.condition(exclude(
requestTo("/health"),
requestTo("/admin/**"),
contentType("application/octet-stream"),
header("X-Secret", newHashSet("1", "true")::contains)))
.build();
Exclusion patterns, e.g. /admin/**, loosely follow Ant's style of path patterns without taking the query string of the URL into consideration.
The goal of Filtering is to prevent the logging of certain sensitive parts of HTTP requests and responses. This usually includes the Authorization header, but could also apply to certain plaintext query or form parameters — e.g. password.
Logbook supports different types of filters:
Type | Operates on | Applies to | Default |
---|---|---|---|
QueryFilter | Query string | request | access_token |
PathFilter | Path | request | n/a |
HeaderFilter | Header (single key-value pair) | both | Authorization |
BodyFilter | Content-Type and body | both | json: access_token and refresh_token; form: client_secret and password |
RequestFilter | HttpRequest | request | Replace binary, multipart and stream bodies. |
ResponseFilter | HttpResponse | response | Replace binary, multipart and stream bodies. |
QueryFilter, PathFilter, HeaderFilter and BodyFilter are relatively high-level and should cover all needs in ~90% of all cases. For more complicated setups one should fall back to the low-level variants, i.e. RequestFilter and ResponseFilter respectively (in conjunction with ForwardingHttpRequest/ForwardingHttpResponse).
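As a rough illustration of the low-level approach (a sketch only; which accessors to override and how to redact them is up to you), a RequestFilter can wrap the original request in a ForwardingHttpRequest and override just the parts it needs to change:

// sketch: forward everything to the original request, overriding
// individual accessors as needed for redaction
RequestFilter customFilter = request -> new ForwardingHttpRequest() {
    @Override
    public HttpRequest delegate() {
        return request;
    }
    // override the accessors you need to redact here
};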
You can configure filters like this:
import static org.zalando.logbook.core.HeaderFilters.authorization;
import static org.zalando.logbook.core.HeaderFilters.eachHeader;
import static org.zalando.logbook.core.QueryFilters.accessToken;
import static org.zalando.logbook.core.QueryFilters.replaceQuery;
Logbook logbook = Logbook.builder()
.requestFilter(RequestFilters.replaceBody(message -> contentType("audio/*").test(message) ? "mmh mmh mmh mmh" : null))
.responseFilter(ResponseFilters.replaceBody(message -> contentType("*/*-stream").test(message) ? "It just keeps going and going..." : null))
.queryFilter(accessToken())
.queryFilter(replaceQuery("password", "<secret>"))
.headerFilter(authorization())
.headerFilter(eachHeader("X-Secret"::equalsIgnoreCase, "<secret>"))
.build();
You can configure as many filters as you want - they will run consecutively.
You can apply JSON Path filtering to JSON bodies. Here are some examples:
import static org.zalando.logbook.json.JsonPathBodyFilters.jsonPath;
import static java.util.regex.Pattern.compile;
Logbook logbook = Logbook.builder()
.bodyFilter(jsonPath("$.password").delete())
.bodyFilter(jsonPath("$.active").replace("unknown"))
.bodyFilter(jsonPath("$.address").replace("X"))
.bodyFilter(jsonPath("$.name").replace(compile("^(\\w).+"), "$1."))
.bodyFilter(jsonPath("$.friends.*.name").replace(compile("^(\\w).+"), "$1."))
.bodyFilter(jsonPath("$.grades.*").replace(1.0))
.build();
Take a look at the following example, before and after filtering was applied:
Logbook uses a correlation id to correlate requests and responses. This allows matching related requests and responses that would usually be located in different places in the log file.
If the default implementation of the correlation id is insufficient for your use case, you may provide a custom implementation:
Logbook logbook = Logbook.builder()
.correlationId(new CustomCorrelationId())
.build();
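For illustration, a minimal sketch of such a custom implementation (CustomCorrelationId above is just a placeholder; this assumes CorrelationId is a functional interface that derives an id from the request) could simply generate a random UUID:

Logbook logbook = Logbook.builder()
    // sketch: one random UUID per request
    .correlationId(request -> UUID.randomUUID().toString())
    .build();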
Formatting defines how requests and responses are transformed into strings. Formatters do not specify where requests and responses are logged to — writers do that work.
Logbook comes with two different default formatters: HTTP and JSON.
HTTP is the default formatting style, provided by the DefaultHttpLogFormatter. It is primarily designed to be used for local development and debugging, not for production use. This is because it’s not as readily machine-readable as JSON.
Incoming Request: 2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b
GET http://example.org/test HTTP/1.1
Accept: application/json
Host: localhost
Content-Type: text/plain
Hello world!
Outgoing Response: 2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b
Duration: 25 ms
HTTP/1.1 200
Content-Type: application/json
{"value":"Hello world!"}
JSON is an alternative formatting style, provided by the JsonHttpLogFormatter. Unlike HTTP, it is primarily designed for production use — parsers and log consumers can easily consume it.
Requires the following dependency:
<dependency>
<groupId>org.zalando</groupId>
<artifactId>logbook-json</artifactId>
</dependency>
{
"origin": "remote",
"type": "request",
"correlation": "2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b",
"protocol": "HTTP/1.1",
"sender": "127.0.0.1",
"method": "GET",
"uri": "http://example.org/test",
"host": "example.org",
"path": "/test",
"scheme": "http",
"port": null,
"headers": {
"Accept": ["application/json"],
"Content-Type": ["text/plain"]
},
"body": "Hello world!"
}
{
"origin": "local",
"type": "response",
"correlation": "2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b",
"duration": 25,
"protocol": "HTTP/1.1",
"status": 200,
"headers": {
"Content-Type": ["text/plain"]
},
"body": "Hello world!"
}
Note: Bodies of type application/json (and application/*+json) will be inlined into the resulting JSON tree. I.e., a JSON response body will not be escaped and represented as a string:
{
"origin": "local",
"type": "response",
"correlation": "2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b",
"duration": 25,
"protocol": "HTTP/1.1",
"status": 200,
"headers": {
"Content-Type": ["application/json"]
},
"body": {
"greeting": "Hello, world!"
}
}
The Common Log Format (CLF) is a standardized text file format used by web servers when generating server log files. The format is supported via the CommonsLogFormatSink:
185.85.220.253 - - [02/Aug/2019:08:16:41 0000] "GET /search?q=zalando HTTP/1.1" 200 -
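Wiring it in could look like the following sketch (assuming CommonsLogFormatSink accepts an HttpLogWriter, analogous to the other sinks shown in this document):

Logbook logbook = Logbook.builder()
    .sink(new CommonsLogFormatSink(new DefaultHttpLogWriter()))
    .build();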
The Extended Log Format (ELF) is a standardised text file format, like the Common Log Format (CLF), that is used by web servers when generating log files, but ELF files provide more information and flexibility. The format is supported via the ExtendedLogFormatSink. Also see the W3C document.
Default fields:
date time c-ip s-dns cs-method cs-uri-stem cs-uri-query sc-status sc-bytes cs-bytes time-taken cs-protocol cs(User-Agent) cs(Cookie) cs(Referrer)
Default log output example:
2019-08-02 08:16:41 185.85.220.253 localhost POST /search ?q=zalando 200 21 20 0.125 HTTP/1.1 "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:47.0) Gecko/20100101 Firefox/47.0" "name=value" "https://example.com/page?q=123"
Users may override the default fields with their custom fields through the constructor of ExtendedLogFormatSink:
new ExtendedLogFormatSink(new DefaultHttpLogWriter(), "date time cs(Custom-Request-Header) sc(Custom-Response-Header)")
For HTTP header fields, cs(Any-Header) and sc(Any-Header), users can specify any headers they want to extract from the request. Other supported fields are listed in ExtendedLogFormatSink.Field and can be used in the custom field expression.
cURL is an alternative formatting style, provided by the CurlHttpLogFormatter, which will render requests as executable cURL commands. Unlike JSON, it is primarily designed for humans.
curl -v -X GET 'http://localhost/test' -H 'Accept: application/json'
See HTTP or provide your own fallback for responses:
new CurlHttpLogFormatter(new JsonHttpLogFormatter());
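To actually use it, combine the formatter with a writer via a DefaultSink, as with any other formatter (a minimal sketch):

Logbook logbook = Logbook.builder()
    .sink(new DefaultSink(
        new CurlHttpLogFormatter(),
        new DefaultHttpLogWriter()
    ))
    .build();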
Splunk is an alternative formatting style, provided by the SplunkHttpLogFormatter, which will render requests and responses as key-value pairs.
origin=remote type=request correlation=2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b protocol=HTTP/1.1 sender=127.0.0.1 method=POST uri=http://example.org/test host=example.org scheme=http port=null path=/test headers={Accept=[application/json], Content-Type=[text/plain]} body=Hello world!
origin=local type=response correlation=2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b duration=25 protocol=HTTP/1.1 status=200 headers={Content-Type=[text/plain]} body=Hello world!
Writing defines where formatted requests and responses are written to. Logbook comes with three implementations: Logger, Stream and Chunking.
By default, requests and responses are logged with an SLF4J logger that uses the org.zalando.logbook.Logbook category and the log level trace. This can be customized:
Logbook logbook = Logbook.builder()
.sink(new DefaultSink(
new DefaultHttpLogFormatter(),
new DefaultHttpLogWriter()
))
.build();
An alternative implementation is to log requests and responses to a PrintStream, e.g. System.out or System.err. This is usually a bad choice for running in production, but can sometimes be useful for short-term local development and/or investigation.
Logbook logbook = Logbook.builder()
.sink(new DefaultSink(
new DefaultHttpLogFormatter(),
new StreamHttpLogWriter(System.err)
))
.build();
The ChunkingSink will split long messages into smaller chunks and will write them individually while delegating to another sink:
Logbook logbook = Logbook.builder()
.sink(new ChunkingSink(sink, 1000))
.build();
The combination of HttpLogFormatter and HttpLogWriter suits most use cases well, but it has limitations. Implementing the Sink interface directly allows for more sophisticated use cases, e.g. writing requests/responses to a structured persistent storage like a database. Multiple sinks can be combined into one using the CompositeSink.
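A rough sketch of such a custom sink (assuming the Sink interface exposes one write method for the request and one for the request/response pair; the persistence calls are hypothetical placeholders):

class DatabaseSink implements Sink {

    @Override
    public void write(Precorrelation precorrelation, HttpRequest request) {
        // e.g. insert the request into a table, keyed by precorrelation.getId()
    }

    @Override
    public void write(Correlation correlation, HttpRequest request, HttpResponse response) {
        // e.g. update the row identified by correlation.getId() with the response
    }
}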
You’ll have to register the LogbookFilter as a Filter in your filter chain — either in your web.xml file (please note that the xml approach will use all the defaults and is not configurable):
<filter>
<filter-name>LogbookFilter</filter-name>
<filter-class>org.zalando.logbook.servlet.LogbookFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>LogbookFilter</filter-name>
<url-pattern>/*</url-pattern>
<dispatcher>REQUEST</dispatcher>
<dispatcher>ASYNC</dispatcher>
</filter-mapping>
or programmatically, via the ServletContext:
context.addFilter("LogbookFilter", new LogbookFilter(logbook))
.addMappingForUrlPatterns(EnumSet.of(REQUEST, ASYNC), true, "/*");
Beware: The ERROR dispatch is not supported. You're strongly advised to produce error responses within the REQUEST or ASYNC dispatch.
The LogbookFilter will, by default, treat requests with an application/x-www-form-urlencoded body no differently from any other request, i.e. you will see the request body in the logs. The downside of this approach is that you won't be able to use any of the HttpServletRequest.getParameter*(..) methods. See issue #94 for some more details.
As of Logbook 1.5.0, you can specify one of three strategies that define how Logbook deals with this situation by using the logbook.servlet.form-request system property (see the example after the following table):
Value | Pros | Cons |
---|---|---|
body (default) | Body is logged | Downstream code can not use getParameter*() |
parameter | Body is logged (but it's reconstructed from parameters) | Downstream code can not use getInputStream() |
off | Downstream code can decide whether to use getInputStream() or getParameter*() | Body is not logged |
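For example, to switch to the parameter strategy, the property could be set early at startup, before the filter handles any request (a sketch; any of the three values from the table works the same way):

// e.g. at the very beginning of your application's main method
System.setProperty("logbook.servlet.form-request", "parameter");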
Secure applications usually need a slightly different setup. You should generally avoid logging unauthorized requests, especially the body, because it quickly allows attackers to flood your logfile — and, consequently, your precious disk space. Assuming that your application handles authorization inside another filter, you have two choices: skip logging unauthorized requests entirely, or log unauthorized and authorized requests differently.
You can easily achieve the former setup by placing the LogbookFilter after your security filter. The latter is a little bit more sophisticated. You’ll need two LogbookFilter instances — one before your security filter, and one after it:
context.addFilter("SecureLogbookFilter", new SecureLogbookFilter(logbook))
.addMappingForUrlPatterns(EnumSet.of(REQUEST, ASYNC), true, "/*");
context.addFilter("securityFilter", new SecurityFilter())
.addMappingForUrlPatterns(EnumSet.of(REQUEST), true, "/*");
context.addFilter("LogbookFilter", new LogbookFilter(logbook))
.addMappingForUrlPatterns(EnumSet.of(REQUEST, ASYNC), true, "/*");
The first logbook filter will log unauthorized requests only. The second filter will log authorized requests, as always.
The logbook-httpclient module contains both an HttpRequestInterceptor and an HttpResponseInterceptor to use with the HttpClient:
CloseableHttpClient client = HttpClientBuilder.create()
.addInterceptorFirst(new LogbookHttpRequestInterceptor(logbook))
.addInterceptorFirst(new LogbookHttpResponseInterceptor())
.build();
Since the LogbookHttpResponseInterceptor is incompatible with the HttpAsyncClient, there is another way to log responses:
CloseableHttpAsyncClient client = HttpAsyncClientBuilder.create()
.addInterceptorFirst(new LogbookHttpRequestInterceptor(logbook))
.build();
// and then wrap your response consumer
client.execute(producer, new LogbookHttpAsyncResponseConsumer<>(consumer), callback)
The logbook-httpclient5 module contains an ExecHandler to use with the HttpClient:
CloseableHttpClient client = HttpClientBuilder.create()
.addExecInterceptorFirst("Logbook", new LogbookHttpExecHandler(logbook))
.build();
The handler should be added first, such that compression is performed after logging and decompression is performed before logging.
To avoid a breaking change, there is also an HttpRequestInterceptor and an HttpResponseInterceptor to use with the HttpClient, which work fine as long as compression (or other ExecHandlers) is not used:
CloseableHttpClient client = HttpClientBuilder.create()
.addRequestInterceptorFirst(new LogbookHttpRequestInterceptor(logbook))
.addResponseInterceptorFirst(new LogbookHttpResponseInterceptor())
.build();
Since the LogbookHttpResponseInterceptor is incompatible with the HttpAsyncClient, there is another way to log responses:
CloseableHttpAsyncClient client = HttpAsyncClientBuilder.create()
.addRequestInterceptorFirst(new LogbookHttpRequestInterceptor(logbook))
.build();
// and then wrap your response consumer
client.execute(producer, new LogbookHttpAsyncResponseConsumer<>(consumer), callback)
[!NOTE] Support for JAX-RS 2.x
JAX-RS 2.x (legacy) support was dropped in Logbook versions 3.0 through 3.6.
As of Logbook 3.7, JAX-RS 2.x support is back.
However, you need to add the javax classifier to use the proper Logbook module:
<dependency>
<groupId>org.zalando</groupId>
<artifactId>logbook-jaxrs</artifactId>
<version>${logbook.version}</version>
<classifier>javax</classifier>
</dependency>
You should also make sure that the following dependencies are on your classpath. By default, logbook-jaxrs imports jersey-client 3.x, which is not compatible with JAX-RS 2.x:
The logbook-jaxrs module contains:
A LogbookClientFilter to be used for applications making HTTP requests:
client.register(new LogbookClientFilter(logbook));
A LogbookServerFilter to be used with HTTP servers:
resourceConfig.register(new LogbookServerFilter(logbook));
The logbook-jdkserver module provides support for the JDK HTTP server and contains:
A LogbookFilter to be used with the built-in server:
httpServer.createContext(path,handler).getFilters().add(new LogbookFilter(logbook))
The logbook-netty module contains:
A LogbookClientHandler to be used with an HttpClient:
HttpClient httpClient =
HttpClient.create()
.doOnConnected(
(connection -> connection.addHandlerLast(new LogbookClientHandler(logbook)))
);
A LogbookServerHandler to be used with an HttpServer:
HttpServer httpServer =
HttpServer.create()
.doOnConnection(
connection -> connection.addHandlerLast(new LogbookServerHandler(logbook))
);
Users of Spring WebFlux can pick any of the following options (see the sketch after this list):
- NettyWebServer (passing an HttpServer)
- NettyServerCustomizer
- ReactorClientHttpConnector (passing an HttpClient)
- WebClientCustomizer
- logbook-spring-webflux
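A rough sketch of the NettyServerCustomizer option (the bean name is illustrative; this assumes Spring Boot's reactive Netty server and the LogbookServerHandler from logbook-netty):

@Bean
NettyServerCustomizer logbookNettyServerCustomizer(Logbook logbook) {
    // add the Logbook handler to every incoming connection
    return httpServer -> httpServer.doOnConnection(
            connection -> connection.addHandlerLast(new LogbookServerHandler(logbook)));
}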
Users of Micronaut can follow the official docs on how to integrate Logbook with Micronaut.
:warning: Even though Quarkus and Vert.x use Netty under the hood, unfortunately neither of them allows accessing or customizing it (yet).
The logbook-okhttp2 module contains an Interceptor to use with version 2.x of the OkHttpClient:
OkHttpClient client = new OkHttpClient();
client.networkInterceptors().add(new LogbookInterceptor(logbook));
If you're expecting gzip-compressed responses, you need to register our GzipInterceptor in addition. The transparent gzip support built into OkHttp will run after any network interceptor, which forces logbook to log compressed binary responses.
OkHttpClient client = new OkHttpClient();
client.networkInterceptors().add(new LogbookInterceptor(logbook));
client.networkInterceptors().add(new GzipInterceptor());
The logbook-okhttp module contains an Interceptor to use with version 3.x of the OkHttpClient:
OkHttpClient client = new OkHttpClient.Builder()
.addNetworkInterceptor(new LogbookInterceptor(logbook))
.build();
If you're expecting gzip-compressed responses, you need to register our GzipInterceptor in addition. The transparent gzip support built into OkHttp will run after any network interceptor, which forces logbook to log compressed binary responses.
OkHttpClient client = new OkHttpClient.Builder()
.addNetworkInterceptor(new LogbookInterceptor(logbook))
.addNetworkInterceptor(new GzipInterceptor())
.build();
The logbook-ktor-client module contains:
A LogbookClient to be used with an HttpClient:
private val client = HttpClient(CIO) {
install(LogbookClient) {
logbook = logbook
}
}
The logbook-ktor-server module contains:
A LogbookServer to be used with an Application:
private val server = embeddedServer(CIO) {
install(LogbookServer) {
logbook = logbook
}
}
Alternatively, you can use logbook-ktor, which ships both the logbook-ktor-client and logbook-ktor-server modules.
The logbook-spring module contains a ClientHttpRequestInterceptor to use with RestTemplate:
LogbookClientHttpRequestInterceptor interceptor = new LogbookClientHttpRequestInterceptor(logbook);
RestTemplate restTemplate = new RestTemplate();
restTemplate.getInterceptors().add(interceptor);
Logbook comes with a convenient auto configuration for Spring Boot users. It automatically sets up all of the necessary parts with sensible defaults.
Instead of declaring a dependency on logbook-core, declare one on the Spring Boot Starter:
<dependency>
<groupId>org.zalando</groupId>
<artifactId>logbook-spring-boot-starter</artifactId>
<version>${logbook.version}</version>
</dependency>
Every bean can be overridden and customized if needed, e.g. like this:
@Bean
public BodyFilter bodyFilter() {
return merge(
defaultValue(),
replaceJsonStringProperty(singleton("secret"), "XXX"));
}
Please refer to LogbookAutoConfiguration or the following table to see a list of possible integration points:
Type | Name | Default |
---|---|---|
FilterRegistrationBean | secureLogbookFilter | Based on LogbookFilter |
FilterRegistrationBean | logbookFilter | Based on LogbookFilter |
Logbook | | Based on condition, filters, formatter and writer |
Predicate<HttpRequest> | requestCondition | No filter; is later combined with logbook.predicate.include and logbook.predicate.exclude |
HeaderFilter | | Based on logbook.obfuscate.headers |
PathFilter | | Based on logbook.obfuscate.paths |
QueryFilter | | Based on logbook.obfuscate.parameters |
BodyFilter | | BodyFilters.defaultValue(), see filtering |
RequestFilter | | RequestFilters.defaultValue(), see filtering |
ResponseFilter | | ResponseFilters.defaultValue(), see filtering |
Strategy | | DefaultStrategy |
AttributeExtractor | | NoOpAttributeExtractor |
Sink | | DefaultSink |
HttpLogFormatter | | JsonHttpLogFormatter |
HttpLogWriter | | DefaultHttpLogWriter |
Multiple filters are merged into one.
logbook-spring
Some classes from logbook-spring are included in the auto configuration. You can autowire the LogbookClientHttpRequestInterceptor with code like:
private final RestTemplate restTemplate;
MyClient(RestTemplateBuilder builder, LogbookClientHttpRequestInterceptor interceptor){
this.restTemplate = builder
.additionalInterceptors(interceptor)
.build();
}
The following tables show the available configuration (sorted alphabetically):
Configuration | Description | Default |
---|---|---|
logbook.attribute-extractors | List of AttributeExtractors, including configurations such as type (currently JwtFirstMatchingClaimExtractor or JwtAllMatchingClaimsExtractor), claim-names and claim-key. | [] |
logbook.filter.enabled | Enable the LogbookFilter | true |
logbook.filter.form-request-mode | Determines how form requests are handled | body |
logbook.filters.body.default-enabled | Enables/disables default body filters that are collected by java.util.ServiceLoader | true |
logbook.format.style | Formatting style (http, json, curl or splunk) | json |
logbook.httpclient.decompress-response | Enables/disables additional decompression process for HttpClient with gzip encoded body (for logging purposes only). This means extra decompression and possible performance impact. | false (disabled) |
logbook.minimum-status | Minimum status to enable logging (status-at-least and body-only-if-status-at-least) | 400 |
logbook.obfuscate.headers | List of header names that need obfuscation | [Authorization] |
logbook.obfuscate.json-body-fields | List of JSON body fields to be obfuscated | [] |
logbook.obfuscate.parameters | List of parameter names that need obfuscation | [access_token] |
logbook.obfuscate.paths | List of paths that need obfuscation. Check Filtering for syntax. | [] |
logbook.obfuscate.replacement | A value to be used instead of an obfuscated one | XXX |
logbook.predicate.include | Include only certain paths and methods (if defined) | [] |
logbook.predicate.exclude | Exclude certain paths and methods (overrides logbook.predicate.include) | [] |
logbook.secure-filter.enabled | Enable the SecureLogbookFilter | true |
logbook.strategy | Strategy (default, status-at-least, body-only-if-status-at-least, without-body) | default |
logbook.write.chunk-size | Splits log lines into smaller chunks of size up to chunk-size. | 0 (disabled) |
logbook.write.max-body-size | Truncates the body up to max-body-size characters and appends .... :warning: Logbook will still buffer the full body, if the request is eligible for logging, regardless of the logbook.write.max-body-size value | -1 (disabled) |
logbook:
  predicate:
    include:
      - path: /api/**
        methods:
          - GET
          - POST
      - path: /actuator/**
    exclude:
      - path: /actuator/health
      - path: /api/admin/**
        methods:
          - POST
  filter.enabled: true
  secure-filter.enabled: true
  format.style: http
  strategy: body-only-if-status-at-least
  minimum-status: 400
  obfuscate:
    headers:
      - Authorization
      - X-Secret
    parameters:
      - access_token
      - password
  write:
    chunk-size: 1000
  attribute-extractors:
    - type: JwtFirstMatchingClaimExtractor
      claim-names: [ "sub", "subject" ]
      claim-key: Principal
    - type: JwtAllMatchingClaimsExtractor
      claim-names: [ "sub", "iat" ]
For a basic Logback configuration
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>
configure Logbook with a LogstashLogbackSink:
HttpLogFormatter formatter = new JsonHttpLogFormatter();
LogstashLogbackSink sink = new LogstashLogbackSink(formatter);
for outputs like
{
"@timestamp" : "2019-03-08T09:37:46.239+01:00",
"@version" : "1",
"message" : "GET http://localhost/test?limit=1",
"logger_name" : "org.zalando.logbook.Logbook",
"thread_name" : "main",
"level" : "TRACE",
"level_value" : 5000,
"http" : {
// logbook request/response contents
}
}
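To wire the sink in, register it with the builder like any other sink (a minimal sketch, reusing the sink created above):

Logbook logbook = Logbook.builder()
    .sink(sink)
    .build();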
You have the flexibility to customize the default logging level by initializing LogstashLogbackSink with a specific level. For instance:
LogstashLogbackSink sink = new LogstashLogbackSink(formatter, Level.INFO);
The Logbook servlet filter interferes with downstream code using getWriter and/or getParameter*(); see Servlet for more details. It also does not support the ERROR dispatch, and you're strongly encouraged to not use it to produce error responses.
If you have questions, concerns, bug reports, etc., please file an issue in this repository's Issue Tracker.
To contribute, simply make a pull request and add a brief description (1-2 sentences) of your addition or change. For more details, check the contribution guidelines.
Grand Turk, a replica of a three-masted 6th rate frigate from Nelson's days - logbook and charts by JoJan is licensed under a Creative Commons (Attribution-Share Alike 3.0 Unported).