A deeper look into the code
This chapter details everything in Woofer that is coded or configured specifically for logging.
Featured logging tools
Woofer uses the following libraries:
- SLF4J and Logback as the logging facade and implementation (the default with Spring Boot).
- logstash-logback-encoder, a great Logback add-on that enables shipping logs directly to Logstash in JSON format over TCP or UDP.
- Spring Cloud Sleuth for distributed tracing & logs correlation.
- our orange-mathoms-logging library, which provides additional logging utilities.
Distributed tracing & logs correlation
Especially in a microservices architecture, a request may go through several tiers (Apache, J2EE server, database...), and it is not a trivial task to understand and follow the call flow and spot the root cause of an error.
That's why it is so important to have a way of correlating logs that are produced by different tiers but relate to the same request processing.
This is demonstrated in our Incident Analysis section.
Distributed tracing feature in Woofer is brought by Spring Cloud Sleuth.
It only needs to be installed (follow the quick start), and here we go:
- logs are automatically enriched with the tracing context:
  - X-B3-TraceId: unique ID of an overall request processing (across several services),
  - X-B3-SpanId: unique ID of a basic unit of work (one thread on one server),
  - X-Span-Export: whether or not the current span is exported.
- tracing context is automatically propagated upon calling other services (through standard HTTP headers).
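Sleuth stores these fields in the SLF4J MDC, so any Logback appender can include them. For instance, a console pattern referencing the MDC could look like this (a minimal sketch, not Woofer's actual configuration):

```xml
<!-- sketch: include Sleuth's MDC fields in a plain console log pattern -->
<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
  <encoder>
    <pattern>%d{HH:mm:ss.SSS} [%X{X-B3-TraceId},%X{X-B3-SpanId}] %-5level %logger - %msg%n</pattern>
  </encoder>
</appender>
```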
Enrich logs with user IDs
Logs from the woofer-webfront component are enriched with a userId field, which is the user's login.
This is done by the PrincipalFilter from the orange-mathoms-logging library (see doc).
Enrich logs with session IDs
Logs from the woofer-webfront component are enriched with a sessionId field, which is the current user's session ID.
This is done by the SessionIdFilter from the orange-mathoms-logging library (see doc).
Enrich stack traces with unique signatures
The idea is to generate a short, unique ID that identifies your stack trace.
It is an easy way to track an error from the client (UI and/or API) down to your logs, count its frequency, and make sure a problem has been fixed for good.
This is done by the ShortenedThrowableConverter and StackHashJsonProvider components from the logstash-logback-encoder library (available from version 4.11; see doc).
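To make the mechanism concrete, here is a simplified, hypothetical re-implementation of such a signature: it hashes the exception class and the class/method of each stack frame (the real StackHashJsonProvider additionally handles root causes and exclusion patterns).

```java
import java.util.zip.CRC32;

public class StackSignature {
    // Compute a short, stable signature from an exception's type and stack frames.
    // Line numbers are deliberately ignored so the hash stays stable across minor code moves.
    static String hash(Throwable t) {
        CRC32 crc = new CRC32();
        crc.update(t.getClass().getName().getBytes());
        for (StackTraceElement e : t.getStackTrace()) {
            crc.update((e.getClassName() + "." + e.getMethodName()).getBytes());
        }
        return String.format("%08x", crc.getValue());
    }
}
```

Two exceptions of the same type thrown from the same code path thus get the same 8-character signature, whatever their messages say.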
Error management
Error management in a web application is undoubtedly a complex task. It covers several topics:
- error to HTTP mapping: how should each Java error be mapped to an HTTP error? Which status code? Which (human readable) message? Generally speaking, you'll have to deal both with your own Java exceptions and with errors from the underlying framework (Spring or other).
- content negotiation: when an error occurs, depending on the requesting client, you may have to render a human readable web page, a JSON response, an XML response or something else.
- integration with your framework: the way you implement this logic will heavily depend on the framework you're using.
Error handlers in Woofer
The error handler is a technical component, tied to the underlying framework, in charge of handling Java exceptions (intercepting them, turning them into HTTP errors, and displaying them to the client in an appropriate way).
In Woofer, error handling is implemented by:
- AbstractGlobalErrorHandler: an abstract error handler that:
  - globally intercepts any unhandled Spring MVC exceptions,
  - exposes a global exception rendering endpoint that can be registered at the JEE container level and is therefore able to render non-Spring MVC errors (Spring Security or other),
  - maps any exception (Spring MVC or other) to:
    - an HTTP status,
    - an error code (based on common Orange Partner error codes),
    - a human-understandable error description.
- RestErrorHandler: an implementation that renders errors as JSON,
- RestAndHtmlErrorHandler: an implementation that supports both JSON and HTML rendering (using content negotiation).
Both behave quite differently depending on whether the error is a client error or an internal (server) error.
A client error (HTTP 4XX) is supposed to be due to a wrong usage of the API or application by the user. The error handler simply displays some hints about the error (if you have started Woofer locally, you may try this by navigating to a link with a missing parameter or a path that does not exist).
A server error (HTTP 5XX), on the contrary, is due to an internal issue (a bloody NullPointerException or any technical thing going wrong in your code), and as such needs specific treatment:
- never display a cryptic error message or ugly stack trace to the end user; instead display a generic "so sorry, we're working on it" error message ;-)
- log the original Java error with full details to allow further analysis and maybe raise an alarm in production,
- be able to have error traceability (see below).
That's what AbstractGlobalErrorHandler does in case of internal errors:
- generates a unique error ID,
- adds this ID to the response headers (X-Error-Uid),
- displays a generic error message that includes this unique ID (ex: Internal error [#d7506d00-99f2c6eb90682] occurred in request 'GET /misc/err/500'),
- logs the original Java error together with the unique error ID.
The unique error ID (displayed to the user) can then be used to retrieve the complete original stack trace and start incident analysis: that's error traceability.
NOTE: Spring supports lots of ways of managing exceptions (probably too many). I chose to implement it with:
- @ControllerAdvice + @ExceptionHandler: to globally intercept any unhandled Spring MVC exception,
- ErrorController: declares the component as the controller in charge of rendering any error at the JEE container level.
This is the design that best suits my needs (render ALL errors, including non-Spring MVC ones, and manage content negotiation).
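For illustration, an error ID of this shape can be built from a random part plus a time-derived part; this is a hypothetical sketch, not Woofer's actual scheme:

```java
import java.security.SecureRandom;

public class ErrorUid {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Build an ID such as "#1f3a9c2e-18c9e4a2b7f01": 8 random hex chars, then a
    // timestamp-based part that makes collisions across servers very unlikely.
    static String next() {
        long timePart = (System.currentTimeMillis() << 12) | RANDOM.nextInt(1 << 12);
        return String.format("#%08x-%x", RANDOM.nextInt(), timePart);
    }
}
```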
Shipping logs directly to Logstash
Using the logstash profile, Logback is configured to ship application and access logs directly to Logstash over TCP, using the amazing logstash-logback-encoder library.
Configuration for Java logs
The Logback configuration for Java logs also uses the ShortenedThrowableConverter and StackHashJsonProvider components (see doc) to enrich stack traces with unique signatures.
<?xml version="1.0" encoding="UTF-8"?>
<!-- application logging configuration to ship logs directly to Logstash -->
<configuration>
<!-- define exclusion patterns as a property -->
<property name="STE_EXCLUSIONS" value="\$\$FastClassByCGLIB\$\$,\$\$EnhancerBySpringCGLIB\$\$,^sun\.reflect\..*\.invoke,^com\.sun\.,^sun\.net\.,^net\.sf\.cglib\.proxy\.MethodProxy\.invoke,^org\.springframework\.cglib\.,^org\.springframework\.transaction\.,^org\.springframework\.validation\.,^org\.springframework\.app\.,^org\.springframework\.aop\.,^java\.lang\.reflect\.Method\.invoke,^org\.springframework\.ws\..*\.invoke,^org\.springframework\.ws\.transport\.,^org\.springframework\.ws\.soap\.saaj\.SaajSoapMessage\.,^org\.springframework\.ws\.client\.core\.WebServiceTemplate\.,^org\.springframework\.web\.filter\.,^org\.springframework\.boot\.web\.filter\.,^org\.springframework\.util\.ReflectionUtils\.invokeMethod$,^org\.apache\.tomcat\.,^org\.apache\.catalina\.,^org\.apache\.coyote\.,^java\.util\.concurrent\.ThreadPoolExecutor\.runWorker,^java\.lang\.Thread\.run$,^rx\."/>
<appender name="TCP" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
<!-- remote Logstash server -->
<remoteHost>${LOGSTASH_HOST}</remoteHost>
<port>${LOGSTASH_PORT}</port>
<encoder class="net.logstash.logback.encoder.LogstashEncoder">
<!-- computes and adds a 'stack_hash' field on errors -->
<provider class="net.logstash.logback.composite.loggingevent.StackHashJsonProvider">
<exclusions>${STE_EXCLUSIONS}</exclusions>
</provider>
<!-- enriches the stack trace with unique hash -->
<throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
<inlineHash>true</inlineHash>
<exclusions>${STE_EXCLUSIONS}</exclusions>
</throwableConverter>
<customFields>{"@project":"${MD_PROJECT:--}","@app":"webfront","@type":"java"}</customFields>
</encoder>
</appender>
<logger name="com.orange" level="DEBUG" />
<root level="INFO">
<appender-ref ref="TCP" />
</root>
</configuration>
NOTE: The Logstash server address is configured with the non-standard Spring configuration properties custom.logging.collector.host and custom.logging.collector.port.
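For instance, those properties could be set in application.yml like this (host and port values below are placeholders, not Woofer's actual setup):

```yaml
# hypothetical application.yml fragment (values are placeholders)
custom:
  logging:
    collector:
      host: logstash.example.com
      port: 4321
```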
With this configuration, a single Java log in Elasticsearch will look like this:
{
"@version": 1,
"@timestamp": "2017-03-17T14:21:25.643Z",
"host": "10.100.0.216",
"port": 46070,
"HOSTNAME": "oswewooffront",
"@app": "woofer-webfront",
"@type": "java",
"logger_name": "com.orange.oswe.demo.woofer.commons.error.RestErrorController",
"level": "ERROR",
"level_value": 40000,
"message": "Internal error [#fe8ad9ce-317d85359a8531] occurred in request 'POST /woofs'",
"thread_name": "http-nio-8080-exec-6",
"stack_trace": "#fe8ad9ce> com.netflix.hystrix.exception.HystrixRuntimeException: ...",
"stack_hash": "fe8ad9ce",
"userId": "bpitt",
"sessionId": "B3FD051F22C031B5A813B29D32EFF383",
"X-B3-TraceId": "8210b57467b85195",
"X-B3-SpanId": "8210b57467b85195",
"X-Span-Export": "false"
}
Field | Description
---|---
@version | standard Elasticsearch field, set by Logstash
@timestamp | standard Elasticsearch field, set by Logback
host | set by Logstash
port | set by Logstash
HOSTNAME | set by Logback
@app | custom field set by configuration (the name of the originating microservice)
@type | custom field set by configuration (the type of log)
logger_name | set by Logback
level | set by Logback
level_value | set by Logback
message | set by Logback
thread_name | set by Logback
stack_trace | set by Logback, content filled in by the ShortenedThrowableConverter component
stack_hash | custom field set by the StackHashJsonProvider component
userId | custom MDC field set by the PrincipalFilter component
sessionId | custom MDC field set by the SessionIdFilter component
X-B3-TraceId | MDC field set by Spring Cloud Sleuth for logs correlation
X-B3-SpanId | MDC field set by Spring Cloud Sleuth for logs correlation
X-Span-Export | MDC field set by Spring Cloud Sleuth for logs correlation
Configuration for access logs (embedded tomcat)
Generally speaking, Logback integration with Spring Boot is quite seamless for Java logs, but that is far from the case for embedded Tomcat access logs. Spring Boot will probably improve on this aspect in the future.
In Woofer, access logs are configured by the TomcatCustomizerForLogback class by adding the LogbackValve to Tomcat's context.
Here is the logback xml configuration for access log using the logstash profile:
<?xml version="1.0" encoding="UTF-8"?>
<!-- access log configuration to ship logs directly to Logstash -->
<configuration>
<statusListener class="ch.qos.logback.core.status.OnConsoleStatusListener" />
<!-- TCP -->
<appender name="TCP" class="net.logstash.logback.appender.LogstashAccessTcpSocketAppender">
<!-- remote Logstash server -->
<remoteHost>${LOGSTASH_HOST}</remoteHost>
<port>${LOGSTASH_PORT}</port>
<encoder class="net.logstash.logback.encoder.AccessEventCompositeJsonEncoder">
<providers>
<timestamp/>
<version/>
<pattern>
<pattern>
{
"@project":"${MD_PROJECT:--}",
"@app":"webfront",
"@type":"http",
"X-B3-TraceId":"%header{X-B3-TraceId}",
"req": {
"host": "%clientHost",
"url": "%requestURL",
"meth": "%requestMethod",
"uri": "%requestURI"
},
"resp": {
"status": "#asLong{%statusCode}",
"size": "#asLong{%bytesSent}"
},
"elapsed": "#asLong{%elapsedTime}"
}
</pattern>
</pattern>
</providers>
</encoder>
</appender>
<appender-ref ref="TCP" />
</configuration>
NOTE: You can see that this configuration allows us to control exactly the JSON structure of an access log sent to Elasticsearch via Logstash.
Notice that the access logs also use the same custom fields as the Java logs (@app, @type and @project, with @type equal to http).
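Given the encoder pattern above, an access log document sent to Elasticsearch will look roughly like this (field values are illustrative):

```json
{
  "@timestamp": "2017-03-17T14:21:25.643Z",
  "@version": 1,
  "@project": "-",
  "@app": "webfront",
  "@type": "http",
  "X-B3-TraceId": "8210b57467b85195",
  "req": {
    "host": "10.100.0.216",
    "url": "GET /woofs HTTP/1.1",
    "meth": "GET",
    "uri": "/woofs"
  },
  "resp": {
    "status": 200,
    "size": 5432
  },
  "elapsed": 12
}
```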
For more info about Logback & access logs, have a look at:
- Logback access
- Logback PatternLayout
- logstash-logback-encoder AccessEvent patterns
Logstash configuration
In order to send logs directly to Logstash in JSON format, you simply have to set up a Logstash tcp input with the json codec as follows:
input {
tcp {
port => 4321
codec => json
}
}
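Typically this input is paired with an Elasticsearch output; a minimal sketch (the host is an assumption, not Woofer's actual setup):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```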