This is a simple example showing how to extract information from stunnel service log lines. Just create a new pipeline filtering on service:stunnel and the host where your stunnel instance is running (e.g. host:my-server).
Create a new Processor and select type Grok Parser.
Add these lines under the Define 1 or multiple parsing rules box:
stunnel.service.accepted_connection_from %{_syslog_timestamp} LOG%{_syslog_severity}\[%{_session_id}\]\: Service \[%{_service_name}\] accepted connection from %{_client_ip}\:%{_client_port}
stunnel.s_connect %{_syslog_timestamp} LOG%{_syslog_severity}\[%{_session_id}\]\: (s_connect|transfer)\: (connect|connected|connecting|s_poll_wait) %{_backend_ip}\:%{_backend_port}(\: %{_error_message})?
stunnel.service.connected_remote_server_from %{_syslog_timestamp} LOG%{_syslog_severity}\[%{_session_id}\]\: Service \[%{_service_name}\] connected remote server from %{_local_ip}\:%{_local_port}
stunnel.connection.closed_reset %{_syslog_timestamp} LOG%{_syslog_severity}\[%{_session_id}\]\: Connection (closed|reset)\: %{_byte_sent_to_ssl} byte\(s\) sent to SSL\, %{_byte_sent_to_socket} byte\(s\) sent to socket
stunnel.certificate.accepted %{_syslog_timestamp} LOG%{_syslog_severity}\[%{_session_id}\]\: Certificate accepted at depth\=%{_cert_depth}\: %{_cert_info}
stunnel.fallback %{_syslog_timestamp} LOG%{_syslog_severity}\[%{_session_id}\]\: %{_error_message}
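To sanity-check the first rule before pasting it into Datadog, here is a rough Python approximation of it. The sample log line, session id, and IP values are made up for illustration, and Datadog's grok engine is not plain regex, so treat this as a sketch only:

```python
import re

# Rough regex equivalent of the stunnel.service.accepted_connection_from
# grok rule; the named groups mirror the helper rules defined further below.
PATTERN = re.compile(
    r"^(?P<timestamp>\d{4}\.\d{2}\.\d{2} \d{2}:\d{2}:\d{2}) "
    r"LOG(?P<severity>\d)\[(?P<session_id>[^\]]+)\]: "
    r"Service \[(?P<service_name>[^\]]+)\] accepted connection from "
    r"(?P<client_ip>[^:]+):(?P<client_port>\d{1,5})$"
)

# Hypothetical stunnel log line (made-up values)
line = "2018.09.03 10:15:42 LOG6[42]: Service [https] accepted connection from 203.0.113.7:54321"

m = PATTERN.match(line)
if m:
    print(m.group("service_name"), m.group("client_ip"), m.group("client_port"))
```

If the line matches, each named group corresponds to one attribute the Grok Parser will extract.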
Open up Advanced Settings: in the Extract from field declare message, and in the Helper Rules box add these helper rules:
_backend_ip %{ipOrHost:network.backend.ip}
_backend_port %{regex("([1-9][0-9]{0,3}|[1-5][0-9]{4}|6[0-4][0-9]{3}|65[0-4][0-9]{2}|655[0-2][0-9]|6553[0-5])"):network.backend.port:integer}
_byte_sent_to_socket %{integer:network.bytes_socket}
_byte_sent_to_ssl %{integer:network.bytes_ssl}
_cert_depth %{integer:stunnel.certificate.depth}
_cert_info %{data:stunnel.certificate.info:keyvalue}
_client_ip %{ipOrHost:network.client.ip}
_client_port %{regex("([1-9][0-9]{0,3}|[1-5][0-9]{4}|6[0-4][0-9]{3}|65[0-4][0-9]{2}|655[0-2][0-9]|6553[0-5])"):network.client.port:integer}
_syslog_timestamp %{date("yyyy.MM.dd HH:mm:ss"):syslog.timestamp}
_local_ip %{ipOrHost:network.local.ip}
_local_port %{regex("([1-9][0-9]{0,3}|[1-5][0-9]{4}|6[0-4][0-9]{3}|65[0-4][0-9]{2}|655[0-2][0-9]|6553[0-5])"):network.local.port:integer}
_syslog_severity %{integer:syslog.severity}
_session_id %{data:session_id}
_service_name %{data:stunnel.service_name}
_error_message %{data:error.message}
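The long regex used by the port helper rules simply restricts the value to the valid TCP port range 1-65535 without leading zeros. A quick Python check (same pattern, run outside Datadog) confirms the boundaries:

```python
import re

# The exact port regex from the _client_port / _backend_port / _local_port helper rules
PORT_RE = re.compile(
    r"([1-9][0-9]{0,3}|[1-5][0-9]{4}|6[0-4][0-9]{3}|65[0-4][0-9]{2}|655[0-2][0-9]|6553[0-5])"
)

def is_valid_port(s: str) -> bool:
    # fullmatch so a partial match like "6553" inside "65536" doesn't count
    return PORT_RE.fullmatch(s) is not None

print(is_valid_port("1"), is_valid_port("65535"))   # valid boundaries
print(is_valid_port("0"), is_valid_port("65536"))   # out of range
```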
Now you can save the new Grok Parser Processor.
Quoting the stunnel man page:
debug = [FACILITY.]LEVEL debugging level
Level is one of the syslog level names or numbers emerg (0), alert (1), crit (2), err (3), warning (4), notice (5), info (6), or debug (7). All logs for the specified level and all levels numerically less than it will be shown. Use debug = debug or debug = 7 for greatest debugging output. The default is notice (5).
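The level names and numbers quoted above can be written down as a small lookup table; with debug = info, everything at numeric level 6 or below is logged. This is just an illustration of the man page rule, not part of the Datadog setup:

```python
# syslog level names and numbers, as listed in the stunnel man page
SYSLOG_LEVELS = {
    "emerg": 0, "alert": 1, "crit": 2, "err": 3,
    "warning": 4, "notice": 5, "info": 6, "debug": 7,
}

def is_logged(message_level: str, configured_level: str = "info") -> bool:
    # "All logs for the specified level and all levels numerically
    # less than it will be shown."
    return SYSLOG_LEVELS[message_level] <= SYSLOG_LEVELS[configured_level]

print(is_logged("err"))    # True: 3 <= 6
print(is_logged("debug"))  # False: 7 > 6
```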
In my stunnel configuration I'm using debug = info, so my syslog.severity variable can be an integer from 0 to 6. This value can be used in a new Processor called Status Remapper. You just have to define the status attribute this way:
syslog.severity
and save.
As you can see from the Helper Rules, we extract each log line's date into the syslog.timestamp variable. We can use this value in a new Processor called Date Remapper this way:
syslog.timestamp
and, as always, save.
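The date("yyyy.MM.dd HH:mm:ss") matcher in the _syslog_timestamp helper rule uses Java-style date tokens. Purely to illustrate what that format accepts, here is the Python equivalent with strptime (the timestamp value is made up):

```python
from datetime import datetime

# Python equivalent of the grok date("yyyy.MM.dd HH:mm:ss") matcher
ts = datetime.strptime("2018.09.03 10:15:42", "%Y.%m.%d %H:%M:%S")
print(ts.isoformat())  # 2018-09-03T10:15:42
```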
On September 3rd, 2018, Datadog committed the stunnel service integration. You can find the documentation link here:
If you need other info or resources, check out this list of links.