Closed MaxDeg closed 11 months ago
This would be good, we should start with an output plugin. Is this something you can work on?
Go is not a language I know yet but this plugin would be really useful for my company. So yes I will try to build something starting from existing plugin. Any advice from which plugin I could start?
This would be most similar to the socket_writer output, and of course we would want it similar in style to the syslog input plugin. The best way to get started, though, before digging into code is to design what the config file will look like and how metrics will be converted to syslog messages, then post back the details here for comment. I think most of the metric conversion would be determined by the input, since we would want to be able to forward messages with as few changes as possible.
Thanks I will do that tomorrow.
I take this use case as an example to think about the functionality that could be needed. My base assumption is that if the input is the syslog plugin, no format configuration needs to be done on the output plugin, only the destination servers. But if metrics come from a different plugin we should be able to configure the formatting.
[[inputs.logparser]]
files = ["C:\\temp\\iislogs\\W3SVC1\\*.log"]
from_beginning = true
watch_method = "poll"
## Parse logstash-style "grok" patterns:
[inputs.logparser.grok]
patterns = ['%{TIMESTAMP_ISO8601:timestamp:ts-"2006-01-02 15:04:05"} %{IPORHOST:site} %{WORD:http_method} %{URIPATH:page} %{NOTSPACE:query_string} %{NUMBER:port} %{NOTSPACE:username} %{IPORHOST:client_host} %{NOTSPACE:useragent} %{NOTSPACE:referrer} %{NUMBER:http_response} %{NUMBER:sub_response} %{NUMBER:sc_status} %{NUMBER:time_taken}']
## Name of the output measurement.
measurement = "iis_log"
[inputs.logparser.tags]
facility = "1"
severity = "6"
[[outputs.syslog]]
## Multiple servers can be specified, only ONE of the
## urls will be written to each interval.
servers = []
## TLS Config
# tls_allowed_cacerts = ["/etc/telegraf/ca.pem"]
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Character to prepend to SD-PARAMs (default = "_").
## A syslog message can contain multiple parameters and multiple identifiers within the structured data section.
## E.g., [id1 name1="val1" name2="val2"][id2 name1="val1" nameA="valA"]
## For each unrecognised field an SD-PARAM is created.
## Its name is built by splitting the field name into identifier, sdparam_separator, and parameter name.
sdparam_separator = "_"
# Define the SD-ID used for fields that don't contain an SD-ID in their name
default_sdid = "default@32473"
## Mapping of metrics fields to syslog ones
# we should stick to the input plugin for the tag/fields by default
# timestamp -> TIMESTAMP
# |> field if present metrics timestamp otherwise
# hostname (tag) -> HOSTNAME
# facility_code + severity_code -> PRI
# |> we could fallback to tag (facility and severity) if fields are not present
# version or hardcode to "1" ? -> VERSION
# appname (tag) -> APP-NAME
# procid -> PROCID
# msgid -> MSGID
# message -> MSG
# unrecognised fields -> STRUCTURED-DATA
# |> STRUCTURED-DATA would be populated with the unrecognised fields, prefixed with default_sdid if necessary
## Override mapping, would it be useful?
# it offers the possibility to map a value expected by the syslog plugin to a field other than the default one
[outputs.syslog.mapping]
msgid = "83"
appname = "app"
This IIS log:
2019-01-12 00:00:47 127.0.0.1 GET /home - 443 - 192.168.10.2 Mozilla/5.0+(X11;+Linux+armv7l)+AppleWebKit/537.4+(KHTML,+like+Gecko)+Chrome/22.0.1229.94+Safari/537.4 https://www.google.be 500 19 3 0
Should be written as:
<14>1 2019-01-12T00:00:47+01:00 host1 dhclient - - [default@32473 site="127.0.0.1" http_method="GET" page="/home" port="443" client_host="192.168.10.2" useragent="Mozilla/5.0+(X11;+Linux+armv7l)+AppleWebKit/537.4+(KHTML,+like+Gecko)+Chrome/22.0.1229.94+Safari/537.4" referrer="https://www.google.be" http_response="500" sub_response="19" sc_status="3" time_taken="0"]
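To make the field-to-SD conversion concrete, here is a rough Go sketch (the function name and signature are my assumptions, not the eventual plugin API) of how the unrecognised fields could be rendered into the STRUCTURED-DATA element shown above:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// buildStructuredData groups unrecognised metric fields under a single,
// configurable default SD-ID, as in the example message above. The
// "default@32473" value comes from the proposed config; everything else
// here is illustrative.
func buildStructuredData(defaultSDID string, fields map[string]string) string {
	// Sort keys so the rendered element is deterministic.
	keys := make([]string, 0, len(fields))
	for k := range fields {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	var b strings.Builder
	b.WriteString("[" + defaultSDID)
	for _, k := range keys {
		fmt.Fprintf(&b, " %s=%q", k, fields[k])
	}
	b.WriteString("]")
	return b.String()
}

func main() {
	sd := buildStructuredData("default@32473", map[string]string{
		"site":        "127.0.0.1",
		"http_method": "GET",
	})
	fmt.Println(sd) // [default@32473 http_method="GET" site="127.0.0.1"]
}
```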
Might want to just do a single server output, since it will simplify things quite a bit.
I would leave the mapping table off; we should add functionality to the override processor to set fields statically. In the future we will allow processors to be directly attached to plugins, which should reduce the configuration complexity when using processors.
Not sure if we need the sdparam_separator. Thinking out loud, the incoming structured data from the syslog input could be fields like:
exampleSDID@32473_name1="val1"
exampleSDID@32473_name2="val2"
From these it would be nice if we could split them back out, [exampleSDID@32473 name1="val1" name2="val2"], but could we do the same for other misc fields? Would we just take the first word, in the case of non-log data: cpu_usage=42 -> [cpu usage=42]? I feel like this would mostly work unless the field has many words.
@leodido Would like to get your take on this as well.
Thanks for the feedback :)
Indeed we can stick to a single server. I took the example of the influxdb output for multiple servers with load-balancing support, which could be interesting in some scenarios. But keep it simple for now.
For the mapping table, indeed the override processor would be a better match for this job.
I added the sdparam_separator to be aligned with the syslog input plugin. It enables simple forwarding if for any reason the sdparam_separator needs to be customized in the input.
For the misc fields my plan was to regroup them using the default_sdid parameter. Per RFC 5424 the SD-ID must be in the format name@<private enterprise number>.
There are two formats for SD-ID names:
- Names that do not contain an at-sign ("@", ABNF %d64) are reserved to be assigned by IETF Review as described in BCP26 [RFC5226]...
- Anyone can define additional SD-IDs using names in the format name@<private enterprise number>, e.g., "ourSDID@32473". The format of the part preceding the at-sign is not specified; ... Implementors will need to use their own private enterprise number for the enterpriseId parameter, and when creating locally extensible SD-ID names.
But following your idea, we could maybe replace default_sdid by private_enterprise_number. In this case, considering private_enterprise_number = 32473, cpu_usage=42 -> [cpu@32473 usage=42] and dc=eu-west-1 -> [default@32473 dc=eu-west-1]. If a field contains many words we could take the first one for grouping.
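A tiny Go sketch of that grouping rule (the helper name and signature are my own, purely to pin down the semantics): split the field name on the first separator, use the first word plus the private enterprise number as the SD-ID, and fall back to a "default" SD-ID when there is no separator.

```go
package main

import (
	"fmt"
	"strings"
)

// groupField splits a field name on the first occurrence of sep.
// The first word becomes the SD-ID name (combined with the configured
// private enterprise number); the remainder becomes the SD-PARAM name.
// Fields with no separator land under "default@<pen>".
func groupField(pen int, sep, field string) (sdid, param string) {
	parts := strings.SplitN(field, sep, 2)
	if len(parts) == 1 {
		return fmt.Sprintf("default@%d", pen), field
	}
	return fmt.Sprintf("%s@%d", parts[0], pen), parts[1]
}

func main() {
	sdid, param := groupField(32473, "_", "cpu_usage")
	fmt.Printf("[%s %s=42]\n", sdid, param) // [cpu@32473 usage=42]

	sdid, param = groupField(32473, "_", "dc")
	fmt.Printf("[%s %s=eu-west-1]\n", sdid, param) // [default@32473 dc=eu-west-1]
}
```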
I think the single server will be best for simplicity. I've been thinking about coming up with a way to split metrics evenly across plugins: there would be multiple defined outputs, but a metric would be sent to only one of them, similar to RabbitMQ's consistent hashing. This would be a separate project of course, but I think we can do better outside of the plugin.
It seems like we need the full default_sdid so we can roundtrip syslog messages? The plugin could check if the field starts with this string and, if so, split it out.
We have more leeway on non-logging data, including data from the tail plugin or other log style plugins, and there is probably no way that is "right", we just need to pick something reasonable. Nevertheless, here is yet another strategy: default_sdid would be the full sdid, and fields prefixed with default_sdid would have the sdid trimmed and added as SD.
input:
cpu,cpu=cpu0,host=loaner usage_guest=0,usage_guest_nice=0,usage_idle=100,usage_iowait=0,usage_irq=0,usage_nice=0,usage_softirq=0,usage_steal=0,usage_system=0,usage_user=0 1551741666000000000
output:
<42>1 2019-01-12T00:00:47+01:00 host1 telegraf - cpu [default@32473 cpu=cpu0 usage_guest=0 usage_nice=0 etc=42]
Some thoughts.
Premise: not looked yet at the current implementation.
The STRUCTURED DATA (SD from now on) creation seems - as presented here - weak. I feel it needs a bit more reasoning.
First of all, a syslog message can contain more than one SD ELEMENT (in some cases empty, containing only the SD ID), e.g., ... [some@sdid ...][other@sdid ...][empty@sdid] ... In which situations does this happen?
I like the default SD ID idea but it is not clear to me how you would generally map SD PARAM NAMEs to unrecognized fields in every possible case. Probably the @danielnelson proposal about this matter - and your consequent elaboration - is a better way to handle it.
Anyway, please consider that the go-syslog builder ignores non-conforming values during the building of the syslog.Message, in order to only allow RFC5424-compliant instances to be generated. Thus, if you have an SD PARAM NAME longer than 32 chars (%d33-126) or containing =, ], or ", it will result in an empty SD PARAM NAME, resulting in loss of information.
Nothing about the MESSAGE part of the syslog message?
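Since the builder silently drops non-conforming names, the output plugin could validate them up front. A minimal sketch of the RFC 5424 PARAM-NAME check (my own helper, not part of go-syslog's API):

```go
package main

import "fmt"

// validParamName checks the RFC 5424 PARAM-NAME constraint described above:
// 1 to 32 printable US-ASCII characters (%d33-126), excluding '=', ']',
// and '"'. Names failing this check are dropped by the go-syslog builder,
// so checking (or sanitising) beforehand avoids silent loss of information.
func validParamName(name string) bool {
	if len(name) == 0 || len(name) > 32 {
		return false
	}
	for i := 0; i < len(name); i++ {
		c := name[i]
		if c < 33 || c > 126 || c == '=' || c == ']' || c == '"' {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(validParamName("usage_guest"))                                 // true
	fmt.Println(validParamName("this_param_name_is_way_longer_than_32_chars")) // false
	fmt.Println(validParamName("bad=name"))                                    // false
}
```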
BTW thanks for the contribution and effort! :)
P.S.: yes VERSION should be hard-coded to be 1.
I wonder if maybe we need a SD-ID list to deal with the multiple SD ELEMENT, which would be a list of prefixes to extract from fields and use as the SDID, if they match:
[[outputs.syslog]]
default_sdid = "default@32473"
sdids = ["foo@123", "bar@456"]
input:
xyzzy,x=y foo@123_value=42,bar@456_value2=84,something_else=1
structured data only:
[foo@123 value=42][bar@456 value2=84][default@32473 something_else=1 x=y]
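A hedged sketch of that prefix-list grouping (the helper name and return shape are assumptions, just to pin down the semantics of the example above): each configured SD-ID acts as a prefix; matching fields are grouped under it with the prefix and separator stripped, and everything else falls back to default_sdid.

```go
package main

import (
	"fmt"
	"strings"
)

// splitBySDID groups field names by the configured SD-ID prefixes.
// A field like "foo@123_value" lands under SD element "foo@123" as
// param "value"; unmatched fields land under defaultSDID.
func splitBySDID(defaultSDID string, sdids []string, sep string,
	fields map[string]string) map[string]map[string]string {

	out := map[string]map[string]string{}
	add := func(sdid, name, val string) {
		if out[sdid] == nil {
			out[sdid] = map[string]string{}
		}
		out[sdid][name] = val
	}
	for k, v := range fields {
		matched := false
		for _, id := range sdids {
			if strings.HasPrefix(k, id+sep) {
				add(id, strings.TrimPrefix(k, id+sep), v)
				matched = true
				break
			}
		}
		if !matched {
			add(defaultSDID, k, v)
		}
	}
	return out
}

func main() {
	groups := splitBySDID("default@32473", []string{"foo@123", "bar@456"}, "_",
		map[string]string{
			"foo@123_value":  "42",
			"bar@456_value2": "84",
			"something_else": "1",
			"x":              "y",
		})
	fmt.Println(len(groups)) // 3 SD elements, as in the example above
}
```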
Hey @javicrespo as said into the PR I pushed your code into the javicrespo-syslog-output branch, could you please re-open the PR from that branch?
Thanks :)
Anyone still interested in this feature?
Hello! I am closing this issue due to inactivity. I hope you were able to resolve your problem; if not, please try posting this question in our Community Slack or Community Forums, or provide additional details in this issue and request that it be re-opened. Thank you!
Feature Request
Telegraf recently added support for syslog as an input. It would be nice to have it also as an output plugin or a data output format.
Proposal:
We could use your library (https://github.com/influxdata/go-syslog) which, as I can see, already handles syslog message writing.
Current behavior:
n/a
Desired behavior:
Use case:
The idea is to use Telegraf to collect information of a particular machine: Event Log, Application logs, IIS logs, ... And being able to forward them to an aggregator. Currently we are using Telegraf to collect metrics on machine and we would like to reuse it also for logs.