Closed · gfawcett22 closed this issue 4 years ago
I see that I could add a parser to extract the fields, but is there a way I could keep the `log` field, just formatted in JSON?
> but is there a way I could keep the `log` field, just formatted in JSON.

With the parser plugin? There is no way to keep it.
> I see that I could add a parser to extract the fields, but is there a way I could keep the `log` field, just formatted in JSON.

@gfawcett22 Do you mind sharing how you added the parser?
> I see that I could add a parser to extract the fields, but is there a way I could keep the `log` field, just formatted in JSON.
>
> @gfawcett22 Do you mind sharing how you added the parser?

I did this by changing my `fluent.conf` file to:
```
<match fluent.**>
  @type null
</match>

<source>
  @type tail
  enable_stat_watcher false
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  time_format %Y-%m-%dT%H:%M:%S.%NZ
  tag kubernetes.*
  format json
  read_from_head true
</source>

<source>
  @type tail
  enable_stat_watcher false
  format /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
  time_format %Y-%m-%d %H:%M:%S
  path /var/log/salt/minion
  pos_file /var/log/fluentd-salt.pos
  tag salt
</source>

<source>
  @type tail
  enable_stat_watcher false
  format syslog
  path /var/log/startupscript.log
  pos_file /var/log/fluentd-startupscript.log.pos
  tag startupscript
</source>

<source>
  @type tail
  enable_stat_watcher false
  format /^time="(?<time>[^)]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=(?<status_code>\d+))?/
  path /var/log/docker.log
  pos_file /var/log/fluentd-docker.log.pos
  tag docker
</source>

<source>
  @type tail
  enable_stat_watcher false
  format none
  path /var/log/etcd.log
  pos_file /var/log/fluentd-etcd.log.pos
  tag etcd
</source>

<source>
  @type tail
  enable_stat_watcher false
  format kubernetes
  multiline_flush_interval 5s
  path /var/log/kubelet.log
  pos_file /var/log/fluentd-kubelet.log.pos
  tag kubelet
</source>

<source>
  @type tail
  enable_stat_watcher false
  format kubernetes
  multiline_flush_interval 5s
  path /var/log/kube-proxy.log
  pos_file /var/log/fluentd-kube-proxy.log.pos
  tag kube-proxy
</source>

<source>
  @type tail
  enable_stat_watcher false
  format kubernetes
  multiline_flush_interval 5s
  path /var/log/kube-apiserver.log
  pos_file /var/log/fluentd-kube-apiserver.log.pos
  tag kube-apiserver
</source>

<source>
  @type tail
  enable_stat_watcher false
  format kubernetes
  multiline_flush_interval 5s
  path /var/log/kube-controller-manager.log
  pos_file /var/log/fluentd-kube-controller-manager.log.pos
  tag kube-controller-manager
</source>

<source>
  @type tail
  enable_stat_watcher false
  format kubernetes
  multiline_flush_interval 5s
  path /var/log/kube-scheduler.log
  pos_file /var/log/fluentd-kube-scheduler.log.pos
  tag kube-scheduler
</source>

<source>
  @type tail
  enable_stat_watcher false
  format kubernetes
  multiline_flush_interval 5s
  path /var/log/rescheduler.log
  pos_file /var/log/fluentd-rescheduler.log.pos
  tag rescheduler
</source>

<source>
  @type tail
  enable_stat_watcher false
  format kubernetes
  multiline_flush_interval 5s
  path /var/log/glbc.log
  pos_file /var/log/fluentd-glbc.log.pos
  tag glbc
</source>

<source>
  @type tail
  enable_stat_watcher false
  format kubernetes
  multiline_flush_interval 5s
  path /var/log/cluster-autoscaler.log
  pos_file /var/log/fluentd-cluster-autoscaler.log.pos
  tag cluster-autoscaler
</source>

<filter kubernetes.**>
  @type kubernetes_metadata
</filter>

<filter kubernetes.**>
  @type parser
  key_name log
  reserve_data true
  remove_key_name_field false
  emit_invalid_record_to_error false
  <parse>
    @type json
  </parse>
</filter>

<filter **>
  @type record_transformer
  @id filter_containers_stream_transformer
  <record>
    stream_name ${tag_parts[4]}
  </record>
</filter>

<match **>
  @type cloudwatch_logs
  log_group_name "#{ENV['LOG_GROUP_NAME']}"
  log_stream_name_key stream_name
  auto_create_stream true
  remove_log_stream_name_key true
  json_handler json
  # use_tag_as_stream true
</match>
```

Note: the docker source's regex originally contained `($<status_code>\d+)`, which is not a valid named capture group; it has been corrected to `(?<status_code>\d+)` above.
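For reference, the key piece here is the `<filter kubernetes.**>` parser stanza. Its effect on each record can be sketched roughly as follows (a simplified Python simulation for illustration, not the plugin's actual code):

```python
import json

def parse_log_field(record, key_name="log"):
    """Roughly simulate fluentd's parser filter configured with
    reserve_data true, remove_key_name_field false, and
    emit_invalid_record_to_error false."""
    try:
        parsed = json.loads(record[key_name])
    except (KeyError, ValueError):
        # emit_invalid_record_to_error false: non-JSON logs pass through unchanged
        return record
    if not isinstance(parsed, dict):
        return record
    merged = dict(record)   # reserve_data true: keep the existing record fields
    merged.update(parsed)   # parsed keys are promoted to top-level fields
    return merged           # remove_key_name_field false: the "log" string survives

record = {"log": '{"level":"info","msg":"started"}', "stream": "stdout"}
print(parse_log_field(record))
```

This is why the parsed properties become searchable in CloudWatch while the original `log` string is still present in the record.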
**Problem**

For applications that write JSON logs, I would like to be able to search and filter on properties in the log. ...

**Steps to replicate**

The `log` field is not treated as JSON. Provide example config and message: here is what is stored in CloudWatch; notice that the `log` field is treated as a string instead of JSON.
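The stored sample itself is not reproduced here; purely as an illustration (hypothetical values), the difference between the two shapes looks like this:

```python
# Hypothetical records for illustration; not the original CloudWatch sample.
# As stored today, "log" arrives as one opaque string:
as_string = {"log": '{"level":"info","msg":"request handled"}'}

# As desired, its keys would be individual, queryable fields:
as_object = {"log": {"level": "info", "msg": "request handled"}}

print(type(as_string["log"]).__name__)  # prints: str
print(type(as_object["log"]).__name__)  # prints: dict
```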
**Expected Behavior or What you need to ask**

The `log` field should be treated as a JSON object, similar to the `docker` and `kubernetes` fields in the output.

**Using Fluentd and CloudWatchLogs plugin versions**

fluentd 1.7.3