Closed harikum closed 6 years ago
Could you paste your log without sensitive information?
fluentd_1 | 1970-01-01 00:33:38 +0000 falcologs: {"priority":"Error","rule":"Write below root","output_fields":{"evt.time":1533872031646480332,"fd.name":"/test.json","proc.cmdline":"run.sh /run.sh run","proc.name":"run.sh","proc.pname":"run.sh","user.name":"root"}}
fluentd_1 | 1970-01-01 00:33:38 +0000 falcologs: {"priority":"Informational","rule":"System user interactive","output_fields":{"evt.time":1533872031919895362,"proc.cmdline":"login ","user.name":"bin"}}
fluentd_1 | 1970-01-01 00:33:38 +0000 falcologs: {"priority":"Error","rule":"Write below binary dir","output_fields":{"evt.time":1533872033937403607,"fd.name":"/bin/created-by-event-generator-sh","proc.aname[2]":null,"proc.cmdline":"event_generator ","proc.pcmdline":null,"proc.pname":null,"user.name":"root"}}
fluentd_1 | 1970-01-01 00:33:38 +0000 falcologs: {"priority":"Notice","rule":"Change thread namespace","output_fields":{"container.id":"host","container.name":"host","evt.time":1533872034159452246,"proc.cmdline":"dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --selinux-enabled --signature-verification=false --storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/rootvg-docker--pool --storage-opt dm.use_deferred_removal=true --add-registry registry.access.redhat.com --add-registry xxxxxxxxxxxx --add-registry btcs-xxxxxxxxxx:80","proc.pname":"systemd","user.name":"root"}}
fluentd_1 | 1970-01-01 00:33:38 +0000 falcologs: {"priority":"Error","rule":"Write below etc","output_fields":{"evt.time":1533872034937550130,"fd.name":"/etc/created-by-event-generator-sh","proc.aname[2]":null,"proc.aname[3]":null,"proc.aname[4]":null,"proc.cmdline":"event_generator ","proc.name":"event_generator","proc.pcmdline":null,"proc.pname":null,"user.name":"root"}}
fluentd_1 | 1970-01-01 00:33:38 +0000 falcologs: {"priority":"Error","rule":"Write below rpm database","output_fields":{"evt.time":1533872035938382404,"fd.name":"/var/lib/rpm/created-by-event-generator-sh","proc.cmdline":"event_generator ","proc.pcmdline":null,"proc.pname":null}}
fluentd_1 | 1970-01-01 00:33:38 +0000 falcologs: {"priority":"Notice","rule":"Change thread namespace","output_fields":{"container.id":"4c94c594e31a","container.name":"falco-event-generator","evt.time":1533872036939332079,"proc.cmdline":"event_generator ","proc.pname":null,"user.name":"root"}}
[2018-08-10T03:33:15,802][INFO ][o.e.n.Node ] [] initializing ...
[2018-08-10T03:33:16,007][INFO ][o.e.e.NodeEnvironment ] [ei_BNuG] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/dockervg-dockerlibvol)]], net usable_space [24.7gb], net total_space [25.5gb], spins? [possibly], types [xfs]
[2018-08-10T03:33:16,010][INFO ][o.e.e.NodeEnvironment ] [ei_BNuG] heap size [495.3mb], compressed ordinary object pointers [true]
[2018-08-10T03:33:16,012][INFO ][o.e.n.Node ] node name [ei_BNuG] derived from node ID [ei_BNuGgSQS_Xlw37kwaTQ]; set [node.name] to override
[2018-08-10T03:33:16,016][INFO ][o.e.n.Node ] version[5.6.10], pid[1], build[b727a60/2018-06-06T15:48:34.860Z], OS[Linux/3.10.0-514.16.1.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_171/25.171-b11]
[2018-08-10T03:33:16,016][INFO ][o.e.n.Node ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Xms512m, -Xmx512m, -Des.path.home=/usr/share/elasticsearch]
[2018-08-10T03:33:18,061][INFO ][o.e.p.PluginsService ] [ei_BNuG] loaded module [aggs-matrix-stats]
[2018-08-10T03:33:18,061][INFO ][o.e.p.PluginsService ] [ei_BNuG] loaded module [ingest-common]
[2018-08-10T03:33:18,061][INFO ][o.e.p.PluginsService ] [ei_BNuG] loaded module [lang-expression]
[2018-08-10T03:33:18,061][INFO ][o.e.p.PluginsService ] [ei_BNuG] loaded module [lang-groovy]
[2018-08-10T03:33:18,061][INFO ][o.e.p.PluginsService ] [ei_BNuG] loaded module [lang-mustache]
[2018-08-10T03:33:18,061][INFO ][o.e.p.PluginsService ] [ei_BNuG] loaded module [lang-painless]
[2018-08-10T03:33:18,061][INFO ][o.e.p.PluginsService ] [ei_BNuG] loaded module [parent-join]
[2018-08-10T03:33:18,061][INFO ][o.e.p.PluginsService ] [ei_BNuG] loaded module [percolator]
[2018-08-10T03:33:18,061][INFO ][o.e.p.PluginsService ] [ei_BNuG] loaded module [reindex]
[2018-08-10T03:33:18,061][INFO ][o.e.p.PluginsService ] [ei_BNuG] loaded module [transport-netty3]
[2018-08-10T03:33:18,061][INFO ][o.e.p.PluginsService ] [ei_BNuG] loaded module [transport-netty4]
[2018-08-10T03:33:18,062][INFO ][o.e.p.PluginsService ] [ei_BNuG] no plugins loaded
[2018-08-10T03:34:06,465][WARN ][o.e.d.r.RestController ] Content type detection for rest requests is deprecated. Specify the content type using the [Content-Type] header.
[2018-08-10T03:34:11,562][WARN ][o.e.d.r.RestController ] Content type detection for rest requests is deprecated. Specify the content type using the [Content-Type] header.
[2018-08-10T03:34:16,630][WARN ][o.e.d.r.RestController ] Content type detection for rest requests is deprecated. Specify the content type using the [Content-Type] header.
[2018-08-10T03:34:23,357][WARN ][o.e.d.r.RestController ] Content type detection for rest requests is deprecated. Specify the content type using the [Content-Type] header.
[2018-08-10T03:34:29,591][WARN ][o.e.d.r.RestController ] Content type detection for rest requests is deprecated. Specify the content type using the [Content-Type] header.
[2018-08-10T03:34:33,803][WARN ][o.e.d.r.RestController ] Content type detection for rest requests is deprecated. Specify the content type using the [Content-Type] header.
[2018-08-10T03:34:37,287][WARN ][o.e.d.r.RestController ] Content type detection for rest requests is deprecated. Specify the content type using the [Content-Type] header.
[2018-08-10T03:34:39,399][WARN ][o.e.d.r.RestController ] Content type detection for rest requests is deprecated. Specify the content type using the [Content-Type] header.
August 9th 2018, 23:34:21.000 plugin_id:object:3f99fec31ce8 elapsed_time:50.213 slow_flush_log_threshold:20 message:buffer flush took longer time than slow_flush_log_threshold: plugin_id="object:3f99fec31ce8" elapsed_time=50.213192319 slow_flush_log_threshold=20.0 @timestamp:August 9th 2018, 23:34:21.000 @log_name:fluent.warn _id:AWUh5oKheYUUA7ZCbrH0 _type:access_log _index:fluentd-20180810 _score: -
August 9th 2018, 23:34:21.000 plugin_id:object:3f99fec31ce8 message:retry succeeded. plugin_id="object:3f99fec31ce8" @timestamp:August 9th 2018, 23:34:21.000 @log_name:fluent.warn _id:AWUh5oKheYUUA7ZCbrH1 _type:access_log _index:fluentd-20180810 _score: -
August 9th 2018, 23:34:06.000 message:Connection opened to Elasticsearch cluster => {:host=>"ES_HOST", :port=>9200, :scheme=>"http"} @timestamp:August 9th 2018, 23:34:06.000 @log_name:fluent.info _id:AWUh5obSeYUUA7ZCbrH2 _type:access_log _index:fluentd-20180810 _score: -
# gem list
*** LOCAL GEMS ***
bigdecimal (1.2.8)
cool.io (1.5.3)
did_you_mean (1.0.0)
elasticsearch (1.0.18)
elasticsearch-api (1.0.18)
elasticsearch-transport (1.0.18)
excon (0.62.0)
faraday (0.15.2)
fluent-plugin-elasticsearch (1.9.2)
fluentd (0.12.43)
http_parser.rb (0.6.0)
io-console (0.4.5)
json (2.1.0, 1.8.3)
minitest (5.9.0)
msgpack (1.2.4)
multi_json (1.13.1)
multipart-post (2.0.0)
net-telnet (0.1.1)
oj (2.18.3)
power_assert (0.2.7)
psych (2.1.0)
rake (10.5.0)
rdoc (4.2.1)
sigdump (0.2.4)
string-scrub (0.0.5)
test-unit (3.1.7)
thread_safe (0.3.6)
tzinfo (1.2.5)
tzinfo-data (1.2018.5)
yajl-ruby (1.4.1)
2018-08-10 03:33:17 +0000 [info]: reading config file path="/fluentd/etc/fluent.conf"
2018-08-10 03:33:17 +0000 [info]: starting fluentd-0.12.43
2018-08-10 03:33:17 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '1.9.2'
2018-08-10 03:33:17 +0000 [info]: gem 'fluentd' version '0.12.43'
2018-08-10 03:33:17 +0000 [info]: adding match pattern="**" type="copy"
2018-08-10 03:33:19 +0000 [info]: adding source type="forward"
2018-08-10 03:33:19 +0000 [info]: adding source type="tail"
2018-08-10 03:33:19 +0000 [info]: using configuration file: <ROOT>
<source>
@type forward
port 24224
bind 0.0.0.0
</source>
<source>
@type tail
path /falco/falco.out
pos_file /falco/falco.out.pos
tag falcologs
read_from_head true
format json
<format>
@type json
</format>
</source>
<match **>
@type copy
<store>
@type elasticsearch
host ES_HOST
port 9200
index_name fluentd
logstash_format true
logstash_prefix fluentd
logstash_dateformat %Y%m%d
include_tag_key true
type_name access_log
tag_key @log_name
flush_interval 15s
</store>
<store>
@type stdout
</store>
</match>
</ROOT>
2018-08-10 03:33:19 +0000 [warn]: parameter '@type' in <format>
@type json
</format> is not used.
2018-08-10 03:33:19 +0000 [info]: listening fluent socket on 0.0.0.0:24224
2018-08-10 03:33:19 +0000 [info]: following tail of /falco/falco.out
1970-01-01 00:33:38 +0000 falcologs: {"priority":"Notice","rule":"Change thread namespace","output_fields":{"container.id":"host","container.name":"host","evt.time":1533775936234742079,"proc.cmdline":"dockerd-current
--add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/
docker-proxy-current --selinux-enabled --signature-verification=false --storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/rootvg-docker--pool --storage-opt dm.use_deferred_re
moval=true --add-registry xxxxxxxxxx --add-registry xxxxxxxxx --add-registry xxxxxxxxxxx:80","proc.pname":"systemd","user.name":"root"}}
Hmm, it seems that you are using an old elasticsearch gem and its related gems. The Ruby elasticsearch client gems should match the major version of your Elasticsearch cluster.
If you use ES 6.x, you should use the latest 6.x versions of elasticsearch, elasticsearch-api, and elasticsearch-transport.
@cosmo0920 are you suggesting that I use the following with the existing configuration file for fluentd? Elasticsearch v6.x, fluentd:v0.12.43-debian, fluent-plugin-elasticsearch: 1.9.2
Yep. fluent-plugin-elasticsearch does not pin the elasticsearch client gem version, so you can use any version of elasticsearch, elasticsearch-api, and elasticsearch-transport with fluent-plugin-elasticsearch 1.9.2.
@cosmo0920 sounds good, thanks. Will test and update on the status tomorrow.
As advised, I deployed an ES 6.2.4 (xpack disabled) Docker container using the elasticsearch-platinum:6.2.4 image, along with fluentd:v0.12.43-debian, fluent-plugin-elasticsearch: 1.9.2, and Kibana 6.2.4 (xpack disabled, using the kibana:6.2.4 Docker image).
With the same fluent.conf file, fluentd is actively tailing the running falco.out file and parsing it.
However, I don't see any data coming into ES; the expected "fluentd-*" index is not getting created either, and even fluent.warn is not getting through to ES.
From an ES perspective, I am able to write to a new index using a basic curl PUT:
curl -X PUT "localhost:9200/logstash-2018.08.10" -H 'Content-Type: application/json' -d' { "mappings": { "log": { "properties": { "geo": { "properties": { "coordinates": { "type": "geo_point" } } } } } } }'
It created the index: yellow open logstash-2018.08.10 mb8P5LmtTiyyemNuAO6A-w 5 1 0 0 1.1kb 1.1kb
Any thoughts/suggestions?
I built a new fluentd Docker image following "fluentd-kubernetes-daemonset/docker-image/v0.12/debian-elasticsearch/"; it includes fluent-plugin-elasticsearch (1.17.0).
A fluentd container using the new image and sourcing my fluent.conf file, along with ES v6.2.4 (oss version) and Kibana v6.2.4 (oss version) Docker containers, were all started up.
ES created 2 new fluentd-* indexes: yellow open fluentd-1970.01.01 4QP6GnBaQuuYc9M7lv0cVg 5 1 1654 0 936.6kb 936.6kb / yellow open fluentd-2018.08.11 eBAlKQBXQPuonPXidXmmTA 5 1 9 0 44.1kb 44.1kb
One of the indexes has a timestamp suffix of 1970.01.01, perhaps based on the timestamp '1970-01-01 00:33:38' in the log that is being tailed by fluentd?
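That guess can be sanity-checked: 00:33:38 on 1970-01-01 is exactly 2018 seconds after the Unix epoch, which is what you would get if the leading year field ("2018") of a timestamp were misread as epoch seconds. A quick illustration (my own interpretation of the symptom, not something confirmed from the plugin source):

```python
from datetime import datetime, timezone

# 2018 -- the year, misread as seconds since the Unix epoch --
# is 33 minutes 38 seconds, the exact timestamp in the logs above.
ts = datetime.fromtimestamp(2018, tz=timezone.utc)
print(ts.strftime("%Y-%m-%d %H:%M:%S"))  # 1970-01-01 00:33:38
```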
Although the 'fluentd-1970.01.01' index keeps growing, I am unable to query any data even if I set my time range to 1970-01-01 00:33:38 - 1970-01-01 00:35:38.
Is there a way I can force the fluent plugin to use the current time, and is there anything additional I need to do?
fluentd_1 | 1970-01-01 00:33:38 +0000 falcologs: {"priority":"Notice","rule":"Change thread namespace","output_fields":{"container.id":"host","container.name":"host","evt.time":1533964301658945662,"proc.cmdline":"dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --selinux-enabled --signature-verification=false --storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/rootvg-docker--pool --storage-opt dm.use_deferred_removal=true --add-registry xxxxxxxxxxxxxxxxxx --add-registry xxxxxxxxxxxxxxxxxx --add-registry xxxxxxxxxxxxxxxxxx :80","proc.pname":"systemd","user.name":"root"}}
fluentd_1 | 1970-01-01 00:33:38 +0000 falcologs: {"priority":"Notice","rule":"Change thread namespace","output_fields":{"container.id":"host","container.name":"host","evt.time":1533964306662276695,"proc.cmdline":"dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --selinux-enabled --signature-verification=false --storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/rootvg-docker--pool --storage-opt dm.use_deferred_removal=true --add-registry xxxxxxxxxxxxxxxxxx --add-registry xxxxxxxxxxxxxxxxxx --add-registry xxxxxxxxxxxxxxxxxx :80","proc.pname":"systemd","user.name":"root"}}
fluentd_1 | 1970-01-01 00:33:38 +0000 falcologs: {"priority":"Notice","rule":"Change thread namespace","output_fields":{"container.id":"host","container.name":"host","evt.time":1533964306662287391,"proc.cmdline":"dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --selinux-enabled --signature-verification=false --storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/rootvg-docker--pool --storage-opt dm.use_deferred_removal=true --add-registry xxxxxxxxxxxxxxxxxx --add-registry xxxxxxxxxxxxxxxxxx --add-registry xxxxxxxxxxxxxxxxxx :80","proc.pname":"systemd","user.name":"root"}}
Fixed the issue with tail reporting epoch time by adding these lines into the fluentd.conf:
time_key time
time_format %Y-%m-%dT%H:%M:%S.%NZ
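Putting it together, the tail source would look something like this (a sketch based on the config dump above; it assumes falco's JSON output carries the event time in a "time" field formatted as ISO 8601 with nanoseconds, matching the time_format below):

```
<source>
  @type tail
  path /falco/falco.out
  pos_file /falco/falco.out.pos
  tag falcologs
  read_from_head true
  format json
  # parse the record's own "time" field as the event time,
  # instead of letting the timestamp default to epoch
  time_key time
  time_format %Y-%m-%dT%H:%M:%S.%NZ
</source>
```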
Seems to be fixed. Closing.
fluentd is garbage. full of bugs
Problem
fluentd with the ES plugin is failing to send data to Elasticsearch.
fluentd tails the running log file (in JSON format); the fluentd container has access to the files and can read the log file.
docker logs of the fluentd container does report processing of the lines (tail); however, the parsed data does not show up in Kibana. Kibana only reports fluent.info logs with startup/shutdown of the fluent worker.
It looks to me like the elasticsearch plugin is unable to match any tags to copy/forward to ES.
Steps to replicate
I can replicate at will with the existing config and versions.
Expected Behavior or What you need to ask
The file processed by fluentd using the tail plugin is expected to be forwarded into ES and available in Kibana for searching.
Using Fluentd and ES plugin versions
Environment: Elasticsearch, Fluentd, and Kibana running as Docker containers on the same host. Host: RHEL 7.3 VM. fluentd:v0.12.43-debian, fluent-plugin-elasticsearch: 1.9.2, ES/Kibana: v6.2.4 (have tried different versions including oss, same result).
Config files
fluent.conf
docker deployment.yaml
fluentd Dockerfile