Problem

Hello, I'm facing an issue while passing a multilevel JSON log to ES.
Steps to replicate

I'm using Winston for logging, and I can log using the syntax below:
logger.info({message:'this is a message', data: "this is data"});
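For context, here is a minimal sketch of how such a logger might be wired to Fluentd. This is an assumption, not confirmed by the issue: it uses the fluent-logger package's Winston transport, the tag "test" that appears in the rejected event, and the forward input on port 24224 from the Fluentd config further below.

// Minimal sketch (assumption: fluent-logger's Winston transport; not shown in the original issue).
const winston = require('winston');
const fluentTransport = require('fluent-logger').support.winstonTransport();

// 'test' matches the tag visible in the error dump; host/port match the
// <source> @type forward block in the Fluentd config below.
const fluent = new fluentTransport('test', { host: 'localhost', port: 24224, timeout: 3.0 });

const logger = winston.createLogger({
  transports: [fluent, new winston.transports.Console()]
});

logger.info({ message: 'this is a message', data: 'this is data' }); // flat string value: indexes fine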
This log is passed to ES without any error and is viewable inside Kibana. But when I pass a multilevel (nested) JSON object, it causes an error:
// Log syntax
logger.info({message: 'this is a message', data: {function: "adminDashboard", file: "dashboard"}});

// Error in Kibana
{
  "_index": "fluentd-20210623",
  "_type": "_doc",
  "_id": "-JZTOHoBZUsthc_svLUa",
  "_score": 1,
  "fields": {
    "record.data.file.keyword": ["dashboard"],
    "error.keyword": ["#<Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError: 400 - Rejected by Elasticsearch>"],
    "record.message": ["this is a message"],
    "record.level.keyword": ["info"],
    "error": ["#<Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError: 400 - Rejected by Elasticsearch>"],
    "message": ["dump an error event: error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error=\"400 - Rejected by Elasticsearch\" location=nil tag=\"test\" time=1624442647 record={\"message\"=>\"this is a message\", \"data\"=>{\"function\"=>\"adminDashboard\", \"file\"=>\"dashboard\"}, \"level\"=>\"info\", \"module\"=>\"dashboard_logs\", \"timestamp\"=>\"2021-06-23T10:04:07.262Z\"}"],
    "@log_name.keyword": ["fluent.warn"],
    "record.data.function": ["adminDashboard"],
    "@timestamp": ["2021-06-23T10:04:08.623Z"],
    "record.data.function.keyword": ["adminDashboard"],
    "record.module": ["dashboard_logs"],
    "record.level": ["info"],
    "@log_name": ["fluent.warn"],
    "tag": ["test"],
    "time": [1624442647],
    "record.module.keyword": ["dashboard_logs"],
    "record.message.keyword": ["this is a message"],
    "tag.keyword": ["test"],
    "record.timestamp": ["2021-06-23T10:04:07.262Z"],
    "record.data.file": ["dashboard"]
  }
}
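The "400 - Rejected by Elasticsearch" here is consistent with a mapping conflict: ES infers a field's mapping from the first document it sees, so once "data" has been indexed as a plain string, a later document where "data" is an object is rejected. One way to confirm is to inspect the live mapping; a sketch, assuming the official @elastic/elasticsearch Node client (v7-style responses) and the index name taken from the error above:

// Sketch: inspect the live mapping to check whether "data" was first mapped as a string.
// Assumes the @elastic/elasticsearch client; credentials mirror the Fluentd config below.
const { Client } = require('@elastic/elasticsearch');
const client = new Client({
  node: 'http://172.17.0.4:9200',
  auth: { username: 'fluentd', password: 'xxxxxx' }
});

async function showMapping() {
  const res = await client.indices.getMapping({ index: 'fluentd-20210623' });
  // If "data" appears as "type": "text" here, objects sent later will be rejected with 400.
  console.log(JSON.stringify(res.body, null, 2));
}

showMapping().catch(console.error);

For reference, the Fluentd configuration in use follows.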
<source>
  @type forward
  port 24224
</source>
<match *.**>
  @type copy
  <store>
    @type "elasticsearch"
    user "fluentd"
    password xxxxxx
    host "172.17.0.4"
    port 9200
    logstash_format true
    logstash_prefix "fluentd"
    logstash_dateformat "%Y%m%d"
    include_tag_key true
    type_name "access_log"
    tag_key "@log_name"
    flush_interval 1s
    <buffer>
      flush_interval 1s
    </buffer>
  </store>
  <store>
    @type "stdout"
  </store>
</match>
Using Fluentd and ES plugin versions

addressable (2.7.0) async (1.29.0) async-http (0.54.1) async-io (1.30.2) async-pool (0.3.5) aws-eventstream (1.1.1) aws-partitions (1.446.0) aws-sdk-core (3.114.0) aws-sdk-kms (1.43.0) aws-sdk-s3 (1.93.1) aws-sdk-sqs (1.38.0) aws-sigv4 (1.2.3) benchmark (default: 0.1.0) bigdecimal (default: 2.0.0) bindata (2.4.9) bundler (2.2.16, default: 2.1.4) cgi (default: 0.1.0) concurrent-ruby (1.1.8) console (1.11.1) cool.io (1.7.1) csv (default: 3.1.2) date (default: 3.0.0) delegate (default: 0.1.0) did_you_mean (default: 1.4.0) digest-crc (0.6.3) elasticsearch (7.12.0) elasticsearch-api (7.12.0) elasticsearch-transport (7.12.0) etc (default: 1.1.0) excon (0.80.1) faraday (1.4.1) faraday-excon (1.1.0) faraday-net_http (1.0.1) faraday-net_http_persistent (1.1.0) fcntl (default: 1.0.0) ffi (1.15.0) fiber-local (1.0.0) fiddle (default: 1.0.0) fileutils (1.5.0, default: 1.4.1) fluent-config-regexp-type (1.0.0) fluent-diagtool (1.0.1) fluent-logger (0.9.0) fluent-plugin-elasticsearch (5.0.3) fluent-plugin-flowcounter-simple (0.1.0) fluent-plugin-kafka (0.16.1) fluent-plugin-prometheus (1.8.5) fluent-plugin-prometheus_pushgateway (0.0.2) fluent-plugin-record-modifier (2.1.0) fluent-plugin-rewrite-tag-filter (2.4.0) fluent-plugin-s3 (1.6.0) fluent-plugin-sd-dns (0.1.0) fluent-plugin-systemd (1.0.2) fluent-plugin-td (1.1.0) fluent-plugin-utmpx (0.5.0) fluent-plugin-webhdfs (1.4.0) fluentd (1.12.3) forwardable (default: 1.3.1) getoptlong (default: 0.1.0) hirb (0.7.3) http_parser.rb (0.6.0) httpclient (2.8.3) io-console (default: 0.5.6) ipaddr (default: 1.2.2) irb (default: 1.2.6) jmespath (1.4.0) json (2.5.1, default: 2.3.0) linux-utmpx (0.3.0) logger (default: 1.4.2) ltsv (0.1.2) matrix (default: 0.2.0) mini_portile2 (2.5.0) minitest (5.13.0) msgpack (1.4.2) multi_json (1.15.0) multipart-post (2.1.1) mutex_m (default: 0.1.0) net-pop (default: 0.1.0) net-smtp (default: 0.1.0) net-telnet (0.2.0) nio4r (2.5.7) nokogiri (1.11.3 x86_64-linux) observer (default: 0.1.0) oj (3.11.5) open3 (default: 0.1.0) openssl (default: 2.1.2) ostruct (default: 0.2.0) parallel (1.20.1) power_assert (1.1.7) prime (default: 0.1.1) prometheus-client (0.9.0) protocol-hpack (1.4.2) protocol-http (0.21.0) protocol-http1 (0.13.2) protocol-http2 (0.14.2) pstore (default: 0.1.0) psych (default: 3.1.0) public_suffix (4.0.6) quantile (0.2.1) racc (1.5.2, default: 1.4.16) rake (13.0.3, 13.0.1) rdkafka (0.8.1) rdoc (default: 6.2.1) readline (default: 0.0.2) reline (default: 0.1.5) rexml (default: 3.2.3.1) rss (default: 0.2.8) ruby-kafka (1.3.0) ruby-progressbar (1.11.0) ruby2_keywords (0.0.4) rubyzip (1.3.0) sdbm (default: 1.0.0) serverengine (2.2.3) sigdump (0.2.4) singleton (default: 0.1.0) stringio (default: 0.1.0) strptime (0.2.5) strscan (default: 1.0.3) systemd-journal (1.3.3) td (0.16.9) td-client (1.0.8) td-logger (0.3.27) test-unit (3.3.4) timeout (default: 0.1.0) timers (4.3.3) tracer (default: 0.1.0) tzinfo (2.0.4) tzinfo-data (1.2021.1) uri (default: 0.10.0) webhdfs (0.9.0) webrick (1.7.0, default: 1.6.1) xmlrpc (0.3.0) yajl-ruby (1.4.1) yaml (default: 0.1.0) zip-zip (0.3) zlib (default: 1.1.0)
Issue solved: whenever the type of a field changes from a string to a complex object, we need to clear the current index, so that the next time ES sees the field it will map it as a complex object and not as a string.
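For anyone hitting the same thing, a sketch of clearing the index, using the same assumed @elastic/elasticsearch Node client as above and the index name from the error. Note that deleting the index throws away its data, so in production you may prefer reindexing or pinning the mapping up front with an index template:

// Sketch: delete the conflicting index so the next write re-creates it and
// ES re-infers the mapping, this time with "data" as an object.
const { Client } = require('@elastic/elasticsearch');
const client = new Client({
  node: 'http://172.17.0.4:9200',
  auth: { username: 'fluentd', password: 'xxxxxx' }
});

client.indices.delete({ index: 'fluentd-20210623' })
  .then(() => console.log('index deleted; mapping will be re-inferred on next write'))
  .catch(console.error);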