```
flatten_hashes true
flatten_hashes_separator "_"
```
Doesn't resolve the issue.
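For context, flatten_hashes rewrites nested hashes into flat top-level keys joined by flatten_hashes_separator before records are sent to Elasticsearch, so a (made-up) record like the first line below would be indexed as the second:

```
# input record
{"host": {"name": "web-1", "ip": "10.0.0.5"}}
# what gets sent with flatten_hashes true and separator "_"
{"host_name": "web-1", "host_ip": "10.0.0.5"}
```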
I am having the same issue. Note that I have not tried the flatten_hashes option yet, as I don't necessarily want to retain messages that do not match the index schema (principles and politics, among other things).
My question is: can I use the ignore_exceptions option to drop logs that come back with a message like
```
{"error":"#<Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError: 400 - Rejected by Elasticsearch [error type]: mapper_parsing_exception [reason]: 'object mapping for
```
I am pretty sure I can do it on Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError, but I need to be able to do it on the mapper_parsing_exception [error type] specifically. If I can only do it on Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError, what other 400 errors do people see?
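For reference, the README documents ignore_exceptions as a list of Ruby exception class names; a chunk that raises a matching exception (or a subclass of one) is discarded instead of retried. A minimal sketch of what I have in mind, assuming matching really is by exception class only, which is exactly why I can't see a way to target the mapper_parsing_exception error type with it:

```
<store>
  @type elasticsearch
  # Discard chunks that raise this exception class (or a subclass of it)
  # instead of retrying them. Matching is by Ruby exception class, not by
  # the Elasticsearch [error type] string, as far as I can tell.
  ignore_exceptions ["Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError"]
  # ... rest of the store config ...
</store>
```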
Our config:
```
<match {logdocument.tenant.fluentd,tenant.filebeat,tenant.functionbeat,logdocument.tenant.fluentd_ssl}>
  @type copy
  @log_level info
  <store>
    @type elasticsearch
    reconnect_on_error true
    reload_on_failure true
    reload_connections false
    max_retry_putting_template 1
    request_timeout 60s
    fail_on_putting_template_retry_exceed false
    slow_flush_log_threshold 100.0
    @id out_es_logs-tenant
    @log_level info
    log_es_400_reason true
    id_key _hash
    remove_keys _hash
    hosts https://es-1:9200,https://es-2:9200
    user "elastic"
    password "xxxxxxxxxxx"
    ca_file "/etc/fluentd/ca.crt"
    ssl_version TLSv1_2
    ssl_verify false
    index_name logs-${tenant}-fluentd
    time_key time
    include_timestamp true
    include_tag_key true
    flatten_hashes false
    flatten_hashes_separator _
    # Rollover index config
    rollover_index true
    application_name default
    index_date_pattern "now/d"
    deflector_alias logs-${tenant}-fluentd
    # Index template
    template_name logs-${tenant}-fluentd
    template_file /etc/fluentd/logs-template.json
    customize_template {"<<TAG>>":"${tenant}"}
    template_overwrite true
    <buffer tag,tenant>
      retry_wait 20s
      retry_exponential_backoff_base 2
      retry_type exponential_backoff
      retry_max_interval 300s
      disable_chunk_backup true
      @type file
      path /fluentd/es-out-logs/
      flush_thread_count 8
      flush_interval 5s
      flush_at_shutdown true
      overflow_action block
      chunk_limit_size 16M
      total_limit_size 137G
      retry_forever false
      retry_timeout 6h
    </buffer>
  </store>
</match>
```
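An alternative I have been wondering about (not verified on v5.2.3): records rejected by the bulk API with an unrecoverable error type are, as far as I understand, emitted as Fluentd error events, and those can be caught with Fluentd's built-in @ERROR label and dropped, e.g.:

```
# Catch events that plugins emit as error events (such as bulk rejections)
# and silently drop them rather than letting them back up the buffer.
<label @ERROR>
  <match **>
    @type null
  </match>
</label>
```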
Sorry @ialidzhikov, tagging you as you did the initial work around this here, so here's hoping. Any help is appreciated. Thanks!
Sorry, our versions are:
fluent-plugin-elasticsearch v5.2.3
Fluentd v1.15.2
Problem
"400 - Rejected by Elasticsearch [error type]: mapper_parsing_exception [reason]: 'object mapping for [host] tried to parse field [host] as object, but found a concrete value'
Steps to replicate
Either clone and modify https://gist.github.com/pitr/9a518e840db58f435911
OR
Provide example config and message
Expected Behavior or What you need to ask
...
Using Fluentd and ES plugin versions
fluentd --version
or td-agent --version
fluent-gem list
or td-agent-gem list
or your Gemfile.lock