Closed: afonsoaugusto closed this issue 4 years ago
I fixed the problem by using hash_id_key and increasing the timeout.
<filter **>
@type elasticsearch_genid
hash_id_key _hash
</filter>
<match logs**>
@type elasticsearch
host elk.example.com
port 9200
default_elasticsearch_version 6
id_key _hash # specify same key name which is specified in hash_id_key
remove_keys _hash # Elasticsearch doesn't like keys that start with _
request_timeout 20s # defaults to 5s
logstash_format true
logstash_prefix ${tag}
logstash_dateformat %Y.%m.%d
include_tag_key true
type_name app_log
tag_key @log_name
<buffer>
@type memory
flush_thread_count 2
chunk_limit_size 32MB
retry_max_interval 10
retry_max_times 3
flush_interval 5s
</buffer>
</match>
<source>
@type forward
port 5142
bind 0.0.0.0
</source>
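The config above works because the output plugin retries a whole buffered chunk when a request times out; without a stable document id, every retry creates fresh documents, but with id_key each retry reuses the same _id and Elasticsearch overwrites instead of duplicating. A minimal Python sketch of that behavior (a simulated index call for illustration, not the real Elasticsearch API):

```python
# Simulated document store: explicit ids overwrite, auto ids always append.
store = {}
auto_id = 0

def index(doc, doc_id=None):
    """Index a document; without doc_id, generate a new id per request."""
    global auto_id
    if doc_id is None:              # like Elasticsearch auto-generated _id
        auto_id += 1
        doc_id = f"auto-{auto_id}"
    store[doc_id] = doc             # same explicit _id: upsert, not duplicate
    return doc_id

doc = {"message": "hello", "_hash": "abc123"}

for _ in range(3):                  # a timed-out chunk retried 3 times
    index(dict(doc))                # no id_key -> 3 copies
assert len(store) == 3

store.clear()
for _ in range(3):
    index(dict(doc), doc_id=doc["_hash"])  # id_key _hash -> 1 document
assert len(store) == 1
```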
Problem
Hi, I am having a problem with td-agent (Fluentd): when my application sends data, each event is inserted 3 times in Elasticsearch.
In my example I am using the Python library fluent-logger==0.9.6, but the same problem occurs when the application uses the Node library.
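For context on the fix: the elasticsearch_genid filter attaches a generated id to every record before it is buffered, so the same record keeps the same id across retries no matter which client library emitted it. A rough Python sketch of that idea (the genid_filter helper is hypothetical; the real plugin stores the id under the configured hash_id_key, here _hash):

```python
import uuid

def genid_filter(record):
    # Attach a per-record id once, before buffering
    # (mirrors `hash_id_key _hash` in the filter above).
    record = dict(record)
    record.setdefault("_hash", uuid.uuid4().hex)
    return record

event = genid_filter({"message": "hello"})
# Re-applying the filter does not change an existing id, so a retried
# record is indexed under the same _id and overwrites instead of duplicating.
assert genid_filter(event)["_hash"] == event["_hash"]
```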
Steps to replicate
Using Fluentd and ES plugin versions
For installation I used the script: https://toolbelt.treasuredata.com/sh/install-redhat-td-agent3.sh
My environment: