We are trying to stop the Fluentd Elasticsearch plugin from retrying mapper conflict exceptions.
We tried to ignore the mapping conflicts with the following configuration:

```
<match kubernetes.**>
  @id elasticsearch
  .......
  ignore_exceptions ["Elasticsearch::Transport::Transport::ServerError"]
  ....
```
Even after we redeployed the Fluentd DaemonSet, mapper exceptions are still being retried, and we still see errors like this:

```
2022-10-06 02:56:21 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error="400 - Rejected by Elasticsearch [error type]: mapper_parsing_exception [reason]: 'failed to parse field [source] of type [text] in document with id 'YZo4q4MBW06dMb2I7HQY'. Preview of field's value: '{file=NetworkClient.java, method=handleSuccessfulResponse, line=1100, class=org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater, classLoaderName=app}''" location=nil tag="pod-ec920a262713b2c07fce9ea960a2c3708a0390255f95e08223c2e954a4e338b0.log" time=2022-10-06 02:54:45.602936514 +0000 record={"stream"=>"stdout", "docker"=>{}, "kubernetes"=>{}, "instant"=>{"epochSecond"=>1665024885, "nanoOfSecond"=>602648000}, "thread"=>"KS.hedwig-6a948d9d-6de8-4d20-9bfe-505991fc9f83-StreamThread-2", "level"=>"WARN", "loggerName"=>"org.apache.kafka.clients.NetworkClient", "message"=>"[Consumer clientId=KS.hedwig-6a948d9d-6de8-4d20-9bfe-505991fc9f83-StreamThread-2-consumer, groupId=KS.hedwig] Error while fetching metadata with correlation id 83 : {................", "@timestamp"=>"2022-10-06T02:56:09.573204719+0000", "_hash"=>"NDkyMTk0NTAtYzBiNS00MTk2LWFiNjgtMjMzZmEwNGJmZGNi"}
```
Using Fluentd and ES plugin versions:
- Kubernetes version: 1.21.14
- Fluentd deployed as a DaemonSet
- Fluentd version: 1.12.0
- ES plugin: 4.3.3
Can you please give us an example of how to avoid retrying mapper parsing exceptions?
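For reference, here is the kind of configuration we are hoping for. This is only a sketch based on our reading of Fluentd's error-event routing (records that the ES plugin rejects as 400 "dump an error event" entries should be delivered to the `<label @ERROR>` section); the `kubernetes.**` pattern and the choice to discard with `null` are our own assumptions, not something we have verified:

```
# Sketch, unverified: keep rejected records out of the retry path.
# The ES plugin emits 400-rejected records (e.g. mapper_parsing_exception)
# as Fluentd error events, which are routed to <label @ERROR>.
<match kubernetes.**>
  @type elasticsearch
  @id elasticsearch
  # ... host, port, index settings elided ...
</match>

<label @ERROR>
  # Discard rejected records instead of retrying them.
  # (Alternatively, write them to a file output for later inspection.)
  <match **>
    @type null
  </match>
</label>
```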
Thanks