kubotat closed this issue 11 months ago.
@ashie Thanks for reviewing. I've addressed the two items you mentioned, and I will keep working on the test code.
It seems some of the checks failed after I updated the contents. I would appreciate your advice on how to get them to pass.
> It seems some of the checks failed after I updated the contents. I would appreciate your advice on how to get them to pass.
Don't worry, those failures aren't caused by this change. Those checks are flaky and need to be rerun when they fail. We should tackle this as a separate issue.
> I will keep working on the test code.
Please let me know if it's hard to add. I'll merge & release it without tests for now.
@ashie Thanks. I would appreciate it if you could merge my request.
Thanks!
I've released v0.19.2.
When using a Forwarder and Aggregator architecture, forwarders sometimes send invalid data which causes delivery failures at the aggregators. The `out_rdkafka2` plugin has `discard_kafka_delivery_failed`, but it potentially discards not only unnecessary events but also required ones: for example, users expect Fluentd to keep events in buffer files during an outage of the Kafka cluster (due to maintenance, for instance), yet all events are discarded when `discard_kafka_delivery_failed` is enabled. This enhancement request proposes adding a `discard_kafka_delivery_failed_regex` option to the `out_rdkafka2` plugin so that invalid data can be discarded selectively, by checking the error message against a given regexp pattern.

Here is a sample use case. In the following configuration, dummy events are generated with the tag `test-topic0001` and Fluentd tries to ship messages to `test-topic0001`. If `test-topic0001` does not exist in the Kafka cluster, `out_rdkafka2` emits a `Local: Unknown topic (unknown_topic)` warning message. With `discard_kafka_delivery_failed_regex /Unknown topic/`, `out_rdkafka2` discards the events that produce that warning.
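The configuration referred to above is not included in this excerpt. A minimal sketch of what such a setup might look like follows; the broker address, dummy record content, format, and flush interval are illustrative assumptions, not taken from the original report:

```
<source>
  @type dummy
  tag test-topic0001
  dummy {"message":"dummy"}
</source>

<match test-topic0001>
  @type rdkafka2
  # Kafka broker address (assumed; adjust for your cluster)
  brokers localhost:9092
  default_topic test-topic0001
  # Proposed option: discard only events whose delivery-failure
  # message matches the given pattern
  discard_kafka_delivery_failed_regex /Unknown topic/
  <format>
    @type json
  </format>
  <buffer>
    flush_interval 5s
  </buffer>
</match>
```

With this configuration, deliveries failing with `Local: Unknown topic (unknown_topic)` would be dropped, while events failing for other reasons (e.g. a temporary broker outage) would stay in the buffer and be retried.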