rubrikinc / fluent-plugin-throttle

Rate limiting for fluentd
Apache License 2.0

After adding throttling, kafka plugin not working properly #5

Open AkshayDubey29 opened 5 years ago

AkshayDubey29 commented 5 years ago

Throttling config:

```
<filter k8s_log.**>
  @type throttle
  group_key $.kubernetes.namespace_name
  group_bucket_period_s 60
  group_bucket_limit 6000
  group_reset_rate_s 100
</filter>
```
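For reference, the filter above caps each Kubernetes namespace at 6000 records per 60-second window. A minimal Ruby sketch of that per-group bucket logic (an illustration of the configured limits only, not fluent-plugin-throttle's actual implementation):

```ruby
# Hypothetical per-group rate-limit bucket, keyed by namespace_name.
# Values mirror the config above; the real plugin also handles
# group_reset_rate_s, which is omitted here for brevity.
class Bucket
  PERIOD = 60    # group_bucket_period_s
  LIMIT  = 6000  # group_bucket_limit

  def initialize
    @count = 0
    @window_start = Time.now
  end

  # Returns true if the record may pass, false if it should be dropped.
  def allow?(now = Time.now)
    if now - @window_start >= PERIOD
      @count = 0           # new window: reset the counter
      @window_start = now
    end
    @count += 1
    @count <= LIMIT
  end
end

# One bucket per group_key value ($.kubernetes.namespace_name).
buckets = Hash.new { |h, ns| h[ns] = Bucket.new }
```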

```
<match .k8s_log.>
  @id copy_k8s_log
  log_level trace
  @type copy

  <store>
    @id kafka_buffered_k8s_log
    reserve_data true
    @log_level trace
    @type kafka_buffered
    brokers {brokers list}
    default_topic fluent_data
    output_include_tag true
    #output_include_time true
    required_acks 1
    kafka_agg_max_bytes 10000000
    kafka_agg_max_messages 1000000
    max_send_limit_bytes 9000000000 # to avoid MessageSizeTooLarge
    get_kafka_client_log true
    #<buffer>
    #  @type memory
    #  flush_mode immediate
    #  flush_thread_count 20
    #  chunk_limit_size 8MB
    #  total_limit_size 64MB
    #  overflow_action drop_oldest_chunk
    #</buffer>
  </store>

  <store>
    @id out_prometheus_k8s_log
    @type prometheus
    <metric>
      name fluentd_output_status_num_records_total
      type counter
      desc The total number of outgoing records
      <labels>
        tag ${tag}
        hostname ${hostname}
      </labels>
    </metric>
  </store>
</match>
```

Error log:

```
failed to flush the buffer. retry_time=2 next_retry_seconds=2018-09-03 07:18:49 +0000 chunk="574f2567d5ab677a1bd50654f855d45d" error_class=ArgumentError error="wrong number of arguments (given 6, expected 0)"
2018-09-03 07:18:49 +0000 [warn]: #0 suppressed same stacktrace
2018-09-03 07:18:50 +0000 [info]: #0 following tail of /applog/container/logs/json/splunk_2018-09-03.0718.log
2018-09-03 07:18:50 +0000 [trace]: #0 [kafka_buffered_k8s_log] enqueueing all chunks in buffer instance=47336657585960
2018-09-03 07:18:53 +0000 [warn]: #0 [kafka_buffered_fluent_logs] Send exception occurred: wrong number of arguments (given 6, expected 0)
2018-09-03 07:18:53 +0000 [warn]: #0 [kafka_buffered_fluent_logs] Exception Backtrace :
/usr/lib/ruby/gems/2.4.0/gems/ruby-kafka-0.7.0/lib/kafka/pending_message.rb:7:in `initialize'
/usr/lib/ruby/gems/2.4.0/gems/fluent-plugin-kafka-0.6.6/lib/fluent/plugin/kafka_producer_ext.rb:16:in `new'
/usr/lib/ruby/gems/2.4.0/gems/fluent-plugin-kafka-0.6.6/lib/fluent/plugin/kafka_producer_ext.rb:16:in `produce2'
/usr/lib/ruby/gems/2.4.0/gems/fluent-plugin-kafka-0.6.6/lib/fluent/plugin/out_kafka_buffered.rb:322:in `block in write'
/usr/lib/ruby/gems/2.4.0/gems/fluentd-1.1.0/lib/fluent/event.rb:323:in `each'
/usr/lib/ruby/gems/2.4.0/gems/fluentd-1.1.0/lib/fluent/event.rb:323:in `block in each'
/usr/lib/ruby/gems/2.4.0/gems/fluentd-1.1.0/lib/fluent/plugin/buffer/memory_chunk.rb:80:in `open'
/usr/lib/ruby/gems/2.4.0/gems/fluentd-1.1.0/lib/fluent/plugin/buffer/memory_chunk.rb:80:in `open'
/usr/lib/ruby/gems/2.4.0/gems/fluentd-1.1.0/lib/fluent/event.rb:322:in `each'
/usr/lib/ruby/gems/2.4.0/gems/fluent-plugin-kafka-0.6.6/lib/fluent/plugin/out_kafka_buffered.rb:284:in `write'
/usr/lib/ruby/gems/2.4.0/gems/fluentd-1.1.0/lib/fluent/compat/output.rb:131:in `write'
/usr/lib/ruby/gems/2.4.0/gems/fluentd-1.1.0/lib/fluent/plugin/output.rb:1094:in `try_flush'
/usr/lib/ruby/gems/2.4.0/gems/fluentd-1.1.0/lib/fluent/plugin/output.rb:1319:in `flush_thread_run'
/usr/lib/ruby/gems/2.4.0/gems/fluentd-1.1.0/lib/fluent/plugin/output.rb:439:in `block (2 levels) in start'
/usr/lib/ruby/gems/2.4.0/gems/fluentd-1.1.0/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
```
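For context, the backtrace points into `ruby-kafka-0.7.0`'s `PendingMessage#initialize` being called from `fluent-plugin-kafka-0.6.6`. An "ArgumentError (given 6, expected 0)" is the signature of six positional arguments hitting a keyword-only method. A minimal sketch of that mismatch (an assumption about the version incompatibility, not the actual gem source; class and argument names are illustrative):

```ruby
# Keyword-only initializer, in the style of newer ruby-kafka versions.
class PendingMessage
  def initialize(value:, key:, topic:, partition:, partition_key:, create_time:)
    @value, @key, @topic = value, key, topic
    @partition, @partition_key, @create_time = partition, partition_key, create_time
  end
end

# An older caller passing six positional arguments, as the plugin's
# produce2 path appears to do, raises exactly this ArgumentError.
begin
  PendingMessage.new("msg", nil, "fluent_data", nil, nil, Time.now)
rescue ArgumentError => e
  puts e.message  # message includes "given 6, expected 0"
end
```

If that is the cause, it would be a fluent-plugin-kafka/ruby-kafka version mismatch rather than anything the throttle filter does; pinning compatible gem versions would be the usual fix to try.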