fluent / fluent-plugin-prometheus

A fluent plugin that collects metrics and exposes them for Prometheus.

alert on failed to write data into buffer by buffer overflow action=:block #104

Open GAHila opened 5 years ago

GAHila commented 5 years ago

We run pretty much everything on kubernetes/prometheus/fluentd/elasticsearch, and we are currently using

k8s.gcr.io/fluentd-elasticsearch:v2.4.0

which seems to include this plugin.

We sometimes have bursts of logs in our environments, basically something spamming the logs, and it causes the fluentd output plugin that sends logs over to elasticsearch to block, given our overflow_action is block (we do not want drop_oldest_chunk or throw_exception as we do not want log loss). However, when buffers get full, basically no more logs are sent over, and since the situation can only be fixed by fixing the log spammer, we cannot hope to solve this by increasing the following values:

      flush_interval 
      chunk_limit_size 
      queue_limit_length
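
For reference, a minimal sketch of where these parameters sit in the elasticsearch output's buffer section (the match tag, buffer path, and values here are placeholders, not our actual configuration):

      <match app.**>
        @type elasticsearch
        # ... elasticsearch host/port/index settings ...
        <buffer>
          @type file
          path /var/log/fluentd-buffers/app.buffer
          flush_interval 5s
          chunk_limit_size 8MB
          queue_limit_length 64
          overflow_action block     # blocks input when the queue is full
        </buffer>
      </match>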

I do not see any way, using this plugin, to monitor this particular scenario: the block is not complete and some logs are still moving through, so I cannot rely on the counter of outgoing logs (fluentd_output_status_num_records_total). I also cannot rely on the buffer size, as that gauge fluctuates a lot, and the number of errors does not reflect this either (the overflow is, strangely enough, only logged as a warning).

However, this situation has a couple of times caused us to miss logs for days, and having to manually remove the big logs is quite a headache.

Am I missing something about how to alert on this scenario using this plugin?

kazegusuri commented 5 years ago

I'm not sure whether you want to know about exceeding the chunk limit size (by sending a very large message) or exceeding the queue limit (by slow flushing). The former case is difficult to detect because fluentd does not provide the metrics yet (AFAIK). For the latter case you can use the prometheus_output_monitor plugin. It provides the status of each output plugin as prometheus metrics. With fluentd_output_status_buffer_queue_length you can set a threshold to alert on slow flushing.
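
A minimal sketch of that approach, assuming metrics are exposed via the prometheus and prometheus_output_monitor plugins from this repository (the interval, threshold, and durations below are illustrative, not recommendations):

      # fluentd: expose plugin metrics for Prometheus to scrape
      <source>
        @type prometheus
      </source>
      <source>
        @type prometheus_output_monitor
        interval 10
      </source>

and a Prometheus alerting rule on the queue length:

      # prometheus: alert when the output buffer queue stays long
      groups:
        - name: fluentd
          rules:
            - alert: FluentdBufferQueueLengthHigh
              expr: fluentd_output_status_buffer_queue_length > 30
              for: 10m
              labels:
                severity: warning
              annotations:
                summary: "fluentd output buffer queue length has been above 30 for 10 minutes"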

sb1975 commented 4 years ago

We use the below in the buffer plugin: overflow_action drop_oldest_chunk (https://docs.fluentd.org/configuration/buffer-section). However, we would like metrics and an alert for when the buffer-overflow drop is happening, and to know how many times buffer overflow happens in a day. This is an important problem; I hope someone picks it up.

tirelibirefe commented 4 years ago

v1.0 still has the same problem. Has anybody found a workaround?

nikhilagrawal577 commented 4 years ago

I am facing the same issue. Any workaround?

cosmo0920 commented 4 years ago

From the Fluentd documentation, overflow_action block is out of scope for improving write throughput or performance:

  • overflow_action [enum: throw_exception/block/drop_oldest_chunk]
    • Default: throw_exception
    • How does output plugin behave when its buffer queue is full?
      • throw_exception: raises an exception to show the error in log
      • block: wait until buffer can store more data. After buffer is ready for storing more data, writing buffer is retried. Because of such behavior, block is suitable for processing batch execution, so do not use for improving processing throughput or performance.
      • drop_oldest_chunk: drops/purges the oldest chunk to accept newly incoming chunk

ref: https://docs.fluentd.org/configuration/buffer-section#flushing-parameters

Using overflow_action throw_exception or overflow_action drop_oldest_chunk should handle this case.
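
A minimal sketch of that suggestion for an elasticsearch output's buffer section (the buffer path and limits are placeholders):

      <buffer>
        @type file
        path /var/log/fluentd-buffers/app.buffer
        chunk_limit_size 8MB
        queue_limit_length 64
        # drop the oldest chunk (or raise an exception) instead of blocking input
        overflow_action drop_oldest_chunk
      </buffer>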