fluent / fluent-bit

Fast and Lightweight Logs and Metrics processor for Linux, BSD, OSX and Windows
https://fluentbit.io
Apache License 2.0

fluentbit_filter_drop_records_total metric is increasing when using multiline filter #8923

Open ashishmodi7 opened 1 month ago

ashishmodi7 commented 1 month ago

Bug Report

Describe the bug The fluentbit_filter_drop_records_total metric increases when using the multiline filter. Records are flowing to Splunk correctly, yet the filter drop metric keeps increasing. This issue is reproducible in Fluent Bit v3.0.6.

To Reproduce Steps to reproduce the problem:

  1. Deploy Fluent Bit in Kubernetes (https://docs.fluentbit.io/manual/installation/kubernetes#installing-with-helm-chart)
  2. Configure port forwarding to view the Prometheus metrics with the following commands: `export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=fluent-bit,app.kubernetes.io/instance=fluent-bit" -o jsonpath="{.items[0].metadata.name}")` followed by `kubectl --namespace default port-forward $POD_NAME 2020:2020`
  3. Configure the Fluent Bit output to a Splunk server
  4. Configure the multiline filter following https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/multiline-parsing (a minimal configuration sketch follows this list)
  5. Check the Prometheus metric "fluentbit_filter_drop_records_total": `curl -s http://127.0.0.1:2020/api/v2/metrics/prometheus | grep drop`. The records are flowing properly to Splunk, but the filter drop metric is still increasing.
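
For context, a minimal sketch of the kind of multiline filter configuration referenced in step 4, based on the linked documentation; the `match` pattern, the built-in parsers (`docker`, `cri`), and the key name (`log`) are illustrative assumptions, not values taken from this report:

```
[FILTER]
    # Multiline filter that reassembles log lines split across records
    name                  multiline
    # Match pattern is illustrative; adjust to your input tags
    match                 *
    # Key in the record that holds the content to reassemble
    multiline.key_content log
    # Built-in multiline parsers for Docker and CRI runtimes
    multiline.parser      docker, cri
```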

Expected behavior The Prometheus metric "fluentbit_filter_drop_records_total" should remain 0 when no records are actually being dropped.

Screenshots: (screenshot of the Prometheus metrics output showing fluentbit_filter_drop_records_total increasing)

Your Environment

* Version used: Fluent Bit 3.0.6
* Configuration: Default configuration
* Environment name and version (e.g. Kubernetes? What version?): Kubernetes Client Version v1.30.1, Kustomize Version v5.0.4-0.20230601165947-6ce0bf390ce3, Server Version v1.30.0
* Server type and version: Linux
* Operating System and version: Rocky Linux 8.9
* Filters and plugins: multiline filter, Splunk output

Additional context

patrick-stephens commented 1 month ago

I think this is a duplicate of #6699