newrelic / newrelic-fluent-bit-output

A Fluent Bit output plugin that sends logs to New Relic
Apache License 2.0

Worker Support #106

Closed TheQueenIsDead closed 1 year ago

TheQueenIsDead commented 2 years ago

Hi there, can you confirm whether this plugin supports worker configuration (multithreading)?

We've run into a bug with Fluent Bit, and the maintainer is recommending adding extra workers to stop the process from crashing. However, I don't see any mention of workers in the README. Context: https://github.com/fluent/fluent-bit/issues/2661#issuecomment-1059264097

Thanks!

nijave commented 2 years ago

Looking through the code, I don't see anything that handles workers/threads, unless fluent-bit can just create multiple copies of the output plugin. I'd also be interested in worker support, since it seems fluent-bit on our larger nodes with lots of logs sometimes can't keep up.

soukichi commented 2 years ago

FYI: AWS's Kinesis plugin for Fluent Bit supports an "experimental_concurrency" option, which works like the workers option: https://github.com/aws/amazon-kinesis-streams-for-fluent-bit
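
For reference, a rough sketch of what that looks like in a classic Fluent Bit config, based on that repo's documented options (the region and stream name below are placeholders):

[OUTPUT]
    Name                       kinesis
    Match                      *
    region                     us-east-1
    stream                     my-log-stream
    # experimental_concurrency adds concurrent senders inside the plugin,
    # similar in spirit to Fluent Bit's generic workers option
    experimental_concurrency   4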

jsubirat commented 1 year ago

Hi folks, yes, the workers feature is compatible with the newrelic Fluent Bit plugin. It's a Fluent Bit-wide feature: Fluent Bit itself spins up the worker instances, which is why you didn't find anything in the plugin code itself ;-)
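
In other words, since workers is a generic [OUTPUT] property handled by the Fluent Bit engine, enabling it is a single line in any config. A minimal classic-mode sketch (the license key is a placeholder):

[INPUT]
    Name          tail
    Path          /var/log/containers/*.log
    Tag           kube.*

[OUTPUT]
    Name          newrelic
    Match         *
    licenseKey    YOUR_NEW_RELIC_LICENSE_KEY
    # worker threads are created by the Fluent Bit engine, not by the plugin
    workers       4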

FlorentATo commented 1 month ago

Here's a working configuration to use workers with the plugin:

newrelic-logging:
  enabled: true
  resources:
    requests:
      cpu: 10m
      memory: 128Mi
    limits:
      memory: 128Mi
  fluentBit:
    sendMetrics: false
    k8sBufferSize: 128k # Default: 32k
    config:
      inputs: |
        [INPUT]
            Name                 tail
            Alias                pod-logs-tailer
            Tag                  kube.*
            Path                 ${PATH}
            multiline.parser     ${LOG_PARSER}
            DB                   ${FB_DB}
            # Default: 7MB
            Mem_Buf_Limit        16MB
            Skip_Long_Lines      On
            Refresh_Interval     10
      outputs: |
        [OUTPUT]
            Name                 newrelic
            Match                *
            Alias                newrelic-logs-forwarder
            licenseKey           ${LICENSE_KEY}
            endpoint             ${ENDPOINT}
            lowDataMode          ${LOW_DATA_MODE}
            sendMetrics          ${SEND_OUTPUT_PLUGIN_METRICS}
            Retry_Limit          ${RETRY_LIMIT}
            workers              4
(...)
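
These values can then be applied with the nri-bundle chart. Judging by the pod name in the logs below, the release here is called newrelic-bundle; adjust the release name and namespace to your setup:

helm upgrade --install newrelic-bundle newrelic/nri-bundle \
    --namespace newrelic \
    -f values.yaml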

You can see the workers being spawned when Fluent Bit starts:

➜  ~ k logs -f -n newrelic newrelic-bundle-newrelic-logging-bnsjl
Fluent Bit v3.0.4
* Copyright (C) 2015-2024 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

___________.__                        __    __________.__  __          ________
\_   _____/|  |  __ __   ____   _____/  |_  \______   \__|/  |_  ___  _\_____  \
 |    __)  |  | |  |  \_/ __ \ /    \   __\  |    |  _/  \   __\ \  \/ / _(__  <
 |     \   |  |_|  |  /\  ___/|   |  \  |    |    |   \  ||  |    \   / /       \
 \___  /   |____/____/  \___  >___|  /__|    |______  /__||__|     \_/ /______  /
     \/                     \/     \/               \/                        \/

[2024/09/12 17:04:42] [ info] [fluent bit] version=3.0.4, commit=7de2c45227, pid=1
[2024/09/12 17:04:42] [ info] [storage] ver=1.5.2, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2024/09/12 17:04:42] [ info] [cmetrics] version=0.9.0
[2024/09/12 17:04:42] [ info] [ctraces ] version=0.5.1
[2024/09/12 17:04:42] [ info] [input:tail:pod-logs-tailer] initializing
[2024/09/12 17:04:42] [ info] [input:tail:pod-logs-tailer] storage_strategy='memory' (memory only)
[2024/09/12 17:04:42] [ info] [input:tail:pod-logs-tailer] multiline core started
[2024/09/12 17:04:42] [ info] [input:tail:pod-logs-tailer] db: delete unmonitored stale inodes from the database: count=2
[2024/09/12 17:04:42] [ info] [filter:kubernetes:kubernetes-enricher] https=1 host=kubernetes.default.svc.cluster.local port=443
[2024/09/12 17:04:42] [ info] [filter:kubernetes:kubernetes-enricher]  token updated
[2024/09/12 17:04:42] [ info] [filter:kubernetes:kubernetes-enricher] local POD info OK
[2024/09/12 17:04:42] [ info] [filter:kubernetes:kubernetes-enricher] testing connectivity with API server...
[2024/09/12 17:04:42] [ info] [filter:kubernetes:kubernetes-enricher] connectivity OK
[2024/09/12 17:04:42] [ info] [output:newrelic:newrelic-logs-forwarder] worker #0 started
[2024/09/12 17:04:42] [ info] [output:newrelic:newrelic-logs-forwarder] worker #3 started
[2024/09/12 17:04:42] [ info] [output:newrelic:newrelic-logs-forwarder] worker #1 started
[2024/09/12 17:04:42] [ info] [output:newrelic:newrelic-logs-forwarder] worker #0 started
[2024/09/12 17:04:42] [ info] [output:newrelic:newrelic-logs-forwarder] worker #2 started
[2024/09/12 17:04:42] [ info] [output:newrelic:newrelic-logs-forwarder] worker #1 started
[2024/09/12 17:04:42] [ info] [output:newrelic:newrelic-logs-forwarder] worker #3 started
[2024/09/12 17:04:42] [ info] [output:newrelic:newrelic-logs-forwarder] worker #2 started
[2024/09/12 17:04:42] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
[2024/09/12 17:04:42] [ info] [sp] stream processor started
(...)

Not sure why workers 4 spawns eight "worker started" lines, though 🤔