fluent / fluent-bit

Fast and Lightweight Logs and Metrics processor for Linux, BSD, OSX and Windows
https://fluentbit.io
Apache License 2.0

How to get Fluent Bit logs into Grafana without installing Prometheus? #9084

Closed Ommkwn2001 closed 2 weeks ago

Ommkwn2001 commented 3 months ago

First of all, I installed Fluent Bit and made changes to its values.yaml.

Then I installed Grafana, logged in to the Grafana dashboard successfully, and added a data source.

The data source type is "Prometheus".

I set the URL to "http://fluent-bit.default.svc.cluster.local:2020/api/v1/metrics/prometheus", and I also tried "http://fluent-bit.default.svc.cluster.local:2020/api/v1/metrics".

I chose HTTP method: POST, clicked "Save & Test", then clicked "Add new dashboard", clicked "Add visualization", and selected the "Prometheus" data source.

I tried the query `rate(fluentbit_input_record_total{name="systemd."}[1m])`, but no graphs appear.
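For reference, those UI steps roughly correspond to a Grafana data source provisioning file like the one below. This is just a sketch of what I configured by hand, not something I have actually provisioned this way; the file location depends on how Grafana is deployed:

```yaml
# Sketch of the data source as configured in the Grafana UI above.
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    # Fluent Bit's built-in HTTP server endpoint from my values.yaml below
    url: http://fluent-bit.default.svc.cluster.local:2020/api/v1/metrics/prometheus
    jsonData:
      httpMethod: POST
```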

My Fluent Bit values.yaml is:

```yaml
# Default values for fluent-bit.

# kind -- DaemonSet or Deployment
kind: DaemonSet

# replicaCount -- Only applicable if kind=Deployment
replicaCount: 1

image:
  repository: cr.fluentbit.io/fluent/fluent-bit
  tag:
  digest:
  pullPolicy: IfNotPresent

testFramework:
  enabled: true
  namespace:
  image:
    repository: busybox
    pullPolicy: Always
    tag: latest
    digest:

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  create: true
  annotations: {}
  name:

rbac:
  create: true
  nodeAccess: false
  eventsAccess: false

podSecurityPolicy:
  create: false
  annotations: {}

openShift:
  enabled: false
  securityContextConstraints:
    create: true
    name: ""
    annotations: {}
    existingName: ""

podSecurityContext: {}

hostNetwork: false
dnsPolicy: ClusterFirst

dnsConfig: {}

hostAliases: []

securityContext: {}

service:
  type: ClusterIP
  port: 2020
  internalTrafficPolicy:
  loadBalancerClass:
  loadBalancerSourceRanges: []
  labels: {}
  annotations: {}

serviceMonitor:
  enabled: false

prometheusRule:
  enabled: false

dashboards:
  enabled: false
  labelKey: grafana_dashboard
  labelValue: 1
  annotations: {}
  namespace: ""

lifecycle: {}

livenessProbe:
  httpGet:
    path: /
    port: http

readinessProbe:
  httpGet:
    path: /api/v1/health
    port: http

resources: {}

ingress:
  enabled: false
  ingressClassName: ""
  annotations: {}
  hosts: []
  extraHosts: []
  tls: []

autoscaling:
  vpa:
    enabled: false
    annotations: {}
    controlledResources: []
    maxAllowed: {}
    minAllowed: {}
    updatePolicy:
      updateMode: Auto
  enabled: false
  minReplicas: 1
  maxReplicas: 3
  targetCPUUtilizationPercentage: 75
  behavior: {}

podDisruptionBudget:
  enabled: false
  annotations: {}
  maxUnavailable: "30%"

nodeSelector: {}

tolerations: []

affinity: {}

labels: {}

annotations: {}

podAnnotations: {}

podLabels: {}

minReadySeconds:

terminationGracePeriodSeconds:

priorityClassName: ""

env: []

envWithTpl: []

envFrom: []

extraContainers: []

flush: 1

metricsPort: 2020

extraPorts: []

extraVolumes: []

extraVolumeMounts: []

updateStrategy: {}

existingConfigMap: ""

networkPolicy:
  enabled: false

luaScripts: {}

config:
  service: |
    [SERVICE]
        Daemon Off
        Flush 1
        Log_Level info
        Parsers_File /fluent-bit/etc/parsers.conf
        Parsers_File /fluent-bit/etc/conf/custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port {{ .Values.metricsPort }}
        Health_Check On

  inputs: |
    [INPUT]
        Name              tail
        Refresh_Interval  2
        Path              /var/log/containers/*.log
        Tag               kube.custommvc.*
        Mem_Buf_Limit     50MB
        Read_from_Head    true
        Parser            access_log_ltsv

    [INPUT]
        Name            systemd
        Tag             host.*
        Systemd_Filter  _SYSTEMD_UNIT=kubelet.service
        Read_From_Tail  On

  filters: |
    [FILTER]
        Name                kubernetes
        Match               kube.custommvc.*
        Kube_URL            https://kubernetes.default:443
        tls.verify          Off
        Merge_Log           On
        Keep_Log            Off
        K8S-Logging.Parser  On
        K8S-Logging.Exclude On

    [FILTER]
        Name      throttle
        Match     kube.custommvc.*
        Rate      5
        Window    30
        Interval  60

  outputs: |
    [OUTPUT]
        Name                opensearch
        Match               kube.custommvc.*
        Host                opensearch-cluster-master-headless.myopensearch.svc.cluster.local
        Port                9200
        Buffer_Size         15MB
        HTTP_User           admin
        HTTP_Passwd         TadhakDev01
        Logstash_Format     off
        Logstash_Prefix     custmvc
        Trace_Error         On
        Trace_Output        On
        Replace_Dots        On
        Retry_Limit         false
        Index               custommvc
        Suppress_Type_Name  on
        Include_Tag_Key     on
        tls                 on
        tls.verify          off
        Generate_ID         on
        Type                _doc

volumeMounts:

daemonSetVolumes:

daemonSetVolumeMounts:

command:

args:

logLevel: info

hotReload:
  enabled: false
  image:
    repository: ghcr.io/jimmidyson/configmap-reload
    tag: v0.11.1
    digest:
    pullPolicy: IfNotPresent
  resources: {}
```

I want to get Fluent Bit logs into Grafana, with CPU usage and memory usage graphs, without installing Prometheus.
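For the CPU and memory part, I have been looking at Fluent Bit's own metrics plugins. The snippet below is only a rough sketch based on the documented `node_exporter_metrics` input and `prometheus_exporter` output; I have not verified it, on Kubernetes the host's /proc and /sys would still need to be mounted into the pod, and Grafana's Prometheus data source would still need a Prometheus-compatible query API in front of whatever is exposed:

```yaml
# Rough sketch only (unverified): collect host CPU/memory metrics with Fluent Bit
# and expose them on a separate port, alongside the existing log pipeline.
config:
  inputs: |
    [INPUT]
        Name            node_exporter_metrics
        Tag             node_metrics
        Scrape_Interval 2

  outputs: |
    [OUTPUT]
        Name   prometheus_exporter
        Match  node_metrics
        Host   0.0.0.0
        Port   2021
```

That said, my main blocker is still the error below when Grafana queries the endpoint directly.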

And when I try the URL "http://fluent-bit.default.svc.cluster.local:2020/api/v1/metrics/prometheus", I get this error: "ReadObject: expect { or , or } or n, but found #, error found in #1 byte of ...|# HELP flue|..., bigger context ...|# HELP fluentbit_filter_add_record_total Fluentbit |... - There was an error returned querying the Prometheus API."
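If I understand the error correctly, that endpoint returns plain Prometheus text exposition format (lines starting with `# HELP`, exactly as quoted in the message), while Grafana's Prometheus data source expects JSON responses from a Prometheus query API, which is why its parser fails on the `#` character. Roughly what the endpoint serves (metric name taken from the error above, label and value are just placeholders):

```
# HELP fluentbit_filter_add_record_total Fluentbit ...
# TYPE fluentbit_filter_add_record_total counter
fluentbit_filter_add_record_total{name="..."} 0
```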

github-actions[bot] commented 3 weeks ago

This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days. Maintainers can add the exempt-stale label.

github-actions[bot] commented 2 weeks ago

This issue was closed because it has been stalled for 5 days with no activity.